C Styling Plots
===============
C.1 Aesthetics
--------------
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
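For example, here is a minimal sketch of the difference (the built-in `mtcars` data is just a convenient stand-in, not part of this chapter's examples):

```
# outline colour as a named colour, interior fill as a hex code,
# and rgb() building a colour from red/green/blue values between 0 and 1
library(ggplot2)

ggplot(mtcars, aes(x = factor(cyl), y = mpg)) +
  geom_boxplot(colour = "darkorchid",
               fill = "#FF8000",
               outlier.colour = rgb(0, 0.5, 1))
```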
The online version of this chapter displays a swatch chart of all of R's named colours; hovering over a swatch shows its name. The same names are returned by `[colours()](https://rdrr.io/r/grDevices/colors.html)`.
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
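A minimal sketch of setting `alpha` inside a geom (assuming {ggplot2} is loaded; `mtcars` is a stand-in dataset):

```
# semi-transparent points make overlapping observations easier to see
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(size = 5, alpha = 0.3)
```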
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
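A minimal sketch (same assumptions as the alpha example above):

```
# shape 17 is a filled triangle; any of the 25 shape values can be used
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(shape = 17, size = 3)
```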
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
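A minimal sketch (same assumptions as above):

```
# draw the regression line dot-dashed instead of solid
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_smooth(method = lm, formula = y ~ x, linetype = "dotdash")
```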
C.2 Palettes
------------
Discrete palettes change depending on the number of categories.
Figure C.4: Default discrete palette with different numbers of levels.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. They work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
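A minimal sketch putting those arguments together (assuming {ggplot2} is loaded; the `mpg` dataset and the option name are just examples):

```
# discrete viridis fill: pick an option, trim the ends, and reverse the order
ggplot(mpg, aes(x = class, fill = class)) +
  geom_bar(show.legend = FALSE) +
  scale_fill_viridis_d(option = "plasma", begin = 0.1, end = 0.9, direction = -1)
```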
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
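A minimal sketch (assuming {ggplot2} is loaded; `faithfuld` is the 2\-d density dataset bundled with ggplot2):

```
# continuous viridis fill mapped to a numeric variable
ggplot(faithfuld, aes(x = waiting, y = eruptions, fill = density)) +
  geom_raster() +
  scale_fill_viridis_c(option = "inferno")
```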
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
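For example, a minimal sketch (assuming {ggplot2} is loaded; the dataset and palette name are just examples):

```
# a qualitative brewer fill scale; add direction = -1 to reverse the colours
ggplot(mpg, aes(x = class, fill = drv)) +
  geom_bar() +
  scale_fill_brewer(palette = "Dark2")
```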
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") data with up to 9 categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
C.3 Themes
----------
`ggplot2` has 8 built\-in themes that you can add to a plot like `plot + theme_bw()` or set as the default theme at the top of your script like `theme_set(theme_bw())`.
Figure C.9: {ggplot2} themes.
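A minimal sketch of both approaches (assuming `g` is a ggplot object, as in the examples below):

```
# apply a built-in theme to a single plot
g + theme_minimal()

# or make it the default for every plot in the rest of the script
theme_set(theme_minimal())
```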
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
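A minimal sketch (the specific theme is just an example; {ggthemes} must be installed, and `g` is a ggplot object as below):

```
library(ggthemes)

# Economist-style theme plus its matching colour scale
g + theme_economist() + scale_colour_economist()
```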
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + theme_bw(base_family = "sans") +
  ggtitle("Sans")
serif <- g + theme_bw(base_family = "serif") +
  ggtitle("Serif")
mono <- g + theme_bw(base_family = "mono") +
  ggtitle("Mono")
font <- g + theme_bw(base_family = "Comic Sans MS") +
  ggtitle("Comic Sans MS")

sans + serif + mono + font + plot_layout(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts in your setup code chunk. You may need to do this for any fonts that you specify.
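A minimal sketch of that mapping, assuming the base\-R `windowsFonts()` approach (swap in whichever font families you actually use):

```
# register installed Windows fonts under the family names used in your plots
# (only needed on Windows; the family here is purely illustrative)
windowsFonts(`Comic Sans MS` = windowsFont("Comic Sans MS"))
```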
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
library(showtext)

# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
font_add(
  regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
  bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
  italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
  bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
  family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
font_add_google(name = "Courgette", family = "courgette")
font_add_google(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
showtext_auto() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + theme(text = element_text(family = "courgette")) +
  ggtitle("Courgette")
b <- g + theme(text = element_text(family = "cartoonist")) +
  ggtitle("Cartoonist Hand")
c <- g + theme(text = element_text(family = "poiret")) +
  ggtitle("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + ggtitle("Cartoonist Hand") +
  theme(
    title = element_text(family = "cartoonist", face = "bold"),
    strip.text = element_text(family = "cartoonist", face = "italic"),
    axis.text = element_text(family = "sans")
  )
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does a lot more than change the position of a legend: it controls a wide variety of plot elements, and you can use it to build your own "theme" for your figures \- say you want a consistent look across your publications or your lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- ggplot(diamonds, aes(x = carat,
                          y = price,
                          color = cut)) +
  facet_wrap(~color, nrow = 2) +
  geom_smooth(method = lm, formula = y ~ x) +
  labs(title = "The relationship between carat and price",
       subtitle = "For each level of color and cut",
       caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`, and set the size and font. Make sure to load any custom fonts.
```
font_add_google(name = "Nunito", family = "Nunito")
showtext_auto() # load the fonts

# set up custom theme to add to all plots
mytheme <- theme_minimal( # always start with a base theme_****
  base_size = 16,         # 16-point font (adjusted for axes)
  base_family = "Nunito"  # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)` or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a box, or a line.
```
# add more specific customisations with theme()
mytheme <- theme_minimal(
  base_size = 16,
  base_family = "Nunito"
) +
  theme(
    plot.background = element_rect(fill = "black"),
    panel.background = element_rect(fill = "grey10",
                                    color = "grey30"),
    text = element_text(color = "white"),
    strip.text = element_text(hjust = 0), # left justify
    strip.background = element_rect(fill = "grey60"),
    axis.text = element_text(color = "grey60"),
    axis.line = element_line(color = "grey60", size = 1),
    panel.grid = element_blank(),
    plot.title = element_text(hjust = 0.5), # center justify
    plot.subtitle = element_text(hjust = 0.5, color = "grey60"),
    plot.caption = element_text(face = "italic")
  )
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
  theme(
    legend.title = element_text(size = 11),
    legend.text = element_text(size = 9),
    legend.key.height = unit(0.2, "inches"),
    legend.position = c(.9, 0.175)
  )
```
Figure C.17: Plot\-specific customising.
C.1 Aesthetics
--------------
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
C.2 Palettes
------------
Discrete palettes change depending on the number of categories.
Figure C.4: Default discrete palette with different numbers of levels.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
C.3 Themes
----------
`ggplot2` has 8 built\-in themes that you can add to a plot like `plot + theme_bw()` or set as the default theme at the top of your script like `theme_set(theme_bw())`.
Figure C.9: {ggplot2} themes.
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "sans") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Sans")
serif <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "serif") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Serif")
mono <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "mono") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Mono")
font <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "Comic Sans MS") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Comic Sans MS")
sans + serif + mono + font + [plot_layout](https://patchwork.data-imaginist.com/reference/plot_layout.html)(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts. In your setup code chunk, add the following code, which should fix the error. You may need to do this for any fonts that you specify.
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
[library](https://rdrr.io/r/base/library.html)([showtext](https://github.com/yixuan/showtext))
# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
[font_add](https://rdrr.io/pkg/sysfonts/man/font_add.html)(
regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Courgette", family = "courgette")
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "courgette")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Courgette")
b <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand")
c <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "poiret")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + [ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand") +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "bold"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "italic"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "sans")
)
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does a lot more than just change the position of a legend and can be used to really control a variety of elements and to eventually create your own "theme" for your figures \- say you want to have a consistent look to your figures across your publications or across your lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(diamonds, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = carat,
y = price,
color = cut)) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~color, nrow = 2) +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = lm, formula = y~x) +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(title = "The relationship between carat and price",
subtitle = "For each level of color and cut",
caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` and set the size and font. Make sure to load any custom fonts.
```
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Nunito", family = "Nunito")
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
# set up custom theme to add to all plots
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)( # always start with a base theme_****
base_size = 16, # 16-point font (adjusted for axes)
base_family = "Nunito" # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)` or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a box, or a line.
```
# add more specific customisations with theme()
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)(
base_size = 16,
base_family = "Nunito"
) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
plot.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "black"),
panel.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey10",
color = "grey30"),
text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "white"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0), # left justify
strip.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey60", ),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60"),
axis.line = [element_line](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60", size = 1),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
plot.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5), # center justify
plot.subtitle = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5, color = "grey60"),
plot.caption = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(face = "italic")
)
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
legend.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 11),
legend.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 9),
legend.key.height = [unit](https://rdrr.io/r/grid/unit.html)(0.2, "inches"),
legend.position = [c](https://rdrr.io/r/base/c.html)(.9, 0.175)
)
```
Figure C.17: Plot\-specific customising.
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "sans") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Sans")
serif <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "serif") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Serif")
mono <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "mono") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Mono")
font <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "Comic Sans MS") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Comic Sans MS")
sans + serif + mono + font + [plot_layout](https://patchwork.data-imaginist.com/reference/plot_layout.html)(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts. In your setup code chunk, add the following code, which should fix the error. You may need to do this for any fonts that you specify.
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
[library](https://rdrr.io/r/base/library.html)([showtext](https://github.com/yixuan/showtext))
# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
[font_add](https://rdrr.io/pkg/sysfonts/man/font_add.html)(
regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Courgette", family = "courgette")
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "courgette")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Courgette")
b <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand")
c <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "poiret")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + [ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand") +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "bold"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "italic"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "sans")
)
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does much more than change the position of a legend: it controls a wide variety of plot elements, so you can use it to build your own "theme" for your figures, for example if you want a consistent look across your publications or lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(diamonds, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = carat,
y = price,
color = cut)) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~color, nrow = 2) +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = lm, formula = y~x) +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(title = "The relationship between carat and price",
subtitle = "For each level of color and cut",
caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)`, and set the base size and font family. Make sure to load any custom fonts first.
```
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Nunito", family = "Nunito")
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
# set up custom theme to add to all plots
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)( # always start with a base theme_****
base_size = 16, # 16-point font (adjusted for axes)
base_family = "Nunito" # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)`, or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a line, or a box.
```
# add more specific customisations with theme()
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)(
base_size = 16,
base_family = "Nunito"
) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
plot.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "black"),
panel.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey10",
color = "grey30"),
text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "white"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0), # left justify
strip.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey60"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60"),
axis.line = [element_line](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60", size = 1),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
plot.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5), # center justify
plot.subtitle = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5, color = "grey60"),
plot.caption = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(face = "italic")
)
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
legend.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 11),
legend.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 9),
legend.key.height = [unit](https://rdrr.io/r/grid/unit.html)(0.2, "inches"),
legend.position = [c](https://rdrr.io/r/base/c.html)(.9, 0.175)
)
```
Figure C.17: Plot\-specific customising.
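If you want your lab theme applied to every plot in a script without adding it to each one by hand, you can set it as the default. This is a minimal sketch using ggplot2's `theme_set()` and `theme_update()`, assuming `mytheme` has been created as above.
```
theme_set(mytheme)   # all subsequent plots use mytheme by default
# tweak the default later without rebuilding the whole theme
theme_update(plot.caption = element_text(face = "plain"))
g   # now drawn with mytheme automatically
```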
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/plotstyle.html |
C Styling Plots
===============
C.1 Aesthetics
--------------
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
C.2 Palettes
------------
Discrete palettes change depending on the number of categories.
Figure C.4: Default discrete palette with different numbers of levels.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
C.3 Themes
----------
`ggplot2` has 8 built\-in themes that you can add to a plot like `plot + theme_bw()` or set as the default theme at the top of your script like `theme_set(theme_bw())`.
Figure C.9: {ggplot2} themes.
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "sans") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Sans")
serif <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "serif") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Serif")
mono <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "mono") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Mono")
font <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "Comic Sans MS") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Comic Sans MS")
sans + serif + mono + font + [plot_layout](https://patchwork.data-imaginist.com/reference/plot_layout.html)(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts. In your setup code chunk, add the following code, which should fix the error. You may need to do this for any fonts that you specify.
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
[library](https://rdrr.io/r/base/library.html)([showtext](https://github.com/yixuan/showtext))
# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
[font_add](https://rdrr.io/pkg/sysfonts/man/font_add.html)(
regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Courgette", family = "courgette")
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "courgette")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Courgette")
b <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand")
c <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "poiret")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + [ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand") +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "bold"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "italic"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "sans")
)
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does a lot more than just change the position of a legend and can be used to really control a variety of elements and to eventually create your own "theme" for your figures \- say you want to have a consistent look to your figures across your publications or across your lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(diamonds, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = carat,
y = price,
color = cut)) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~color, nrow = 2) +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = lm, formula = y~x) +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(title = "The relationship between carat and price",
subtitle = "For each level of color and cut",
caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` and set the size and font. Make sure to load any custom fonts.
```
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Nunito", family = "Nunito")
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
# set up custom theme to add to all plots
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)( # always start with a base theme_****
base_size = 16, # 16-point font (adjusted for axes)
base_family = "Nunito" # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)` or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a box, or a line.
```
# add more specific customisations with theme()
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)(
base_size = 16,
base_family = "Nunito"
) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
plot.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "black"),
panel.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey10",
color = "grey30"),
text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "white"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0), # left justify
strip.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey60", ),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60"),
axis.line = [element_line](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60", size = 1),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
plot.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5), # center justify
plot.subtitle = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5, color = "grey60"),
plot.caption = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(face = "italic")
)
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
legend.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 11),
legend.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 9),
legend.key.height = [unit](https://rdrr.io/r/grid/unit.html)(0.2, "inches"),
legend.position = [c](https://rdrr.io/r/base/c.html)(.9, 0.175)
)
```
Figure C.17: Plot\-specific customising.
C.1 Aesthetics
--------------
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
### C.1\.1 Colour/Fill
The `colour` argument changes the point and line colour, while the `fill` argument changes the interior colour of shapes. Type `[colours()](https://rdrr.io/r/grDevices/colors.html)` into the console to see a list of all the named colours in R. Alternatively, you can use hexadecimal colours like `"#FF8000"` or the `[rgb()](https://rdrr.io/r/grDevices/rgb.html)` function to set red, green, and blue values on a scale from 0 to 1\.
Hover over a colour to see its R name.
* black
* gray1
* gray2
* gray3
* gray4
* gray5
* gray6
* gray7
* gray8
* gray9
* gray10
* gray11
* gray12
* gray13
* gray14
* gray15
* gray16
* gray17
* gray18
* gray19
* gray20
* gray21
* gray22
* gray23
* gray24
* gray25
* gray26
* gray27
* gray28
* gray29
* gray30
* gray31
* gray32
* gray33
* gray34
* gray35
* gray36
* gray37
* gray38
* gray39
* gray40
* dimgray
* gray42
* gray43
* gray44
* gray45
* gray46
* gray47
* gray48
* gray49
* gray50
* gray51
* gray52
* gray53
* gray54
* gray55
* gray56
* gray57
* gray58
* gray59
* gray60
* gray61
* gray62
* gray63
* gray64
* gray65
* darkgray
* gray66
* gray67
* gray68
* gray69
* gray70
* gray71
* gray72
* gray73
* gray74
* gray
* gray75
* gray76
* gray77
* gray78
* gray79
* gray80
* gray81
* gray82
* gray83
* lightgray
* gray84
* gray85
* gainsboro
* gray86
* gray87
* gray88
* gray89
* gray90
* gray91
* gray92
* gray93
* gray94
* gray95
* gray96
* gray97
* gray98
* gray99
* white
* snow4
* snow3
* snow2
* snow
* rosybrown4
* rosybrown
* rosybrown3
* rosybrown2
* rosybrown1
* lightcoral
* indianred
* indianred4
* indianred2
* indianred1
* indianred3
* brown4
* brown
* brown3
* brown2
* brown1
* firebrick4
* firebrick
* firebrick3
* firebrick1
* firebrick2
* darkred
* red3
* red2
* red
* mistyrose3
* mistyrose4
* mistyrose2
* mistyrose
* salmon
* tomato3
* coral4
* coral3
* coral2
* coral1
* tomato2
* tomato
* tomato4
* darksalmon
* salmon4
* salmon3
* salmon2
* salmon1
* coral
* orangered4
* orangered3
* orangered2
* lightsalmon3
* lightsalmon2
* lightsalmon
* lightsalmon4
* sienna
* sienna3
* sienna2
* sienna1
* sienna4
* orangered
* seashell4
* seashell3
* seashell2
* seashell
* chocolate4
* chocolate3
* chocolate
* chocolate2
* chocolate1
* linen
* peachpuff4
* peachpuff3
* peachpuff2
* peachpuff
* sandybrown
* tan4
* peru
* tan2
* tan1
* darkorange4
* darkorange3
* darkorange2
* darkorange1
* antiquewhite3
* antiquewhite2
* antiquewhite1
* bisque4
* bisque3
* bisque2
* bisque
* burlywood4
* burlywood3
* burlywood
* burlywood2
* burlywood1
* darkorange
* antiquewhite4
* antiquewhite
* papayawhip
* blanchedalmond
* navajowhite4
* navajowhite3
* navajowhite2
* navajowhite
* tan
* floralwhite
* oldlace
* wheat4
* wheat3
* wheat2
* wheat
* wheat1
* moccasin
* orange4
* orange3
* orange2
* orange
* goldenrod
* goldenrod1
* goldenrod4
* goldenrod3
* goldenrod2
* darkgoldenrod4
* darkgoldenrod
* darkgoldenrod3
* darkgoldenrod2
* darkgoldenrod1
* cornsilk
* cornsilk4
* cornsilk3
* cornsilk2
* lightgoldenrod4
* lightgoldenrod3
* lightgoldenrod
* lightgoldenrod2
* lightgoldenrod1
* gold4
* gold3
* gold2
* gold
* lemonchiffon4
* lemonchiffon3
* lemonchiffon2
* lemonchiffon
* palegoldenrod
* khaki
* darkkhaki
* khaki4
* khaki3
* khaki2
* khaki1
* ivory4
* ivory3
* ivory2
* ivory
* beige
* lightyellow4
* lightyellow3
* lightyellow2
* lightyellow
* lightgoldenrodyellow
* yellow4
* yellow3
* yellow2
* yellow
* olivedrab
* olivedrab4
* olivedrab3
* olivedrab2
* olivedrab1
* darkolivegreen
* darkolivegreen4
* darkolivegreen3
* darkolivegreen2
* darkolivegreen1
* greenyellow
* chartreuse4
* chartreuse3
* chartreuse2
* lawngreen
* chartreuse
* honeydew4
* honeydew3
* honeydew2
* honeydew
* darkseagreen4
* darkseagreen
* darkseagreen3
* darkseagreen2
* darkseagreen1
* lightgreen
* palegreen
* palegreen4
* palegreen3
* palegreen1
* forestgreen
* limegreen
* darkgreen
* green4
* green3
* green2
* green
* mediumseagreen
* seagreen
* seagreen3
* seagreen2
* seagreen1
* mintcream
* springgreen4
* springgreen3
* springgreen2
* springgreen
* aquamarine3
* aquamarine2
* aquamarine
* mediumspringgreen
* aquamarine4
* turquoise
* mediumturquoise
* lightseagreen
* azure4
* azure3
* azure2
* azure
* lightcyan4
* lightcyan3
* lightcyan2
* lightcyan
* paleturquoise
* paleturquoise4
* paleturquoise3
* paleturquoise2
* paleturquoise1
* darkslategray
* darkslategray4
* darkslategray3
* darkslategray2
* darkslategray1
* cyan4
* cyan3
* darkturquoise
* cyan2
* cyan
* cadetblue4
* cadetblue
* turquoise4
* turquoise3
* turquoise2
* turquoise1
* powderblue
* cadetblue3
* cadetblue2
* cadetblue1
* lightblue4
* lightblue3
* lightblue
* lightblue2
* lightblue1
* deepskyblue4
* deepskyblue3
* deepskyblue2
* deepskyblue
* skyblue
* lightskyblue4
* lightskyblue3
* lightskyblue2
* lightskyblue1
* lightskyblue
* skyblue4
* skyblue3
* skyblue2
* skyblue1
* aliceblue
* slategray
* lightslategray
* slategray3
* slategray2
* slategray1
* steelblue4
* steelblue
* steelblue3
* steelblue2
* steelblue1
* dodgerblue4
* dodgerblue3
* dodgerblue2
* dodgerblue
* lightsteelblue4
* lightsteelblue3
* lightsteelblue
* lightsteelblue2
* lightsteelblue1
* slategray4
* cornflowerblue
* royalblue
* royalblue4
* royalblue3
* royalblue2
* royalblue1
* ghostwhite
* lavender
* midnightblue
* navy
* blue4
* blue3
* blue2
* blue
* darkslateblue
* slateblue
* mediumslateblue
* lightslateblue
* slateblue1
* slateblue4
* slateblue3
* slateblue2
* mediumpurple4
* mediumpurple3
* mediumpurple
* mediumpurple2
* mediumpurple1
* purple4
* purple3
* blueviolet
* purple1
* purple2
* purple
* darkorchid
* darkorchid4
* darkorchid3
* darkorchid2
* darkorchid1
* darkviolet
* mediumorchid4
* mediumorchid3
* mediumorchid
* mediumorchid2
* mediumorchid1
* thistle4
* thistle3
* thistle
* thistle2
* thistle1
* plum4
* plum3
* plum2
* plum1
* plum
* violet
* darkmagenta
* magenta3
* magenta2
* magenta
* orchid4
* orchid3
* orchid
* orchid2
* orchid1
* maroon4
* violetred
* maroon3
* maroon2
* maroon1
* mediumvioletred
* deeppink3
* deeppink2
* deeppink
* deeppink4
* hotpink2
* hotpink1
* hotpink4
* hotpink
* violetred4
* violetred3
* violetred2
* violetred1
* hotpink3
* lavenderblush4
* lavenderblush3
* lavenderblush2
* lavenderblush
* maroon
* palevioletred4
* palevioletred3
* palevioletred
* palevioletred2
* palevioletred1
* pink4
* pink3
* pink2
* pink1
* pink
* lightpink
* lightpink4
* lightpink3
* lightpink2
* lightpink1
### C.1\.2 Alpha
The `alpha` argument changes transparency (0 \= totally transparent, 1 \= totally opaque).
Figure C.1: Varying alpha values.
### C.1\.3 Shape
The `shape` argument changes the shape of points.
Figure C.2: The 25 shape values
### C.1\.4 Linetype
You can probably guess what the `linetype` argument does.
Figure C.3: The 6 linetype values at different sizes.
C.2 Palettes
------------
Discrete palettes change depending on the number of categories.
Figure C.4: Default discrete palette with different numbers of levels.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
### C.2\.1 Viridis Palettes
Viridis palettes are very good for colourblind\-safe and greyscale\-safe plots. The work with any number of categories, but are best for larger numbers of categories or continuous colours.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
#### C.2\.1\.1 Discrete Viridis Palettes
Set [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") viridis colours with `[scale_colour_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_d()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure C.5: Discrete viridis palettes.
If the end colour is too light for your plot or the start colour too dark, you can set the `begin` and `end` arguments to values between 0 and 1, such as `scale_colour_viridis_c(begin = 0.1, end = 0.9)`.
#### C.2\.1\.2 Continuous Viridis Palettes
Set [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") viridis colours with `[scale_colour_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` or `[scale_fill_viridis_c()](https://ggplot2.tidyverse.org/reference/scale_viridis.html)` and set the `option` argument to one of the options below. Set `direction = -1` to reverse the order of colours.
Figure 3\.7: Continuous viridis palettes.
### C.2\.2 Brewer Palettes
Brewer palettes give you a lot of control over plot colour and fill. You set them with `[scale_color_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` or `[scale_fill_brewer()](https://ggplot2.tidyverse.org/reference/scale_brewer.html)` and set the `palette` argument to one of the palettes below. Set `direction = -1` to reverse the order of colours.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
#### C.2\.2\.1 Qualitative Brewer Palettes
These palettes are good for [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") data with up to 8 categories (some palettes can handle up to 12\). The "Paired" palette is useful if your categories are arranged in pairs.
Figure C.6: Qualitative brewer palettes.
#### C.2\.2\.2 Sequential Brewer Palettes
These palettes are good for up to 9 [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with a lot of categories.
Figure C.7: Sequential brewer palettes.
#### C.2\.2\.3 Diverging Brewer Palettes
These palettes are good for [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs") categories with up to 11 levels where the centre level is a neutral or baseline category and the levels above and below it differ in an important way, such as agree versus disagree options.
Figure C.8: Diverging brewer palettes.
C.3 Themes
----------
`ggplot2` has 8 built\-in themes that you can add to a plot like `plot + theme_bw()` or set as the default theme at the top of your script like `theme_set(theme_bw())`.
Figure C.9: {ggplot2} themes.
### C.3\.1 ggthemes
You can get more themes from add\-on packages, like `[ggthemes](https://yutannihilation.github.io/allYourFigureAreBelongToUs/ggthemes/)`. Most of the themes also have custom `scale_` functions like `scale_colour_economist()`. Their website has extensive examples and instructions for alternate or dark versions of these themes.
Figure C.10: {ggthemes} themes.
### C.3\.2 Fonts
You can customise the fonts used in themes. All computers should be able to recognise the families "sans", "serif", and "mono", and some computers will be able to access other installed fonts by name.
```
sans <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "sans") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Sans")
serif <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "serif") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Serif")
mono <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "mono") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Mono")
font <- g + [theme_bw](https://ggplot2.tidyverse.org/reference/ggtheme.html)(base_family = "Comic Sans MS") +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Comic Sans MS")
sans + serif + mono + font + [plot_layout](https://patchwork.data-imaginist.com/reference/plot_layout.html)(nrow = 1)
```
Figure C.11: Different fonts.
If you are working on a Windows machine and get the error "font family not found in Windows font database", you may need to explicitly map the fonts. In your setup code chunk, add the following code, which should fix the error. You may need to do this for any fonts that you specify.
The `showtext` package is a flexible way to add fonts.
If you have a .ttf file from a font site, like [Font Squirrel](https://www.fontsquirrel.com), you can load the file directly using `[font_add()](https://rdrr.io/pkg/sysfonts/man/font_add.html)`. Set `regular` as the path to the file for the regular version of the font, and optionally add other versions. Set the `family` to the name you want to use for the font. You will need to include any local font files if you are sharing your script with others.
```
[library](https://rdrr.io/r/base/library.html)([showtext](https://github.com/yixuan/showtext))
# font from https://www.fontsquirrel.com/fonts/SF-Cartoonist-Hand
[font_add](https://rdrr.io/pkg/sysfonts/man/font_add.html)(
regular = "fonts/cartoonist/SF_Cartoonist_Hand.ttf",
bold = "fonts/cartoonist/SF_Cartoonist_Hand_Bold.ttf",
italic = "fonts/cartoonist/SF_Cartoonist_Hand_Italic.ttf",
bolditalic = "fonts/cartoonist/SF_Cartoonist_Hand_Bold_Italic.ttf",
family = "cartoonist"
)
```
To download fonts directly from [Google fonts](https://fonts.google.com/), use the function `[font_add_google()](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)`, set the `name` to the exact name from the site, and the `family` to the name you want to use for the font.
```
# download fonts from Google
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Courgette", family = "courgette")
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Poiret One", family = "poiret")
```
After you've added fonts from local files or Google, you need to make them available to R using `[showtext_auto()](https://rdrr.io/pkg/showtext/man/showtext_auto.html)`. You will have to do these steps in each script where you want to use the custom fonts.
```
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
```
To change the fonts used overall in a plot, use the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function and set `text` to `element_text(family = "new_font_family")`.
```
a <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "courgette")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Courgette")
b <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand")
c <- g + [theme](https://ggplot2.tidyverse.org/reference/theme.html)(text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "poiret")) +
[ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Poiret One")
a + b + c
```
Figure C.12: Custom Fonts.
To set the fonts for individual elements in the plot, you need to find the specific argument for that element. You can use the argument `face` to choose "bold", "italic", or "bolditalic" versions, if they are available.
```
g + [ggtitle](https://ggplot2.tidyverse.org/reference/labs.html)("Cartoonist Hand") +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "bold"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "cartoonist", face = "italic"),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(family = "sans")
)
```
Figure C.13: Multiple custom fonts on the same plot.
### C.3\.3 Setting A Lab Theme using `theme()`
The `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function, as we mentioned, does a lot more than just change the position of a legend and can be used to really control a variety of elements and to eventually create your own "theme" for your figures \- say you want to have a consistent look to your figures across your publications or across your lab posters.
First, we'll create a basic plot to demonstrate the changes.
```
g <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(diamonds, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = carat,
y = price,
color = cut)) +
[facet_wrap](https://ggplot2.tidyverse.org/reference/facet_wrap.html)(~color, nrow = 2) +
[geom_smooth](https://ggplot2.tidyverse.org/reference/geom_smooth.html)(method = lm, formula = y~x) +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(title = "The relationship between carat and price",
subtitle = "For each level of color and cut",
caption = "Data from ggplot2::diamonds")
g
```
Figure C.14: Basic plot in default theme
Always start with a base theme, like `[theme_minimal()](https://ggplot2.tidyverse.org/reference/ggtheme.html)` and set the size and font. Make sure to load any custom fonts.
```
[font_add_google](https://rdrr.io/pkg/sysfonts/man/font_add_google.html)(name = "Nunito", family = "Nunito")
[showtext_auto](https://rdrr.io/pkg/showtext/man/showtext_auto.html)() # load the fonts
# set up custom theme to add to all plots
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)( # always start with a base theme_****
base_size = 16, # 16-point font (adjusted for axes)
base_family = "Nunito" # custom font family
)
```
```
g + mytheme
```
Figure C.15: Basic customised theme
Now add specific theme customisations. See `[?theme](https://ggplot2.tidyverse.org/reference/theme.html)` for detailed explanations. Most theme arguments take a value of `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` to remove the feature entirely, or `[element_text()](https://ggplot2.tidyverse.org/reference/element.html)`, `[element_line()](https://ggplot2.tidyverse.org/reference/element.html)` or `[element_rect()](https://ggplot2.tidyverse.org/reference/element.html)`, depending on whether the feature is text, a box, or a line.
```
# add more specific customisations with theme()
mytheme <- [theme_minimal](https://ggplot2.tidyverse.org/reference/ggtheme.html)(
base_size = 16,
base_family = "Nunito"
) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
plot.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "black"),
panel.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey10",
color = "grey30"),
text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "white"),
strip.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0), # left justify
strip.background = [element_rect](https://ggplot2.tidyverse.org/reference/element.html)(fill = "grey60", ),
axis.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60"),
axis.line = [element_line](https://ggplot2.tidyverse.org/reference/element.html)(color = "grey60", size = 1),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
plot.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5), # center justify
plot.subtitle = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(hjust = 0.5, color = "grey60"),
plot.caption = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(face = "italic")
)
```
```
g + mytheme
```
Figure C.16: Further customised theme
You can still add further theme customisation for specific plots.
```
g + mytheme +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
legend.title = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 11),
legend.text = [element_text](https://ggplot2.tidyverse.org/reference/element.html)(size = 9),
legend.key.height = [unit](https://rdrr.io/r/grid/unit.html)(0.2, "inches"),
legend.position = [c](https://rdrr.io/r/base/c.html)(.9, 0.175)
)
```
Figure C.17: Plot\-specific customising.
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/advanced-plots-1.html |
D Advanced Plots
================
This appendix provides some examples of more complex plots. See Lisa's tutorials for the 2022 [30\-day chart challenge](https://debruine.github.io/30DCC-2022/) for even more plots.
D.1 Easter Egg \- Overlaying Plots
----------------------------------
Hopefully from some of the materials we have shown you, you will have found ways of presenting data in an informative manner - for example, we have shown violin plots and how they can be effective, when combined with boxplots, at displaying distributions. However, if you are familiar with other software you may be used to seeing this sort of information displayed differently, perhaps as a histogram with a normal curve overlaid. Whilst the violin plots are better at conveying that information, we thought it might help to see alternative approaches here. Really it is about overlaying some of the plots we have already shown, but with some slight adjustments. For example, let's look at the histogram and density plot of reaction times we saw earlier - shown here side by side for convenience.
```
a <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(subtitle = "+ geom_histogram()")
b <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(subtitle = "+ geom_density()")
a+b
```
Now that in itself is fairly informative, but it perhaps takes up a lot of room, so one option, using some of the features of the patchwork library, would be to inset the density plot in the top right of the histogram. We already showed a little of patchwork earlier so we won't repeat it here, but all we are doing is placing one of the figures (the density plot) within the `[inset_element()](https://patchwork.data-imaginist.com/reference/inset_element.html)` function and applying some appropriate values to position the inset - found through a little trial and error - based on the bottom left corner of the plot area being `left = 0`, `bottom = 0`, and the top right corner being `right = 1`, `top = 1`:
```
a <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
b <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
a + [inset_element](https://patchwork.data-imaginist.com/reference/inset_element.html)(b, left = 0.6, bottom = 0.6, right = 1, top = 1)
```
Figure D.1: Insetting a plot within a plot using `[inset_element()](https://patchwork.data-imaginist.com/reference/inset_element.html)` from the patchwork library
But of course that only works if there is space for the inset and it doesn't start overlapping on the main figure. This next approach fully overlays the density plot on top of the histogram. There is one main change though, and that is the addition of `aes(y = ..density..)` within `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. This tells the histogram to be plotted in terms of density rather than count, meaning that the density plot and the histogram are now based on the same y-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = ..density..),
binwidth = 10, fill = "white", color = "black") +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
```
Figure D.2: A histogram with density plot overlaid
The main thing to note in the above figure is that both the histogram and the density plot are based on the data you have collected. An alternative that you might want to look at is plotting a normal distribution on top of the histogram, based on the mean and standard deviation of the data. This is a bit more complicated but works as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = ..density..),
binwidth = 10, fill = "white", color = "black") +
[stat_function](https://ggplot2.tidyverse.org/reference/geom_function.html)(
fun = dnorm,
args = [list](https://rdrr.io/r/base/list.html)(mean = [mean](https://rdrr.io/r/base/mean.html)(dat_long$rt),
sd = [sd](https://rdrr.io/r/stats/sd.html)(dat_long$rt))
)
```
Figure D.3: A histogram with normal distribution based on the data overlaid
The first part of this approach is identical to what we saw above, but instead of using `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` we are using a statistics function called `[stat_function()](https://ggplot2.tidyverse.org/reference/geom_function.html)`, similar to ones we saw earlier when plotting means and standard deviations. What `[stat_function()](https://ggplot2.tidyverse.org/reference/geom_function.html)` is doing is taking the Normal distribution density function, `fun = dnorm` (read as "function equals density normal"), along with the mean of the data (`mean = mean(dat_long$rt)`) and the standard deviation of the data (`sd = sd(dat_long$rt)`), and creating a distribution based on those values. The `args` argument refers to the arguments that the `dnorm` function takes, and they are passed to the function as a list (`[list()](https://rdrr.io/r/base/list.html)`). From there, you can then start to alter the `linetype`, `color`, and thickness (`lwd = 3`, for example) as you please.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = ..density..),
binwidth = 10, fill = "white", color = "black") +
[stat_function](https://ggplot2.tidyverse.org/reference/geom_function.html)(
fun = dnorm,
args = [list](https://rdrr.io/r/base/list.html)(mean = [mean](https://rdrr.io/r/base/mean.html)(dat_long$rt),
sd = [sd](https://rdrr.io/r/stats/sd.html)(dat_long$rt)),
color = "red",
lwd = 3,
linetype = 2
)
```
Figure D.4: Changing the line of the `[stat_function()](https://ggplot2.tidyverse.org/reference/geom_function.html)`
D.2 Easter Egg \- A Dumbbell Plot
---------------------------------
A nice way of representing a change across different conditions, within participants or across timepoints, is the dumbbell chart. These figures can do a lot of heavy lifting in conveying patterns within the data and are not as hard to create in ggplot as they might first appear. The premise is that you need the start point, in terms of x (`x =`) and y (`y =`), and the end point, again in terms of x (`xend =`) and y (`yend =`). You draw a line between those two points using `[geom_segment()](https://ggplot2.tidyverse.org/reference/geom_segment.html)` and then add a data point at both ends of the line using `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`. So, for example, we will use the average accuracy scores for the word and non-word conditions, for monolingual and bilingual participants, to demonstrate. We could do the same figure for all participants, but as we have 100 participants it can be a bit **wild**. We first need to create the averages using a little bit of data wrangling we have seen:
```
dat_avg <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://r-spatial.github.io/sf/reference/tidyverse.html)(mean_acc_nonword = [mean](https://rdrr.io/r/base/mean.html)(acc_nonword),
mean_acc_word = [mean](https://rdrr.io/r/base/mean.html)(acc_word)) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
So our data looks as follows:
| language | mean\_acc\_nonword | mean\_acc\_word |
| --- | --- | --- |
| monolingual | 84\.87273 | 94\.87273 |
| bilingual | 84\.93333 | 95\.17778 |
With our average accuracies for non-word trials in **mean\_acc\_nonword** and our average accuracies for word trials in **mean\_acc\_word**, we can now create our dumbbell plot as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_avg) +
[geom_segment](https://ggplot2.tidyverse.org/reference/geom_segment.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = mean_acc_nonword, y = language,
xend = mean_acc_word, yend = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = mean_acc_nonword, y = language), color = "red") +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = mean_acc_word, y = language), color = "blue") +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Change in Accuracy")
```
Figure D.5: A dumbbell plot of change in Average Accuracy from Non\-word trials (red dots) to Word trials (blue dots) for monolingual and bilingual participants.
This actually gives the least exciting figure ever, as both groups showed the same change from the non-word trials (red dots) to the word trials (blue dots), but we can break the code down a bit just to highlight what we are doing, remembering the idea about layers. Layers one and two add the basic background and the black line from the start point (x, y), the mean accuracy of non-word trials for the two language groups, to the end point (xend, yend), the mean accuracy of word trials for the two language groups:
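For reference, a minimal reconstruction of those first two layers, taken from the full plot code above:
```
ggplot(dat_avg) +
  geom_segment(aes(x = mean_acc_nonword, y = language,
                   xend = mean_acc_word, yend = language))
```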
Figure D.6: Building the bars of our dumbbells. The (x,y) and (xend, yend) have been added to show the values you need to consider and enter to create the dumbbell
and the remaining lines add the dots at the ends of the dumbbells and change the x-axis label to something useful:
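Again taken from the full code above, with comments marking what each remaining layer adds:
```
ggplot(dat_avg) +
  geom_segment(aes(x = mean_acc_nonword, y = language,
                   xend = mean_acc_word, yend = language)) +
  geom_point(aes(x = mean_acc_nonword, y = language), color = "red") +  # non-word end
  geom_point(aes(x = mean_acc_word, y = language), color = "blue") +    # word end
  labs(x = "Change in Accuracy")                                        # clearer x-axis label
```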
Figure D.7: Adding the weights to the dumbbells. Red dots are added in one layer to show Average Accuracy of Non-word trials, and blue dots are added in a final layer to show Average Accuracy of Word trials.
Of course, it is worth remembering that it is better to always think of the dumbbell as having a start point and an end point, not a left and a right, as had accuracy gone down when moving from non-word trials to word trials then our bars would run in the opposite direction. If you repeat the above process using reaction times instead of accuracy you will see what we mean.
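As a quick sketch of that exercise - assuming the wide data also contains `rt_nonword` and `rt_word` columns, mirroring the accuracy columns used above - the reaction-time version would look something like this:
```
# rt_nonword and rt_word are assumed column names in the wide data
dat_rt_avg <- dat %>%
  group_by(language) %>%
  summarise(mean_rt_nonword = mean(rt_nonword),
            mean_rt_word = mean(rt_word)) %>%
  ungroup()

ggplot(dat_rt_avg) +
  geom_segment(aes(x = mean_rt_nonword, y = language,
                   xend = mean_rt_word, yend = language)) +
  geom_point(aes(x = mean_rt_nonword, y = language), color = "red") +
  geom_point(aes(x = mean_rt_word, y = language), color = "blue") +
  labs(x = "Change in Reaction Time (ms)")
```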
D.3 Easter Egg \- A Pie Chart
-----------------------------
Pie charts are not the best form of visualisation as they generally require people to compare areas and/or angles, which is a fairly unintuitive means of doing a comparison. They are so disliked in many fields that ggplot does not actually have a `geom_...()` function to create one. But there is always somebody who wants to create a pie chart regardless, and who are we to judge. So here is the code to produce a pie chart of the demographic data we saw at the start of the paper:
```
count_dat <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100))
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(count_dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = "",
y = percent,
fill = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(width = 1, stat="identity") +
[coord_polar](https://ggplot2.tidyverse.org/reference/coord_polar.html)("y", start = 0) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
axis.title = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
panel.border = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
axis.ticks = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
axis.text.x = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)()
) +
[geom_text](https://ggplot2.tidyverse.org/reference/geom_text.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = [c](https://rdrr.io/r/base/c.html)(75, 25),
label = [paste](https://rdrr.io/r/base/paste.html)(percent, "%")),
size = 6)
```
Figure D.8: A pie chart of the demographics
Note that this is effectively creating a stacked bar chart with no x variable (i.e. `x = ""`) and then wrapping the y-axis into a circle (i.e. `coord_polar("y", start = 0)`). That is what the first three lines of the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` code do:
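For reference, those lines on their own (taken from the full chunk above) are:
```
ggplot(count_dat, aes(x = "",
                      y = percent,
                      fill = language)) +
  geom_bar(width = 1, stat = "identity") +
  coord_polar("y", start = 0)
```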
Figure D.9: The basis of a pie chart
The remainder of the code is used to remove the various panel and tick lines, and text, setting them all to `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` through the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` functions we saw above, and to add new labelling text on top of the pie chart at specific y\-values (i.e. `y = c(75,25)`). But remember, **friends don't let friends make pie charts!**
D.4 Easter Egg \- A Lollipop Plot
---------------------------------
Lollipop plots are a sweet alternative to pie charts for representing relative counts. They're a combination of `[geom_linerange()](https://ggplot2.tidyverse.org/reference/geom_linerange.html)` and `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`. Use `[coord_flip()](https://ggplot2.tidyverse.org/reference/coord_flip.html)` to make them horizontal.
```
pets <- [c](https://rdrr.io/r/base/c.html)("cats", "dogs", "ferrets", "fish", "hamsters", "snakes")
prob <- [c](https://rdrr.io/r/base/c.html)(50, 50, 20, 30, 20, 15)
[tibble](https://r-spatial.github.io/sf/reference/tibble.html)(pet = [sample](https://rdrr.io/r/base/sample.html)(pets, 500, TRUE, prob) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [factor](https://rdrr.io/r/base/factor.html)([rev](https://rdrr.io/r/base/rev.html)(pets))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)(pet) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = pet)) +
[geom_linerange](https://ggplot2.tidyverse.org/reference/geom_linerange.html)(mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(ymin = 0, ymax = n),
size = 2) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)(mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = n, colour = pet),
shape = 21,
size = 8,
stroke = 4,
fill = "white",
show.legend = FALSE) +
[geom_text](https://ggplot2.tidyverse.org/reference/geom_text.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(label = pet),
y = 1, hjust = 0, size = 6,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = 0.3)) +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(labels = NULL) +
[scale_colour_viridis_d](https://ggplot2.tidyverse.org/reference/scale_viridis.html)() +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "", y = "") +
[coord_flip](https://ggplot2.tidyverse.org/reference/coord_flip.html)(ylim = [c](https://rdrr.io/r/base/c.html)(0, 200)) +
[theme_light](https://ggplot2.tidyverse.org/reference/ggtheme.html)() + # add the complete theme first ...
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(axis.ticks.y = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)()) # ... so this tick tweak is not overridden by it
```
Figure D.10: A lollipop plot showing the number of different types of pets.
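One small note on the chunk above: because the counts come from `sample()`, the figure will change slightly every time it is run. If you want a reproducible version, a minimal sketch (assuming the tidyverse is already loaded, as elsewhere in this appendix) is to fix the random seed before sampling; the seed value itself is arbitrary:
```
set.seed(826) # any fixed seed makes sample() return the same counts each run
pets <- c("cats", "dogs", "ferrets", "fish", "hamsters", "snakes")
prob <- c(50, 50, 20, 30, 20, 15)
tibble(pet = sample(pets, 500, TRUE, prob) %>% factor(rev(pets))) %>%
  count(pet)
```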
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/advanced-plots-1.html |
D Advanced Plots
================
This appendix provides some examples of more complex plots. See Lisa's tutorials for the 2022 [30\-day chart challenge](https://debruine.github.io/30DCC-2022/) for even more plots.
D.1 Easter Egg \- Overlaying Plots
----------------------------------
Hopefully from some of the materials we have shown you, you will have found ways of presenting data in an informative manner \- for example, we have shown violin plots and how they can be effective, when combined with boxplots, at displaying distributions. However, if you are familiar with other software you may be used to seeing this sort of information displayed differently, as perhaps a histogram with a normal curve overlaid. Whist the violin plots are better to convey that information we thought it might help to see alternative approaches here. Really it is about overlaying some of the plots we have already shown, but with some slight adjustments. For example, lets look at the histogram and density plot of reaction times we saw earlier \- shown here side by side for convenience.
```
a <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(subtitle = "+ geom_histogram()")
b <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)") +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(subtitle = "+ geom_density()")
a+b
```
Now that in itself is fairly informative but perhaps takes up a lot of room so one option using some of the features of the patchwork library would be to inset the density plot in the top right of the histogram. We already showed a little of patchwork earlier so we won't repeat it here but all we are doing is placing one of the figures (the density plot) within the `[inset_element()](https://patchwork.data-imaginist.com/reference/inset_element.html)` function and applying some appropriate values to position the inset \- through a little trial and error \- based on the bottom left corner of the plot area being `left = 0`, `bottom = 0`, and the top right corner being `right = 1`, `top = 1`:
```
a <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)(binwidth = 10, fill = "white", color = "black") +
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
b <- [ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
a + [inset_element](https://patchwork.data-imaginist.com/reference/inset_element.html)(b, left = 0.6, bottom = 0.6, right = 1, top = 1)
```
Figure D.1: Insetting a plot within a plot using `[inset_element()](https://patchwork.data-imaginist.com/reference/inset_element.html)` from the patchwork library
But of course that only works if there is space for the inset and it doesn't start overlapping on the main figure. This next approach fully overlays the density plot on top of the histogram. There is one main change though and that is the addition of `aes(y=..density..)` within `[geom_histogram()](https://ggplot2.tidyverse.org/reference/geom_histogram.html)`. This tells the histogram to now be plotted in terms of density and not count, meaning that the density plot and the histogram and now based on the same y\-axis:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = ..density..),
binwidth = 10, fill = "white", color = "black") +
[geom_density](https://ggplot2.tidyverse.org/reference/geom_density.html)()+
[scale_x_continuous](https://ggplot2.tidyverse.org/reference/scale_continuous.html)(name = "Reaction time (ms)")
```
Figure D.2: A histogram with density plot overlaid
The main thing to not in the above figure is that both the histogram and the density plot are based on the data you have collected. An alternative that you might want to look at it is plotting a normal distribution on top of the histogram based on the mean and standard deviation of the data. This is a bit more complicated but works as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = ..density..),
binwidth = 10, fill = "white", color = "black") +
[stat_function](https://ggplot2.tidyverse.org/reference/geom_function.html)(
fun = dnorm,
args = [list](https://rdrr.io/r/base/list.html)(mean = [mean](https://rdrr.io/r/base/mean.html)(dat_long$rt),
sd = [sd](https://rdrr.io/r/stats/sd.html)(dat_long$rt))
)
```
Figure D.3: A histogram with normal distribution based on the data overlaid
The first part of this approach is identical to what we say above but instead of using `[geom_density()](https://ggplot2.tidyverse.org/reference/geom_density.html)` we are using a statistics function called `[stat_function()](https://ggplot2.tidyverse.org/reference/geom_function.html)` similar to ones we saw earlier when plotting means and standard deviations. What `[stat_function()](https://ggplot2.tidyverse.org/reference/geom_function.html)` is doing is taking the Normal distribution density function, `fun = dnorm` (read as function equals density normal), and then the mean of the data (`mean = mean(dat_long$rt)`) and the standard deviation of the data `sd = sd(dat_long$rt)` and creates a distribution based on those values. The `args` refers to the arguments that the `dnorm` function takes, and they are passed to the function as a list (`[list()](https://rdrr.io/r/base/list.html)`). But from there, you can then start to alter the `linetype`, `color`, and thickness (`lwd = 3` for example) as you please.
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_long, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(rt)) +
[geom_histogram](https://ggplot2.tidyverse.org/reference/geom_histogram.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = ..density..),
binwidth = 10, fill = "white", color = "black") +
[stat_function](https://ggplot2.tidyverse.org/reference/geom_function.html)(
fun = dnorm,
args = [list](https://rdrr.io/r/base/list.html)(mean = [mean](https://rdrr.io/r/base/mean.html)(dat_long$rt),
sd = [sd](https://rdrr.io/r/stats/sd.html)(dat_long$rt)),
color = "red",
lwd = 3,
linetype = 2
)
```
Figure D.4: Changing the line of the `[stat_function()](https://ggplot2.tidyverse.org/reference/geom_function.html)`
D.2 Easter Egg \- A Dumbbell Plot
---------------------------------
A nice way of representing a change across different conditions, within participants or across timepoints, is the dumbbell chart. These figures can do a lot of heavy lifting in conveying patterns within the data and are not as hard to create in ggplot as they might first appear. The premise is that you need the start point, in terms of x (`x =`) and y (`y =`), and the end point, again in terms of x (`xend =`) and y (`yend =`). You draw a line between those two points using `[geom_segment()](https://ggplot2.tidyverse.org/reference/geom_segment.html)` and then add a data point at the both ends of the line using `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`. So for example, we will use the average accuracy scores for the word and non\-word conditions, for monolingual and bilinguals, to demonstrate. We could do the same figure for all participants but as we have 100 participants it can be a bit **wild**. We first need to create the averages using a little bit of data wrangling we have seen:
```
dat_avg <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[summarise](https://r-spatial.github.io/sf/reference/tidyverse.html)(mean_acc_nonword = [mean](https://rdrr.io/r/base/mean.html)(acc_nonword),
mean_acc_word = [mean](https://rdrr.io/r/base/mean.html)(acc_word)) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)()
```
So our data looks as follows:
| language | mean\_acc\_nonword | mean\_acc\_word |
| --- | --- | --- |
| monolingual | 84\.87273 | 94\.87273 |
| bilingual | 84\.93333 | 95\.17778 |
With our average accuracies for non\-word trials in \*\*
| nom |
| --- |
| mean\_acc\_nonword |
\*\* and our average accuracies for word trials in \*\*
| nom |
| --- |
| mean\_acc\_word |
\*\*. And now we can create our dumbbell plot as follows:
```
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(dat_avg) +
[geom_segment](https://ggplot2.tidyverse.org/reference/geom_segment.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = mean_acc_nonword, y = language,
xend = mean_acc_word, yend = language)) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = mean_acc_nonword, y = language), color = "red") +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = mean_acc_word, y = language), color = "blue") +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "Change in Accuracy")
```
Figure D.5: A dumbbell plot of change in Average Accuracy from Non\-word trials (red dots) to Word trials (blue dots) for monolingual and bilingual participants.
Which actually gives the least exciting figure ever as both groups showed the same change from the non\-word trials (red dots) to the word trials (blue dots) but we can break the code down a bit just to highlight what we are doing, remembering the idea about layers. Layers one and two add the basic background and black line from the start point (x,y), the mean accuracy of non\-word trials for the two conditions, to the end point (xend, yend), the mean accuracy of word trials for the two conditions:
Figure D.6: Building the bars of our dumbbells. The (x,y) and (xend, yend) have been added to show the values you need to consider and enter to create the dumbbell
and the remaining lines add the dots at the end of the dumbells and changes the x axis label to something useful:
Figure D.7: Adding the weights to the dumbbells. Red dots are added in one layer to show Average Accuracy of Non\-word trials, and blue dots are added in final layer to show Average Accuracy of Word trials.
Of course, worth remembering, it is better to always think of the dumbbell as a start and end point, not left and right, as had accuracy gone down when moving from Non\-word trials to Word trials then our bars would run the opposite direction. If you repeat the above process using reaction times instead of accuracy you will see what we mean.
D.3 Easter Egg \- A Pie Chart
-----------------------------
Pie Charts are not the best form of visualisation as they generally require people to compare areas and/or angles which is a fairly unintuitive means of doing a comparison. They are so disliked in many fields that ggplot does not actually have a `geom_...()` function to create one. But, there is always somebody that wants to create a pie chart regardless and who are we to judge. So here would be the code to produce a pie chart of the demographic data we saw in the start of the paper:
```
count_dat <- dat [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[group_by](https://dplyr.tidyverse.org/reference/group_by.html)(language) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ungroup](https://dplyr.tidyverse.org/reference/group_by.html)() [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[mutate](https://dplyr.tidyverse.org/reference/mutate.html)(percent = (n/[sum](https://rdrr.io/r/base/sum.html)(n)*100))
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)(count_dat, [aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = "",
y = percent,
fill = language)) +
[geom_bar](https://ggplot2.tidyverse.org/reference/geom_bar.html)(width = 1, stat="identity") +
[coord_polar](https://ggplot2.tidyverse.org/reference/coord_polar.html)("y", start = 0) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(
axis.title = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
panel.grid = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
panel.border = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
axis.ticks = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)(),
axis.text.x = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)()
) +
[geom_text](https://ggplot2.tidyverse.org/reference/geom_text.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = [c](https://rdrr.io/r/base/c.html)(75, 25),
label = [paste](https://rdrr.io/r/base/paste.html)(percent, "%")),
size = 6)
```
Figure D.8: A pie chart of the demographics
Note that this is effectively creating a stacked bar chart with no x variable (i.e. `x = ""`) and then wrapping the y\-axis into a circle (i.e. `coord_polar("y", start = 0)`). That is what the first three lines of the `[ggplot()](https://ggplot2.tidyverse.org/reference/ggplot.html)` code do:
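Pulled out on their own (a minimal sketch taken from the code above), those three lines are simply:
```
# A stacked bar with no x variable, wrapped into a circle
ggplot(count_dat, aes(x = "", y = percent, fill = language)) +
  geom_bar(width = 1, stat = "identity") +
  coord_polar("y", start = 0)
```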
Figure D.9: The basis of a pie chart
The remainder of the code removes the various panel lines, tick marks, and axis text, setting them all to `[element_blank()](https://ggplot2.tidyverse.org/reference/element.html)` through the `[theme()](https://ggplot2.tidyverse.org/reference/theme.html)` function we saw above, and then adds new text labels on top of the pie chart at specific y\-values (i.e. `y = c(75, 25)`). But remember, **friends don't let friends make pie charts!**
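That said, if you do insist on a pie chart, you do not have to work out the label positions by hand. One alternative, not part of the original code above, is to let ggplot2's `position_stack(vjust = 0.5)` centre each label within its slice and to use `theme_void()` in place of the individual `element_blank()` calls; treat this as a sketch rather than a drop\-in replacement:
```
# Alternative labelling sketch: label positions computed by position_stack()
ggplot(count_dat, aes(x = "", y = percent, fill = language)) +
  geom_bar(width = 1, stat = "identity") +
  coord_polar("y", start = 0) +
  geom_text(aes(label = paste(round(percent, 1), "%")),
            position = position_stack(vjust = 0.5),
            size = 6) +
  theme_void()
```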
D.4 Easter Egg \- A Lollipop Plot
---------------------------------
Lollipop plots are a sweet alternative to pie charts for representing relative counts. They're a combination of `[geom_linerange()](https://ggplot2.tidyverse.org/reference/geom_linerange.html)` and `[geom_point()](https://ggplot2.tidyverse.org/reference/geom_point.html)`. Use `[coord_flip()](https://ggplot2.tidyverse.org/reference/coord_flip.html)` to make them horizontal.
```
pets <- [c](https://rdrr.io/r/base/c.html)("cats", "dogs", "ferrets", "fish", "hamsters", "snakes")
prob <- [c](https://rdrr.io/r/base/c.html)(50, 50, 20, 30, 20, 15)
[tibble](https://r-spatial.github.io/sf/reference/tibble.html)(pet = [sample](https://rdrr.io/r/base/sample.html)(pets, 500, TRUE, prob) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [factor](https://rdrr.io/r/base/factor.html)([rev](https://rdrr.io/r/base/rev.html)(pets))) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[count](https://dplyr.tidyverse.org/reference/count.html)(pet) [%>%](https://magrittr.tidyverse.org/reference/pipe.html)
[ggplot](https://ggplot2.tidyverse.org/reference/ggplot.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(x = pet)) +
[geom_linerange](https://ggplot2.tidyverse.org/reference/geom_linerange.html)(mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(ymin = 0, ymax = n),
size = 2) +
[geom_point](https://ggplot2.tidyverse.org/reference/geom_point.html)(mapping = [aes](https://ggplot2.tidyverse.org/reference/aes.html)(y = n, colour = pet),
shape = 21,
size = 8,
stroke = 4,
fill = "white",
show.legend = FALSE) +
[geom_text](https://ggplot2.tidyverse.org/reference/geom_text.html)([aes](https://ggplot2.tidyverse.org/reference/aes.html)(label = pet),
y = 1, hjust = 0, size = 6,
position = [position_nudge](https://ggplot2.tidyverse.org/reference/position_nudge.html)(x = 0.3)) +
[scale_x_discrete](https://ggplot2.tidyverse.org/reference/scale_discrete.html)(labels = NULL) +
[theme](https://ggplot2.tidyverse.org/reference/theme.html)(axis.ticks.y = [element_blank](https://ggplot2.tidyverse.org/reference/element.html)()) +
[scale_colour_viridis_d](https://ggplot2.tidyverse.org/reference/scale_viridis.html)() +
[labs](https://ggplot2.tidyverse.org/reference/labs.html)(x = "", y = "") +
[coord_flip](https://ggplot2.tidyverse.org/reference/coord_flip.html)(ylim = [c](https://rdrr.io/r/base/c.html)(0, 200)) +
[theme_light](https://ggplot2.tidyverse.org/reference/ggtheme.html)()
```
Figure D.10: A lollipop plot showing the number of different types of pets.
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/introdataviz/license.html |
License
=======
This book is licensed under Creative Commons Attribution\-ShareAlike 4\.0 International License [(CC\-BY\-SA 4\.0\)](https://creativecommons.org/licenses/by-sa/4.0/). You are free to share and adapt this book. You must give appropriate credit, provide a link to the license, and indicate if changes were made. If you adapt the material, you must distribute your contributions under the same license as the original.
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/index.html |
Welcome
=======
---
This is the online version of “**Modern Data Visualization with R**”, published by [CRC Press](https://www.routledge.com/Modern-Data-Visualization-with-R/Kabacoff/p/book/9781032287607). A print version is also available from [Amazon](https://www.amazon.com/Modern-Data-Visualization-Chapman-Hall-ebook/dp/B0CKFJBL6D/ref=sr_1_3?crid=27XI4MRV1BKSZ&dib=eyJ2IjoiMSJ9.rsTCReggKbTKXePD3hLgkbh7gInVjEXIz4COjRA_phivrjWX3rrBUtsGwHKY6FJgP6fh_xosiqDv-Y10kDdFOi2irWl8Jf1W74itKQKShFVAtGku6E-wLrl4IVfYaT7zL7QegQxXWD--udjiW6bE9Q.qpZ4ecAFu4DZh7-72r-kzCeqkgf2MfqRV-o0DeT4H8Y&dib_tag=se&keywords=kabacoff&qid=1712765890&sprefix=kabacoff%2Caps%2C82&sr=8-3).
R is an amazing platform for data analysis, capable of creating almost any type of graph. This book helps you create the most popular visualizations \- from quick and dirty plots to publication\-ready graphs. The text relies heavily on the [ggplot2](https://ggplot2.tidyverse.org/) package for graphics, but other approaches are covered as well.
My goal is to make this book as helpful and user\-friendly as possible. Any feedback is both welcome and appreciated.
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/preface.html |
Preface
=======
> *There is magic in graphs. The profile of a curve reveals in a flash a whole situation — the life history of an epidemic, a panic, or an era of prosperity. The curve informs the mind, awakens the imagination, convinces.* – **Henry D. Hubbard**
> *Above all else, show the data.* – **Edward Tufte**
> *We cannot just look at a country by looking at charts, graphs, and modeling the economy. Behind the numbers there are people.* \- **Christine Lagarde**
0\.1 Why this book?
-------------------
There are many books on data visualization using R. So why another one? I am trying to achieve five goals with this book.
1. **Help identify the most appropriate graph for a given situation**. With the plethora of graph types available, some guidance is required when choosing a graph for a given problem. I’ve tried to provide that guidance here.
2. **Allow easy access to these graphs**. The graphs in this book are presented in **cookbook** fashion. Basic graphs are demonstrated first, followed by more attractive and customized versions.
3. **Expand the breadth of visualizations available**. There are many more types of graphs than we typically see in reports and blogs. They can be helpful, intuitive and compelling. I’ve tried to include many of them here.
4. **Help you customize any graph to meet your needs**. Basic graphs are easy, but highly customized graphs can take some work. This book provides the necessary details for modifying axes, shapes, colors, fonts, annotations, formats, and more. You can make your graph look *exactly* as you wish.
5. **Offer suggestions for best practices**. There is an ethical obligation to convey information clearly, and with as little distortion or obfuscation as possible. I hope this book helps support that goal.
0\.2 Acknowledgements
---------------------
**\[Acknowledgements to CRC employees will go here]**
There are two other people I would like to thank. The first person is Manolis Kaparakis, Director of the Quantitative Analysis Center at Wesleyan University and ostensibly my boss. He has always strived to empower me and help me feel valued and appreciated. He is simply the best boss I’ve ever had. We should all be so lucky.
The second person is really the first person in all things. It was my idea to write this book. It was my wife Carol Lynn’s idea to finish the book. Her love and support knows no bounds and this book is a statistician’s version of PDA. How did I get so lucky?
0\.3 Supporting website
-----------------------
Supplementary materials (including all the code and datasets used in this book) are available on the support website, <http://www.github.com/rkabacoff/datavis_support>.
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/Intro.html |
Chapter 1 Introduction
======================
If you are reading this book, you probably already appreciate the importance of visualizing data. It is an essential component of any data analysis. While we generally accept the old adage that *“a picture is worth a thousand words”*, it is worthwhile taking a moment to consider why.
Humans are remarkably capable of discerning patterns in visual data. This allows us to discover relationships, identify unusual or mistaken values, determine trends and differences, and with some effort, understand the relationships among several variables at once.
Additionally, data visualizations tend to have a greater cognitive and *emotional* impact than either text descriptions or tables of numbers. This makes them a key ingredient in both storytelling and the crafting of persuasive arguments.
Because graphs are so compelling, researchers and data scientists have an ethical obligation to create visualizations that fairly and accurately reflect the information contained in the data. The goal of this book is to provide you with the tools to both select and create graphs that present data as clearly, understandably, and accurately (honestly) as possible.
The R platform ([R Core Team 2023](#ref-R-base)) provides one of the most comprehensive sets of tools for accomplishing these goals. The software is open source, freely available, runs on almost any platform, is highly customizable, and is supported by a massive world\-wide user base. The tools described in this book should allow you to create almost any type of data visualization desired.
Currently, the most popular approach to creating graphs in R uses the **ggplot2** package ([Wickham et al. 2023](#ref-R-ggplot2)). Based on a *Grammar of Graphics* ([Wilkinson and Wills 2005](#ref-RN4)), the ggplot2 package provides a coherent and extensible system for data visualization and is the central approach used in this book. Since its release, a number of additional packages have been developed to enhance and expand the types of graphs that can easily be created with ggplot2\. Many of these are explored in later chapters.
1\.1 How to use this book
-------------------------
I hope that this book will provide you with a comprehensive overview of data visualization. However, you don’t need to read this book from start to finish in order to start building effective graphs. Feel free to jump to the section that you need and then explore others that you find interesting.
Graphs are organized by
* the number of variables to be plotted
* the type of variables to be plotted
* the purpose of the visualization
| Chapter | Description |
| --- | --- |
| [**Ch 2**](DataPrep.html#DataPrep) | provides a quick overview of how to get your data into R and how to prepare it for analysis. |
| [**Ch 3**](IntroGGPLOT.html#IntroGGPLOT) | provides an overview of the **ggplot2** package. |
| [**Ch 4**](Univariate.html#Univariate) | describes graphs for visualizing the distribution of a single categorical (e.g. race) or quantitative (e.g. income) variable. |
| [**Ch 5**](Bivariate.html#Bivariate) | describes graphs that display the relationship between two variables. |
| [**Ch 6**](Multivariate.html#Multivariate) | describes graphs that display the relationships among 3 or more variables. It is helpful to read chapters 4 and 5 before this chapter. |
| [**Ch 7**](Maps.html#Maps) | provides a brief introduction to displaying data geographically. |
| [**Ch 8**](Time.html#Time) | describes graphs that display change over time. |
| [**Ch 9**](Models.html#Models) | describes graphs that can help you interpret the results of statistical models. |
| [**Ch 10**](Other.html#Other) | covers graphs that do not fit neatly elsewhere (every book needs a miscellaneous chapter). |
| [**Ch 11**](Customizing.html#Customizing) | describes how to customize the look and feel of your graphs. If you are going to share your graphs with others, be sure to check it out. |
| [**Ch 12**](SavingGraphs.html#SavingGraphs) | covers how to save your graphs. Different formats are optimized for different purposes. |
| [**Ch 13**](Interactive.html#Interactive) | provides an introduction to interactive graphics. |
| [**Ch 14**](Advice.html#Advice) | gives advice on creating effective graphs and where to go to learn more. It’s worth a look. |
| [**The Appendices**](#Data) | describe each of the datasets used in this book and provide a short blurb about the author and the Wesleyan QAC. |
There is **no one right graph** for displaying data. Check out the examples, and see which type best fits your needs.
1\.2 Prerequisites
----------------
It’s assumed that you have some experience with the R language and that you have already installed [R](https://cran.r-project.org/) and [RStudio](https://www.rstudio.com/products/RStudio/#Desktop). If not, here are two excellent resources for getting started:
* **A (very) short introduction to R** by Paul Torfs \& Claudia Brauer ([https://cran.r\-project.org/doc/contrib/Torfs\+Brauer\-Short\-R\-Intro.pdf](https://cran.r-project.org/doc/contrib/Torfs+Brauer-Short-R-Intro.pdf)). This is a great introductory article that will get you up and running quickly.
* **An Introduction to R** by Alex Douglas, Deon Roos, Francesca Mancini, Anna Couto, \& David Lussea (<https://intro2r.com>). This is a comprehensive e\-book on R. Chapters 1\-3 provide a solid introduction.
Either of these resources will help you familiarize yourself with R quickly.
1\.3 Setup
----------
In order to create the graphs in this book, you’ll need to install a number of optional R packages. Most of these packages are hosted on the **Comprehensive R Archive Network** (CRAN). To install **all** of these CRAN packages, run the following code in the RStudio console window.
```
CRAN_pkgs <- c("ggplot2", "dplyr", "tidyr", "mosaicData",
"carData", "VIM", "scales", "treemapify",
"gapminder","sf", "tidygeocoder", "mapview",
"ggmap", "osmdata", "choroplethr",
"choroplethrMaps", "lubridate", "CGPfunctions",
"ggcorrplot", "visreg", "gcookbook", "forcats",
"survival", "survminer", "car", "rgl",
"ggalluvial", "ggridges", "GGally", "superheat",
"waterfalls", "factoextra","networkD3",
"ggthemes", "patchwork", "hrbrthemes", "ggpol",
"quantmod", "gghighlight", "leaflet", "ggiraph",
"rbokeh", "ggalt")
install.packages(CRAN_pkgs)
```
Alternatively, you can install a given package the first time it is needed.
For example, if you execute
`library(gapminder)`
and get the message
`Error in library(gapminder) : there is no package called ‘gapminder’`
you know that the package has never been installed. Simply execute
`install.packages("gapminder")`
once and
`library(gapminder)`
will work from that point on.
A few specialized packages used later in the book are only hosted on **GitHub**. You can install them using the `install_github` function in the **remotes** package. First install the **remotes** package from CRAN.
```
install.packages("remotes")
```
Then run the following code to install the remaining packages.
```
github_pkgs <- c("rkabacoff/ggpie", "hrbrmstr/waffle",
"ricardo-bion/ggradar", "ramnathv/rCharts",
"Mikata-Project/ggthemr")
remotes::install_github(github_pkgs, dependencies = TRUE)
```
Although it may seem like a lot, these packages should install fairly quickly. And again, you can install them individually as needed.
At this point, you should be ready to go. Let’s get started!
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/DataPrep.html |
Chapter 2 Data Preparation
==========================
Before you can visualize your data, you have to get it into R. This involves importing the data from an external source and massaging it into a useful format. It would be great if data came in a clean rectangular format, without errors, or missing values. It would also be great if ice cream grew on trees. A significant part of data analysis is preparing the data for analysis.
2\.1 Importing data
-------------------
R can import data from almost any source, including text files, excel spreadsheets, statistical packages, and database management systems (DBMS). We’ll illustrate these techniques using the [`Salaries`](Datasets.html#Salaries) dataset, containing the 9 month academic salaries of college professors at a single institution in 2008\-2009\. The dataset is described in Appendix [A.1](Datasets.html#Salaries).
### 2\.1\.1 Text files
The **readr** package provides functions for importing delimited text files into R data frames.
```
library(readr)
# import data from a comma delimited file
Salaries <- read_csv("salaries.csv")
# import data from a tab delimited file
Salaries <- read_tsv("salaries.txt")
```
These functions assume that the first line of data contains the variable names, that values are separated by commas or tabs respectively, and that missing data are represented by blanks. For example, the first few lines of the comma\-delimited file look like this.
```
"rank","discipline","yrs.since.phd","yrs.service","sex","salary"
"Prof","B",19,18,"Male",139750
"Prof","B",20,16,"Male",173200
"AsstProf","B",4,3,"Male",79750
"Prof","B",45,39,"Male",115000
"Prof","B",40,41,"Male",141500
"AssocProf","B",6,6,"Male",97000
```
Options allow you to alter these assumptions. See [`?read_delim`](https://www.rdocumentation.org/packages/readr/versions/0.1.1/topics/read_delim) for more details.
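For example, here is a sketch of overriding those defaults. The delimiter, header setting, and missing\-value codes shown are illustrative assumptions, not values taken from the salaries data.
```
library(readr)
# semicolon-delimited file with no header row;
# treat blanks and "NA" as missing values
Salaries <- read_delim("salaries.txt",
                       delim = ";",
                       col_names = FALSE,
                       na = c("", "NA"))
```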
### 2\.1\.2 Excel spreadsheets
The **readxl** package can import data from Excel workbooks. Both xls and xlsx formats are supported.
```
library(readxl)
# import data from an Excel workbook
Salaries <- read_excel("salaries.xlsx", sheet=1)
```
Since workbooks can have more than one worksheet, you can specify the one you want with the `sheet` option. The default is `sheet=1`.
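You can also refer to a worksheet by name rather than position, which is safer if the workbook is later reorganized. The sheet name below is just an example.
```
library(readxl)
# import a specific worksheet by name
Salaries <- read_excel("salaries.xlsx", sheet = "Faculty")
```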
### 2\.1\.3 Statistical packages
The **haven** package provides functions for importing data from a variety of statistical packages.
```
library(haven)
# import data from Stata
Salaries <- read_dta("salaries.dta")
# import data from SPSS
Salaries <- read_sav("salaries.sav")
# import data from SAS
Salaries <- read_sas("salaries.sas7bdat")
```
> Note: you do not need to have these statistical packages installed in order to import their data files.
### 2\.1\.4 Databases
Importing data from a database requires additional steps and is beyond the scope of this book. Depending on the database containing the data, the following packages can help: **RODBC**, **RMySQL**, **ROracle**, **RPostgreSQL**, **RSQLite**, and **RMongo**. In the newest versions of RStudio, you can use the [Connections pane](https://db.rstudio.com/rstudio/connections/) to quickly access the data stored in database management systems.
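As a rough sketch only (database access is not covered further here), many of these packages follow the **DBI** pattern of connect, query, and disconnect. The database file and table names below are hypothetical.
```
library(DBI)
library(RSQLite)
# connect to a local SQLite database
con <- dbConnect(RSQLite::SQLite(), "salaries.sqlite")
# pull a table into a data frame with an SQL query
Salaries <- dbGetQuery(con, "SELECT * FROM salaries")
# close the connection when done
dbDisconnect(con)
```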
2\.2 Cleaning data
------------------
The process of cleaning your data can be the most time\-consuming part of any data analysis. The most important steps are considered below. While there are many approaches, those using the **dplyr** and **tidyr** packages are some of the quickest and easiest to learn.
| Package | Function | Use |
| --- | --- | --- |
| dplyr | select | select variables/columns |
| dplyr | filter | select observations/rows |
| dplyr | mutate | transform or recode variables |
| dplyr | summarize | summarize data |
| dplyr | group\_by | identify subgroups for further processing |
| tidyr | pivot\_longer | convert wide format dataset to long format |
| tidyr | pivot\_wider | convert long format dataset to wide format |
Examples in this section will use the [Starwars](Datasets.html#Starwars) dataset from the **dplyr** package. The dataset provides descriptions of 87 characters from the Starwars universe on 13 variables. (I actually prefer StarTrek, but we work with what we have.) The dataset is described in Appendix [A.2](Datasets.html#Starwars).
### 2\.2\.1 Selecting variables
The `select` function allows you to limit your dataset to specified variables (columns).
```
library(dplyr)
# keep the variables name, height, and gender
newdata <- select(starwars, name, height, gender)
# keep the variables name and all variables
# between mass and species inclusive
newdata <- select(starwars, name, mass:species)
# keep all variables except birth_year and gender
newdata <- select(starwars, -birth_year, -gender)
```
### 2\.2\.2 Selecting observations
The `filter` function allows you to limit your dataset to observations (rows) meeting specific criteria. Multiple criteria can be combined with the `&` (AND) and `|` (OR) symbols.
```
library(dplyr)
# select females (the starwars gender variable is coded "feminine"/"masculine")
newdata <- filter(starwars,
                  gender == "feminine")
# select females that are from Alderaan
newdata <- filter(starwars,
                  gender == "feminine" &
                  homeworld == "Alderaan")
# select individuals that are from Alderaan, Coruscant, or Endor
newdata <- filter(starwars,
                  homeworld == "Alderaan" |
                  homeworld == "Coruscant" |
                  homeworld == "Endor")
# this can be written more succinctly as
newdata <- filter(starwars,
                  homeworld %in%
                    c("Alderaan", "Coruscant", "Endor"))
```
### 2\.2\.3 Creating/Recoding variables
The `mutate` function allows you to create new variables or transform existing ones.
```
library(dplyr)
# convert height in centimeters to inches,
# and mass in kilograms to pounds
newdata <- mutate(starwars,
height = height * 0.394,
mass = mass * 2.205)
```
The `ifelse` function (part of base R) can be used for recoding data. The format is `ifelse(test, return if TRUE, return if FALSE)`.
```
library(dplyr)
# if height is greater than 180 then heightcat = "tall",
# otherwise heightcat = "short"
newdata <- mutate(starwars,
heightcat = ifelse(height > 180,
"tall",
"short"))
# convert any eye color that is not black, blue or brown, to other.
newdata <- mutate(starwars,
eye_color = ifelse(eye_color %in%
c("black", "blue", "brown"),
eye_color,
"other"))
# set heights greater than 200 or less than 75 to missing
newdata <- mutate(starwars,
height = ifelse(height < 75 | height > 200,
NA,
height))
```
### 2\.2\.4 Summarizing data
The `summarize` function can be used to reduce multiple values down to a single value (such as a mean). It is often used in conjunction with the `group_by` function to calculate statistics by group. In the code below, the `na.rm=TRUE` option is used to drop missing values before calculating the means.
```
library(dplyr)
# calculate mean height and mass
newdata <- summarize(starwars,
mean_ht = mean(height, na.rm=TRUE),
mean_mass = mean(mass, na.rm=TRUE))
newdata
```
```
## # A tibble: 1 × 2
## mean_ht mean_mass
## <dbl> <dbl>
## 1 175. 97.3
```
```
# calculate mean height and weight by gender
newdata <- group_by(starwars, gender)
newdata <- summarize(newdata,
mean_ht = mean(height, na.rm=TRUE),
mean_wt = mean(mass, na.rm=TRUE))
newdata
```
```
## # A tibble: 3 × 3
## gender mean_ht mean_wt
## <chr> <dbl> <dbl>
## 1 feminine 167. 54.7
## 2 masculine 177. 107.
## 3 <NA> 175 81
```
Graphs are often created from summarized data, rather than from the original observations. You will see several examples in Chapter 4\.
### 2\.2\.5 Using pipes
Packages like **dplyr** and **tidyr** allow you to write your code in a compact format using the pipe `%>%` operator. Here is an example.
```
library(dplyr)
# calculate the mean height for women by species
newdata <- filter(starwars,
                  gender == "feminine")
newdata <- group_by(newdata, species)
newdata <- summarize(newdata,
                     mean_ht = mean(height, na.rm = TRUE))
# this can be written more succinctly as
newdata <- starwars %>%
  filter(gender == "feminine") %>%
  group_by(species) %>%
  summarize(mean_ht = mean(height, na.rm = TRUE))
```
The `%>%` operator passes the result on the left to the first parameter of the function on the right.
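In other words, `x %>% f(y)` is equivalent to `f(x, y)`. A trivial illustration using functions already shown:
```
library(dplyr)
# these two statements produce the same result
summarize(starwars, mean_ht = mean(height, na.rm = TRUE))
starwars %>% summarize(mean_ht = mean(height, na.rm = TRUE))
```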
### 2\.2\.6 Processing dates
Date values are entered in R as character values. For example, consider the following simple dataset recording the birth date of 3 individuals.
```
df <- data.frame(
dob = c("11/10/1963", "Jan-23-91", "12:1:2001")
)
# view structure of the data frame
str(df)
```
```
## 'data.frame': 3 obs. of 1 variable:
## $ dob: chr "11/10/1963" "Jan-23-91" "12:1:2001"
```
There are many ways to convert character variables to *Date* variables. One of the simplest is to use the functions provided in the **lubridate** package. These include `ymd`, `dmy`, and `mdy` for importing year\-month\-day, day\-month\-year, and month\-day\-year formats respectively.
```
library(lubridate)
# convert dob from character to date
df$dob <- mdy(df$dob)
str(df)
```
```
## 'data.frame': 3 obs. of 1 variable:
## $ dob: Date, format: "1963-11-10" "1991-01-23" ...
```
The values are recorded internally as the number of days since January 1, 1970\. Now that the variable is a Date variable, you can perform date arithmetic (how old are they now?), extract date elements (month, day, year), and reformat the values (e.g., November 10, 1963\). Date variables are important for time\-dependent graphs (Chapter [8](Time.html#Time)).
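For example, here is a sketch of those operations using the `df` created above (lubridate and base R functions):
```
library(lubridate)
# extract date elements
year(df$dob)                  # 1963 1991 2001
month(df$dob, label = TRUE)   # Nov  Jan  Dec
# date arithmetic: approximate age in years
as.numeric(Sys.Date() - df$dob) / 365.25
# reformat for display
format(df$dob, "%B %d, %Y")   # "November 10, 1963" ...
```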
### 2\.2\.7 Reshaping data
Some graphs require the data to be in wide format, while others require it to be in long format. An example of wide data is given in Table [2\.1](DataPrep.html#tab:wide).
Table 2\.1: Wide data
| id | name | sex | height | weight |
| --- | --- | --- | --- | --- |
| 01 | Bill | Male | 70 | 180 |
| 02 | Bob | Male | 72 | 195 |
| 03 | Mary | Female | 62 | 130 |
You can convert a wide dataset to a long dataset (Table [2\.2](DataPrep.html#tab:long)) using
```
# convert wide dataset to long dataset
library(tidyr)
long_data <- pivot_longer(wide_data,
cols = c("height", "weight"),
names_to = "variable",
values_to = "value")
```
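Note that `wide_data` is assumed to already exist; a minimal sketch that builds a data frame matching Table 2\.1 would be:
```
# example data in wide format (matches Table 2.1)
wide_data <- data.frame(
  id     = c("01", "02", "03"),
  name   = c("Bill", "Bob", "Mary"),
  sex    = c("Male", "Male", "Female"),
  height = c(70, 72, 62),
  weight = c(180, 195, 130)
)
```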
Table 2\.2: Long data
| id | name | sex | variable | value |
| --- | --- | --- | --- | --- |
| 01 | Bill | Male | height | 70 |
| 01 | Bill | Male | weight | 180 |
| 02 | Bob | Male | height | 72 |
| 02 | Bob | Male | weight | 195 |
| 03 | Mary | Female | height | 62 |
| 03 | Mary | Female | weight | 130 |
Conversely, you can convert a long dataset to a wide dataset using
```
# convert long dataset to wide dataset
library(tidyr)
wide_data <- pivot_wider(long_data,
names_from = "variable",
values_from = "value")
```
### 2\.2\.8 Missing data
Real data is likely to contain missing values. There are three basic approaches to dealing with missing data: feature selection, listwise deletion, and imputation. Let’s see how each applies to the [msleep](Datasets.html#Msleep) dataset from the **ggplot2** package. The msleep dataset describes the sleep habits of mammals and contains missing values on several variables. (See Appendix [A.3](Datasets.html#Msleep).)
#### 2\.2\.8\.1 Feature selection
In feature selection, you delete variables (columns) that contain too many missing values.
```
data(msleep, package="ggplot2")
# what is the proportion of missing data for each variable?
pctmiss <- colSums(is.na(msleep))/nrow(msleep)
round(pctmiss, 2)
```
Sixty\-two percent of the sleep\_cycle values are missing. You may decide to drop it.
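Dropping that variable can be done with the `select` function introduced earlier:
```
library(dplyr)
# drop the variable with the most missing data
newdata <- select(msleep, -sleep_cycle)
```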
#### 2\.2\.8\.2 Listwise deletion
Listwise deletion involves deleting observations (rows) that contain missing values on *any* of the variables of interest.
```
# Create a dataset containing genus, vore, and conservation.
# Delete any rows containing missing data.
newdata <- select(msleep, genus, vore, conservation)
newdata <- na.omit(newdata)
```
#### 2\.2\.8\.3 Imputation
Imputation involves replacing missing values with “reasonable” guesses about what the values would have been if they had not been missing. There are several approaches, as detailed in such packages as **VIM**, **mice**, **Amelia** and **missForest**. Here we will use the `kNN()` function from the **VIM** package to replace missing values with imputed values.
```
# Impute missing values using the 5 nearest neighbors
library(VIM)
newdata <- kNN(msleep, k=5)
```
Basically, for each case with a missing value, the *k* most similar cases not having a missing value are selected. If the missing value is numeric, the median of those *k* cases is used as the imputed value. If the missing value is categorical, the most frequent value from the *k* cases is used. The process iterates over cases and variables until the results converge (become stable). This is a bit of an oversimplification \- see Kowarik and Templ ([2016](#ref-RN3)) for the actual details.
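One practical note (not covered in the text above, so check `?kNN`): by default, `kNN()` also appends indicator columns flagging which values were imputed; the `imp_var` argument can turn this off.
```
library(VIM)
# impute without adding the extra "was this value imputed?" indicator columns
newdata <- kNN(msleep, k = 5, imp_var = FALSE)
```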
> Important caveat: Missing values can bias the results of studies (sometimes severely). If you have a significant amount of missing data, it is probably a good idea to consult a statistician or data scientist before deleting cases or imputing missing values.
2\.1 Importing data
-------------------
R can import data from almost any source, including text files, excel spreadsheets, statistical packages, and database management systems (DBMS). We’ll illustrate these techniques using the [`Salaries`](Datasets.html#Salaries) dataset, containing the 9 month academic salaries of college professors at a single institution in 2008\-2009\. The dataset is described in Appendix [A.1](Datasets.html#Salaries).
### 2\.1\.1 Text files
The **readr** package provides functions for importing delimited text files into R data frames.
```
library(readr)
# import data from a comma delimited file
Salaries <- read_csv("salaries.csv")
# import data from a tab delimited file
Salaries <- read_tsv("salaries.txt")
```
These function assume that the first line of data contains the variable names, values are separated by commas or tabs respectively, and that missing data are represented by blanks. For example, the first few lines of the comma delimited file looks like this.
```
"rank","discipline","yrs.since.phd","yrs.service","sex","salary"
"Prof","B",19,18,"Male",139750
"Prof","B",20,16,"Male",173200
"AsstProf","B",4,3,"Male",79750
"Prof","B",45,39,"Male",115000
"Prof","B",40,41,"Male",141500
"AssocProf","B",6,6,"Male",97000
```
Options allow you to alter these assumptions. See the [`?read_delim`](https://www.rdocumentation.org/packages/readr/versions/0.1.1/topics/read_delim) for more details.
### 2\.1\.2 Excel spreadsheets
The **readxl** package can import data from Excel workbooks. Both xls and xlsx formats are supported.
```
library(readxl)
# import data from an Excel workbook
Salaries <- read_excel("salaries.xlsx", sheet=1)
```
Since workbooks can have more than one worksheet, you can specify the one you want with the `sheet` option. The default is `sheet=1`.
### 2\.1\.3 Statistical packages
The **haven** package provides functions for importing data from a variety of statistical packages.
```
library(haven)
# import data from Stata
Salaries <- read_dta("salaries.dta")
# import data from SPSS
Salaries <- read_sav("salaries.sav")
# import data from SAS
Salaries <- read_sas("salaries.sas7bdat")
```
> Note: you do not have have these statistical packages installed in order to import their data files.
### 2\.1\.4 Databases
Importing data from a database requires additional steps and is beyond the scope of this book. Depending on the database containing the data, the following packages can help: **RODBC**, **RMySQL**, **ROracle**, **RPostgreSQL**, **RSQLite**, and **RMongo**. In the newest versions of RStudio, you can use the [Connections pane](https://db.rstudio.com/rstudio/connections/) to quickly access the data stored in database management systems.
### 2\.1\.1 Text files
The **readr** package provides functions for importing delimited text files into R data frames.
```
library(readr)
# import data from a comma delimited file
Salaries <- read_csv("salaries.csv")
# import data from a tab delimited file
Salaries <- read_tsv("salaries.txt")
```
These function assume that the first line of data contains the variable names, values are separated by commas or tabs respectively, and that missing data are represented by blanks. For example, the first few lines of the comma delimited file looks like this.
```
"rank","discipline","yrs.since.phd","yrs.service","sex","salary"
"Prof","B",19,18,"Male",139750
"Prof","B",20,16,"Male",173200
"AsstProf","B",4,3,"Male",79750
"Prof","B",45,39,"Male",115000
"Prof","B",40,41,"Male",141500
"AssocProf","B",6,6,"Male",97000
```
Options allow you to alter these assumptions. See the [`?read_delim`](https://www.rdocumentation.org/packages/readr/versions/0.1.1/topics/read_delim) for more details.
### 2\.1\.2 Excel spreadsheets
The **readxl** package can import data from Excel workbooks. Both xls and xlsx formats are supported.
```
library(readxl)
# import data from an Excel workbook
Salaries <- read_excel("salaries.xlsx", sheet=1)
```
Since workbooks can have more than one worksheet, you can specify the one you want with the `sheet` option. The default is `sheet=1`.
### 2\.1\.3 Statistical packages
The **haven** package provides functions for importing data from a variety of statistical packages.
```
library(haven)
# import data from Stata
Salaries <- read_dta("salaries.dta")
# import data from SPSS
Salaries <- read_sav("salaries.sav")
# import data from SAS
Salaries <- read_sas("salaries.sas7bdat")
```
> Note: you do not have have these statistical packages installed in order to import their data files.
### 2\.1\.4 Databases
Importing data from a database requires additional steps and is beyond the scope of this book. Depending on the database containing the data, the following packages can help: **RODBC**, **RMySQL**, **ROracle**, **RPostgreSQL**, **RSQLite**, and **RMongo**. In the newest versions of RStudio, you can use the [Connections pane](https://db.rstudio.com/rstudio/connections/) to quickly access the data stored in database management systems.
2\.2 Cleaning data
------------------
The processes of cleaning your data can be the most time\-consuming part of any data analysis. The most important steps are considered below. While there are many approaches, those using the **dplyr** and **tidyr** packages are some of the quickest and easiest to learn.
| Package | Function | Use |
| --- | --- | --- |
| dplyr | select | select variables/columns |
| dplyr | filter | select observations/rows |
| dplyr | mutate | transform or recode variables |
| dplyr | summarize | summarize data |
| dplyr | group\_by | identify subgroups for further processing |
| tidyr | gather | convert wide format dataset to long format |
| tidyr | spread | convert long format dataset to wide format |
Examples in this section will use the [Starwars](Datasets.html#Starwars) dataset from the **dplyr** package. The dataset provides descriptions of 87 characters from the Starwars universe on 13 variables. (I actually prefer StarTrek, but we work with what we have.) The dataset is described in Appendix [A.2](Datasets.html#Starwars).
### 2\.2\.1 Selecting variables
The `select` function allows you to limit your dataset to specified variables (columns).
```
library(dplyr)
# keep the variables name, height, and gender
newdata <- select(starwars, name, height, gender)
# keep the variables name and all variables
# between mass and species inclusive
newdata <- select(starwars, name, mass:species)
# keep all variables except birth_year and gender
newdata <- select(starwars, -birth_year, -gender)
```
### 2\.2\.2 Selecting observations
The `filter` function allows you to limit your dataset to observations (rows) meeting a specific criteria. Multiple criteria can be combined with the `&` (AND) and `|` (OR) symbols.
```
library(dplyr)
# select females
newdata <- filter(starwars,
gender == "female")
# select females that are from Alderaan
newdata <- select(starwars,
gender == "female" &
homeworld == "Alderaan")
# select individuals that are from Alderaan, Coruscant, or Endor
newdata <- select(starwars,
homeworld == "Alderaan" |
homeworld == "Coruscant" |
homeworld == "Endor")
# this can be written more succinctly as
newdata <- select(starwars,
homeworld %in%
c("Alderaan", "Coruscant", "Endor"))
```
### 2\.2\.3 Creating/Recoding variables
The `mutate` function allows you to create new variables or transform existing ones.
```
library(dplyr)
# convert height in centimeters to inches,
# and mass in kilograms to pounds
newdata <- mutate(starwars,
height = height * 0.394,
mass = mass * 2.205)
```
The `ifelse` function (part of base R) can be used for recoding data. The format is `ifelse(test, return if TRUE, return if FALSE)`.
```
library(dplyr)
# if height is greater than 180 then heightcat = "tall",
# otherwise heightcat = "short"
newdata <- mutate(starwars,
heightcat = ifelse(height > 180,
"tall",
"short"))
# convert any eye color that is not black, blue or brown, to other.
newdata <- mutate(starwars,
eye_color = ifelse(eye_color %in%
c("black", "blue", "brown"),
eye_color,
"other"))
# set heights greater than 200 or less than 75 to missing
newdata <- mutate(starwars,
height = ifelse(height < 75 | height > 200,
NA,
height))
```
### 2\.2\.4 Summarizing data
The `summarize` function can be used to reduce multiple values down to a single value (such as a mean). It is often used in conjunction with the `by_group` function, to calculate statistics by group. In the code below, the `na.rm=TRUE` option is used to drop missing values before calculating the means.
```
library(dplyr)
# calculate mean height and mass
newdata <- summarize(starwars,
mean_ht = mean(height, na.rm=TRUE),
mean_mass = mean(mass, na.rm=TRUE))
newdata
```
```
## # A tibble: 1 × 2
## mean_ht mean_mass
## <dbl> <dbl>
## 1 175. 97.3
```
```
# calculate mean height and weight by gender
newdata <- group_by(starwars, gender)
newdata <- summarize(newdata,
mean_ht = mean(height, na.rm=TRUE),
mean_wt = mean(mass, na.rm=TRUE))
newdata
```
```
## # A tibble: 3 × 3
## gender mean_ht mean_wt
## <chr> <dbl> <dbl>
## 1 feminine 167. 54.7
## 2 masculine 177. 107.
## 3 <NA> 175 81
```
Graphs are often created from summarized data, rather than from the original observations. You will see several examples in Chapter 4\.
### 2\.2\.5 Using pipes
Packages like **dplyr** and **tidyr** allow you to write your code in a compact format using the pipe `%>%` operator. Here is an example.
```
library(dplyr)
# calculate the mean height for women by species
newdata <- filter(starwars,
gender == "female")
newdata <- group_by(species)
newdata <- summarize(newdata,
mean_ht = mean(height, na.rm = TRUE))
# this can be written as more succinctly as
newdata <- starwars %>%
filter(gender == "female") %>%
group_by(species) %>%
summarize(mean_ht = mean(height, na.rm = TRUE))
```
The `%>%` operator passes the result on the left to the first parameter of the function on the right.
### 2\.2\.6 Processing dates
Date values are entered in R as character values. For example, consider the following simple dataset recording the birth date of 3 individuals.
```
df <- data.frame(
dob = c("11/10/1963", "Jan-23-91", "12:1:2001")
)
# view struction of data frame
str(df)
```
```
## 'data.frame': 3 obs. of 1 variable:
## $ dob: chr "11/10/1963" "Jan-23-91" "12:1:2001"
```
There are many ways to convert character variables to *Date* variables. One of they simplest is to use the functions provided in the **lubridate** package. These include `ymd`, `dmy`, and `mdy` for importing year\-month\-day, day\-month\-year, and month\-day\-year formats respectively.
```
library(lubridate)
# convert dob from character to date
df$dob <- mdy(df$dob)
str(df)
```
```
## 'data.frame': 3 obs. of 1 variable:
## $ dob: Date, format: "1963-11-10" "1991-01-23" ...
```
The values are recorded internally as the number of days since January 1, 1970\. Now that the variable is a Date variable, you can perform date arithmetic (how old are they now), extract date elements (month, day, year), and reformat the values (e.g., October 11, 1963\). Date variables are important for time\-dependent graphs (Chapter [8](Time.html#Time)).
### 2\.2\.7 Reshaping data
Some graphs require the data to be in wide format, while some graphs require the data to be in long format. An example of wide data is given in Table [2\.1](DataPrep.html#tab:wide).
Table 2\.1: Wide data
| id | name | sex | height | weight |
| --- | --- | --- | --- | --- |
| 01 | Bill | Male | 70 | 180 |
| 02 | Bob | Male | 72 | 195 |
| 03 | Mary | Female | 62 | 130 |
You can convert a wide dataset to a long dataset (Table [2\.2](DataPrep.html#tab:long)) using
```
# convert wide dataset to long dataset
library(tidyr)
long_data <- pivot_longer(wide_data,
cols = c("height", "weight"),
names_to = "variable",
values_to ="value")
```
Table 2\.2: Long data
| id | name | sex | variable | value |
| --- | --- | --- | --- | --- |
| 01 | Bill | Male | height | 70 |
| 01 | Bill | Male | weight | 180 |
| 02 | Bob | Male | height | 72 |
| 02 | Bob | Male | weight | 195 |
| 03 | Mary | Female | height | 62 |
| 03 | Mary | Female | weight | 130 |
Conversely, you can convert a long dataset to a wide dataset using
```
# convert long dataset to wide dataset
library(tidyr)
wide_data <- pivot_wider(long_data,
names_from = "variable",
values_from = "value")
```
### 2\.2\.8 Missing data
Real data is likely to contain missing values. There are three basic approaches to dealing with missing data: feature selection, listwise deletion, and imputation. Let’s see how each applies to the [msleep](Datasets.html#Msleep) dataset from the **ggplot2** package. The msleep dataset describes the sleep habits of mammals and contains missing values on several variables. (See Appendix [A.3](Datasets.html#Msleep).)
#### 2\.2\.8\.1 Feature selection
In feature selection, you delete variables (columns) that contain too many missing values.
```
data(msleep, package="ggplot2")
# what is the proportion of missing data for each variable?
pctmiss <- colSums(is.na(msleep))/nrow(msleep)
round(pctmiss, 2)
```
Sixty\-two percent of the sleep\_cycle values are missing. You may decide to drop it.
#### 2\.2\.8\.2 Listwise deletion
Listwise deletion involves deleting observations (rows) that contain missing values on *any* of the variables of interest.
```
# Create a dataset containing genus, vore, and conservation.
# Delete any rows containing missing data.
newdata <- select(msleep, genus, vore, conservation)
newdata <- na.omit(newdata)
```
#### 2\.2\.8\.3 Imputation
Imputation involves replacing missing values with “reasonable” guesses about what the values would have been if they had not been missing. There are several approaches, as detailed in such packages as **VIM**, **mice**, **Amelia** and **missForest**. Here we will use the `kNN()` function from the **VIM** package to replace missing values with imputed values.
```
# Impute missing values using the 5 nearest neighbors
library(VIM)
newdata <- kNN(msleep, k=5)
```
Basically, for each case with a missing value, the *k* most similar cases not having a missing value are selected. If the missing value is numeric, the median of those *k* cases is used as the imputed value. If the missing value is categorical, the most frequent value from the *k* cases is used. The process iterates over cases and variables until the results converge (become stable). This is a bit of an oversimplification \- see Kowarik and Templ ([2016](#ref-RN3)) for the actual details.
> Important caveat: Missing values can bias the results of studies (sometimes severely). If you have a significant amount of missing data, it is probably a good idea to consult a statistician or data scientist before deleting cases or imputing missing values.
### 2\.2\.1 Selecting variables
The `select` function allows you to limit your dataset to specified variables (columns).
```
library(dplyr)
# keep the variables name, height, and gender
newdata <- select(starwars, name, height, gender)
# keep the variables name and all variables
# between mass and species inclusive
newdata <- select(starwars, name, mass:species)
# keep all variables except birth_year and gender
newdata <- select(starwars, -birth_year, -gender)
```
### 2\.2\.2 Selecting observations
The `filter` function allows you to limit your dataset to observations (rows) meeting a specific criteria. Multiple criteria can be combined with the `&` (AND) and `|` (OR) symbols.
```
library(dplyr)
# select females
newdata <- filter(starwars,
gender == "female")
# select females that are from Alderaan
newdata <- select(starwars,
gender == "female" &
homeworld == "Alderaan")
# select individuals that are from Alderaan, Coruscant, or Endor
newdata <- select(starwars,
homeworld == "Alderaan" |
homeworld == "Coruscant" |
homeworld == "Endor")
# this can be written more succinctly as
newdata <- select(starwars,
homeworld %in%
c("Alderaan", "Coruscant", "Endor"))
```
### 2\.2\.3 Creating/Recoding variables
The `mutate` function allows you to create new variables or transform existing ones.
```
library(dplyr)
# convert height in centimeters to inches,
# and mass in kilograms to pounds
newdata <- mutate(starwars,
height = height * 0.394,
mass = mass * 2.205)
```
The `ifelse` function (part of base R) can be used for recoding data. The format is `ifelse(test, return if TRUE, return if FALSE)`.
```
library(dplyr)
# if height is greater than 180 then heightcat = "tall",
# otherwise heightcat = "short"
newdata <- mutate(starwars,
heightcat = ifelse(height > 180,
"tall",
"short"))
# convert any eye color that is not black, blue or brown, to other.
newdata <- mutate(starwars,
eye_color = ifelse(eye_color %in%
c("black", "blue", "brown"),
eye_color,
"other"))
# set heights greater than 200 or less than 75 to missing
newdata <- mutate(starwars,
height = ifelse(height < 75 | height > 200,
NA,
height))
```
### 2\.2\.4 Summarizing data
The `summarize` function can be used to reduce multiple values down to a single value (such as a mean). It is often used in conjunction with the `by_group` function, to calculate statistics by group. In the code below, the `na.rm=TRUE` option is used to drop missing values before calculating the means.
```
library(dplyr)
# calculate mean height and mass
newdata <- summarize(starwars,
mean_ht = mean(height, na.rm=TRUE),
mean_mass = mean(mass, na.rm=TRUE))
newdata
```
```
## # A tibble: 1 × 2
## mean_ht mean_mass
## <dbl> <dbl>
## 1 175. 97.3
```
```
# calculate mean height and weight by gender
newdata <- group_by(starwars, gender)
newdata <- summarize(newdata,
mean_ht = mean(height, na.rm=TRUE),
mean_wt = mean(mass, na.rm=TRUE))
newdata
```
```
## # A tibble: 3 × 3
## gender mean_ht mean_wt
## <chr> <dbl> <dbl>
## 1 feminine 167. 54.7
## 2 masculine 177. 107.
## 3 <NA> 175 81
```
Graphs are often created from summarized data, rather than from the original observations. You will see several examples in Chapter 4\.
### 2\.2\.5 Using pipes
Packages like **dplyr** and **tidyr** allow you to write your code in a compact format using the pipe `%>%` operator. Here is an example.
```
library(dplyr)
# calculate the mean height for women by species
newdata <- filter(starwars,
gender == "female")
newdata <- group_by(species)
newdata <- summarize(newdata,
mean_ht = mean(height, na.rm = TRUE))
# this can be written as more succinctly as
newdata <- starwars %>%
filter(gender == "female") %>%
group_by(species) %>%
summarize(mean_ht = mean(height, na.rm = TRUE))
```
The `%>%` operator passes the result on the left to the first parameter of the function on the right.
### 2\.2\.6 Processing dates
Date values are entered in R as character values. For example, consider the following simple dataset recording the birth date of 3 individuals.
```
df <- data.frame(
dob = c("11/10/1963", "Jan-23-91", "12:1:2001")
)
# view struction of data frame
str(df)
```
```
## 'data.frame': 3 obs. of 1 variable:
## $ dob: chr "11/10/1963" "Jan-23-91" "12:1:2001"
```
There are many ways to convert character variables to *Date* variables. One of they simplest is to use the functions provided in the **lubridate** package. These include `ymd`, `dmy`, and `mdy` for importing year\-month\-day, day\-month\-year, and month\-day\-year formats respectively.
```
library(lubridate)
# convert dob from character to date
df$dob <- mdy(df$dob)
str(df)
```
```
## 'data.frame': 3 obs. of 1 variable:
## $ dob: Date, format: "1963-11-10" "1991-01-23" ...
```
The values are recorded internally as the number of days since January 1, 1970\. Now that the variable is a Date variable, you can perform date arithmetic (how old are they now), extract date elements (month, day, year), and reformat the values (e.g., October 11, 1963\). Date variables are important for time\-dependent graphs (Chapter [8](Time.html#Time)).
### 2\.2\.7 Reshaping data
Some graphs require the data to be in wide format, while some graphs require the data to be in long format. An example of wide data is given in Table [2\.1](DataPrep.html#tab:wide).
Table 2\.1: Wide data
| id | name | sex | height | weight |
| --- | --- | --- | --- | --- |
| 01 | Bill | Male | 70 | 180 |
| 02 | Bob | Male | 72 | 195 |
| 03 | Mary | Female | 62 | 130 |
You can convert a wide dataset to a long dataset (Table [2\.2](DataPrep.html#tab:long)) using
```
# convert wide dataset to long dataset
library(tidyr)
long_data <- pivot_longer(wide_data,
cols = c("height", "weight"),
names_to = "variable",
values_to ="value")
```
Table 2\.2: Long data
| id | name | sex | variable | value |
| --- | --- | --- | --- | --- |
| 01 | Bill | Male | height | 70 |
| 01 | Bill | Male | weight | 180 |
| 02 | Bob | Male | height | 72 |
| 02 | Bob | Male | weight | 195 |
| 03 | Mary | Female | height | 62 |
| 03 | Mary | Female | weight | 130 |
Conversely, you can convert a long dataset to a wide dataset using
```
# convert long dataset to wide dataset
library(tidyr)
wide_data <- pivot_wider(long_data,
names_from = "variable",
values_from = "value")
```
### 2\.2\.8 Missing data
Real data is likely to contain missing values. There are three basic approaches to dealing with missing data: feature selection, listwise deletion, and imputation. Let’s see how each applies to the [msleep](Datasets.html#Msleep) dataset from the **ggplot2** package. The msleep dataset describes the sleep habits of mammals and contains missing values on several variables. (See Appendix [A.3](Datasets.html#Msleep).)
#### 2\.2\.8\.1 Feature selection
In feature selection, you delete variables (columns) that contain too many missing values.
```
data(msleep, package="ggplot2")
# what is the proportion of missing data for each variable?
pctmiss <- colSums(is.na(msleep))/nrow(msleep)
round(pctmiss, 2)
```
Sixty\-two percent of the sleep\_cycle values are missing. You may decide to drop it.
#### 2\.2\.8\.2 Listwise deletion
Listwise deletion involves deleting observations (rows) that contain missing values on *any* of the variables of interest.
```
# Create a dataset containing genus, vore, and conservation.
# Delete any rows containing missing data.
newdata <- select(msleep, genus, vore, conservation)
newdata <- na.omit(newdata)
```
#### 2\.2\.8\.3 Imputation
Imputation involves replacing missing values with “reasonable” guesses about what the values would have been if they had not been missing. There are several approaches, as detailed in such packages as **VIM**, **mice**, **Amelia** and **missForest**. Here we will use the `kNN()` function from the **VIM** package to replace missing values with imputed values.
```
# Impute missing values using the 5 nearest neighbors
library(VIM)
newdata <- kNN(msleep, k=5)
```
Basically, for each case with a missing value, the *k* most similar cases not having a missing value are selected. If the missing value is numeric, the median of those *k* cases is used as the imputed value. If the missing value is categorical, the most frequent value from the *k* cases is used. The process iterates over cases and variables until the results converge (become stable). This is a bit of an oversimplification \- see Kowarik and Templ ([2016](#ref-RN3)) for the actual details.
> Important caveat: Missing values can bias the results of studies (sometimes severely). If you have a significant amount of missing data, it is probably a good idea to consult a statistician or data scientist before deleting cases or imputing missing values.
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/IntroGGPLOT.html |
Chapter 3 Introduction to ggplot2
=================================
This chapter provides a brief overview of how the [**ggplot2**](https://ggplot2.tidyverse.org/) package works. It introduces the central concepts used to develop an informative graph by exploring the relationships contained in an insurance dataset.
3\.1 A worked example
---------------------
The functions in the **ggplot2** package build up a graph in layers. We’ll build a complex graph by starting with a simple graph and adding additional elements, one at a time.
The example explores the relationship between smoking, obesity, age, and medical costs using data from the [Medical Insurance Costs](Datasets.html#Medical) dataset (Appendix [A.4](Datasets.html#Medical)).
First, let’s import the data.
```
# load the data
url <- "https://tinyurl.com/mtktm8e5"
insurance <- read.csv(url)
```
Next, we’ll add a variable indicating if the patient is obese or not. Obesity will be defined as a body mass index greater than or equal to 30\.
```
# create an obesity variable
insurance$obese <- ifelse(insurance$bmi >= 30,
"obese", "not obese")
```
In building a ggplot2 graph, only the first two functions described below are required. The others are optional and can appear in any order.
### 3\.1\.1 ggplot
The first function in building a graph is the `ggplot` function. It specifies the data frame to be used and the mapping of the variables to the visual properties of the graph. The mappings are placed within the `aes` function, which stands for aesthetics. Let’s start by looking at the relationship between age and medical expenses.
```
# specify dataset and mapping
library(ggplot2)
ggplot(data = insurance,
mapping = aes(x = age, y = expenses))
```
Figure 3\.1: Map variables
Why is the graph empty? We specified that the *age* variable should be mapped to the *x*\-axis and that the *expenses* should be mapped to the *y*\-axis, but we haven’t yet specified what we want placed on the graph.
### 3\.1\.2 geoms
Geoms are the geometric objects (points, lines, bars, etc.) that can be placed on a graph. They are added using functions that start with `geom_`. In this example, we’ll add points using the `geom_point` function, creating a scatterplot.
In ggplot2 graphs, functions are chained together using the `+` sign to build a final plot.
```
# add points
ggplot(data = insurance,
mapping = aes(x = age, y = expenses)) +
geom_point()
```
Figure 3\.2: Add points
Figure [3\.2](IntroGGPLOT.html#fig:insurance3) indicates that expenses rise with age in a fairly linear fashion.
A number of parameters (options) can be specified in a `geom_` function. Options for the `geom_point` function include `color`, `size`, and `alpha`. These control the point color, size, and transparency, respectively. Transparency ranges from 0 (completely transparent) to 1 (completely opaque). Adding a degree of transparency can help visualize overlapping points.
```
# make points blue, larger, and semi-transparent
ggplot(data = insurance,
mapping = aes(x = age, y = expenses)) +
geom_point(color = "cornflowerblue",
alpha = .7,
size = 2)
```
Figure 3\.3: Modify point color, transparency, and size
Next, let’s add a line of best fit. We can do this with the `geom_smooth` function. Options control the type of line (linear, quadratic, nonparametric), the thickness of the line, the line’s color, and the presence or absence of a confidence interval. Here we request a linear regression (`method = lm`) line (where *lm* stands for linear model).
```
# add a line of best fit.
ggplot(data = insurance,
mapping = aes(x = age, y = expenses)) +
geom_point(color = "cornflowerblue",
alpha = .5,
size = 2) +
geom_smooth(method = "lm")
```
Figure 3\.4: Add line of best fit
Expenses appear to increase with age, but there is an unusual clustering of the points. We will find out why as we delve deeper into the data.
### 3\.1\.3 grouping
In addition to mapping variables to the *x* and *y* axes, variables can be mapped to the color, shape, size, transparency, and other visual characteristics of geometric objects. This allows groups of observations to be superimposed in a single graph.
Let’s add smoker status to the plot and represent it by color.
```
# indicate smoking status using color
ggplot(data = insurance,
mapping = aes(x = age,
y = expenses,
color = smoker)) +
geom_point(alpha = .5,
size = 2) +
geom_smooth(method = "lm",
se = FALSE,
size = 1.5)
```
Figure 3\.5: Include smoking status, using color
The `color = smoker` option is placed in the `aes` function because we are mapping a variable to an aesthetic (a visual characteristic of the graph). The `geom_smooth` option (`se = FALSE`) was added to suppress the confidence intervals.
It appears that smokers tend to incur greater expenses than non\-smokers (not a surprise).
### 3\.1\.4 scales
Scales control how variables are mapped to the visual characteristics of the plot. Scale functions (which start with `scale_`) allow you to modify this mapping. In the next plot, we’ll change the *x* and *y* axis scaling, and the colors employed.
```
# modify the x and y axes and specify the colors to be used
ggplot(data = insurance,
mapping = aes(x = age,
y = expenses,
color = smoker)) +
geom_point(alpha = .5,
size = 2) +
geom_smooth(method = "lm",
se = FALSE,
size = 1.5) +
scale_x_continuous(breaks = seq(0, 70, 10)) +
scale_y_continuous(breaks = seq(0, 60000, 20000),
label = scales::dollar) +
scale_color_manual(values = c("indianred3",
"cornflowerblue"))
```
Figure 3\.6: Change colors and axis labels
We’re getting there. Here is a question. Is the relationship between age, expenses and smoking the same for obese and non\-obese patients? Let’s repeat this graph once for each weight status in order to explore this.
### 3\.1\.5 facets
Facets reproduce a graph for each level of a given variable (or pair of variables). Facets are created using functions that start with `facet_`. Here, facets will be defined by the two levels of the *obese* variable.
```
# reproduce the plot for obese and non-obese individuals
ggplot(data = insurance,
mapping = aes(x = age,
y = expenses,
color = smoker)) +
geom_point(alpha = .5) +
geom_smooth(method = "lm",
se = FALSE) +
scale_x_continuous(breaks = seq(0, 70, 10)) +
scale_y_continuous(breaks = seq(0, 60000, 20000),
label = scales::dollar) +
scale_color_manual(values = c("indianred3",
"cornflowerblue")) +
facet_wrap(~obese)
```
Figure 3\.7: Add obesity status, using faceting
From Figure [3\.7](IntroGGPLOT.html#fig:insurance8) we can simultaneously visualize the relationships among age, smoking status, obesity, and annual medical expenses.
### 3\.1\.6 labels
Graphs should be easy to interpret and informative labels are a key element in achieving this goal. The `labs` function provides customized labels for the axes and legends. Additionally, a custom title, subtitle, and caption can be added.
```
# add informative labels
ggplot(data = insurance,
mapping = aes(x = age,
y = expenses,
color = smoker)) +
geom_point(alpha = .5) +
geom_smooth(method = "lm",
se = FALSE) +
scale_x_continuous(breaks = seq(0, 70, 10)) +
scale_y_continuous(breaks = seq(0, 60000, 20000),
label = scales::dollar) +
scale_color_manual(values = c("indianred3",
"cornflowerblue")) +
facet_wrap(~obese) +
labs(title = "Relationship between patient demographics and medical costs",
subtitle = "US Census Bureau 2013",
caption = "source: http://mosaic-web.org/",
x = " Age (years)",
y = "Annual expenses",
color = "Smoker?")
```
Figure 3\.8: Add informative titles and labels
Now a viewer doesn’t need to guess what the labels *expenses* and *age* mean, or where the data come from.
### 3\.1\.7 themes
Finally, we can fine tune the appearance of the graph using themes. Theme functions (which start with `theme_`) control background colors, fonts, grid\-lines, legend placement, and other non\-data related features of the graph. Let’s use a cleaner theme.
```
# use a minimalist theme
ggplot(data = insurance,
mapping = aes(x = age,
y = expenses,
color = smoker)) +
geom_point(alpha = .5) +
geom_smooth(method = "lm",
se = FALSE) +
scale_x_continuous(breaks = seq(0, 70, 10)) +
scale_y_continuous(breaks = seq(0, 60000, 20000),
label = scales::dollar) +
scale_color_manual(values = c("indianred3",
"cornflowerblue")) +
facet_wrap(~obese) +
labs(title = "Relationship between age and medical expenses",
subtitle = "US Census Data 2013",
caption = "source: https://github.com/dataspelunking/MLwR",
x = " Age (years)",
y = "Medical Expenses",
color = "Smoker?") +
theme_minimal()
```
Figure 3\.9: Use a simpler theme
Now we have something. From Figure [3\.9](IntroGGPLOT.html#fig:insurance10) it appears that:
* There is a positive linear relationship between age and expenses. The relationship is constant across smoking and obesity status (i.e., the slope doesn’t change).
* Smokers and obese patients have higher medical expenses.
* There is an interaction between smoking and obesity. Non\-smokers look fairly similar across obesity groups. However, for smokers, obese patients have much higher expenses.
* There are some very high outliers (large expenses) among the obese smoker group.
These findings are tentative. They are based on a limited sample size and do not involve statistical testing to assess whether differences may be due to chance variation.
3\.2 Placing the `data` and `mapping` options
---------------------------------------------
Plots created with ggplot2 always start with the `ggplot` function. In the examples above, the `data` and `mapping` options were placed in this function. In this case they apply to each `geom_` function that follows. You can also place these options directly within a `geom`. In that case, they apply only to that specific geom.
Consider the following graph.
```
# placing color mapping in the ggplot function
ggplot(insurance,
aes(x = age,
y = expenses,
color = smoker)) +
geom_point(alpha = .5,
size = 2) +
geom_smooth(method = "lm",
se = FALSE,
size = 1.5)
```
Figure 3\.10: Color mapping in ggplot function
Since the mapping of the variable smoker to color appears in the `ggplot` function, it applies to *both* `geom_point` and `geom_smooth`. The point color indicates the smoker status, and a separate colored trend line is produced for smokers and non\-smokers. Compare this to
```
# placing color mapping in the geom_point function
ggplot(insurance,
aes(x = age,
y = expenses)) +
geom_point(aes(color = smoker),
alpha = .5,
size = 2) +
geom_smooth(method = "lm",
se = FALSE,
size = 1.5)
```
Figure 3\.11: Color mapping in geom\_point function
Since the smoker to color mapping only appears in the `geom_point` function, it is only used there. A single trend line is created for all observations.
Most of the examples in this book place the data and mapping options in the `ggplot` function. Additionally, the phrases *data\=* and *mapping\=* are omitted since the first option always refers to data and the second option always refers to mapping.
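For example, the first scatterplot in this chapter can be written in that more compact style (equivalent to the fully spelled\-out version above, since the first argument of `ggplot` is always the data and the second is the mapping):
```
# compact form: data and mapping supplied positionally
ggplot(insurance, aes(x = age, y = expenses)) +
  geom_point()
```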
3\.3 Graphs as objects
----------------------
A ggplot2 graph can be saved as a named R object (like a data frame), manipulated further, and then printed or saved to disk.
```
# create scatterplot and save it
myplot <- ggplot(data = insurance,
aes(x = age, y = expenses)) +
geom_point()
# plot the graph
myplot
# make the points larger and blue
# then print the graph
myplot <- myplot + geom_point(size = 2, color = "blue")
myplot
# print the graph with a title and line of best fit
# but don't save those changes
myplot + geom_smooth(method = "lm") +
labs(title = "Mildly interesting graph")
# print the graph with a black and white theme
# but don't save those changes
myplot + theme_bw()
```
This can be a real time saver (and help you avoid carpal tunnel syndrome). It is also handy when [saving graphs](SavingGraphs.html#SavingGraphsProgrammatically) programmatically.
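For completeness, a saved graph object can also be written to disk with `ggsave`; a minimal sketch is shown below (the file name and dimensions are illustrative choices).
```
# write the current version of the plot to a file
# (file name and size are illustrative choices)
ggsave("insurance_scatter.png", plot = myplot, width = 6, height = 4)
```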
Now it’s time to apply what we’ve learned.
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/Univariate.html |
Chapter 4 Univariate Graphs
===========================
The first step in any comprehensive data analysis is to explore each important variable in turn. Univariate graphs plot the distribution of data from a single variable. The variable can be categorical (e.g., race, sex, political affiliation) or quantitative (e.g., age, weight, income).
The dataset [`Marriage`](Datasets.html#Marriage) contains the marriage records of 98 individuals in Mobile County, Alabama (see Appendix [A.5](Datasets.html#Marriage)). We’ll explore the distribution of three variables from this dataset \- the age and race of the wedding participants, and the occupation of the wedding officials.
4\.1 Categorical
----------------
The race of the participants and the occupation of the officials are both categorical variables. The distribution of a single categorical variable is typically plotted with a bar chart, a pie chart, or (less commonly) a tree map or waffle chart.
### 4\.1\.1 Bar chart
In Figure [4\.1](Univariate.html#fig:barchart1), a bar chart is used to display the distribution of wedding participants by race.
```
# simple bar chart
library(ggplot2)
data(Marriage, package = "mosaicData")
# plot the distribution of race
ggplot(Marriage, aes(x = race)) +
geom_bar()
```
Figure 4\.1: Simple barchart
The majority of participants are white, followed by black, with very few Hispanics or American Indians.
You can modify the bar fill and border colors, plot labels, and title by adding options to the `geom_bar` function. In ggplot2, the `fill` parameter is used to specify the color of areas such as bars, rectangles, and polygons. The `color` parameter specifies the color of objects that technically do not have an area, such as points, lines, and borders.
```
# plot the distribution of race with modified colors and labels
ggplot(Marriage, aes(x=race)) +
geom_bar(fill = "cornflowerblue",
color="black") +
labs(x = "Race",
y = "Frequency",
title = "Participants by race")
```
Figure 4\.2: Barchart with modified colors, labels, and title
#### 4\.1\.1\.1 Percents
Bars can represent percents rather than counts. For bar charts, the code `aes(x=race)` is actually a shortcut for `aes(x = race, y = after_stat(count))`, where `count` is a special variable representing the frequency within each category. You can use this to calculate percentages by specifying the `y` variable explicitly.
```
# plot the distribution as percentages
ggplot(Marriage,
aes(x = race, y = after_stat(count/sum(count)))) +
geom_bar() +
labs(x = "Race",
y = "Percent",
title = "Participants by race") +
scale_y_continuous(labels = scales::percent)
```
Figure 4\.3: Barchart with percentages
In the code above, the `scales` package is used to add % symbols to the *y*\-axis labels.
#### 4\.1\.1\.2 Sorting categories
It is often helpful to sort the bars by frequency. In the code below, the frequencies are calculated explicitly. Then the `reorder` function is used to sort the categories by the frequency. The option `stat="identity"` tells the plotting function not to calculate counts, because they are supplied directly.
```
# calculate number of participants in each race category
library(dplyr)
plotdata <- Marriage %>%
count(race)
```
The resulting dataset is given below.
Table 4\.1: plotdata
| race | n |
| --- | --- |
| American Indian | 1 |
| Black | 22 |
| Hispanic | 1 |
| White | 74 |
This new dataset is then used to create the graph.
```
# plot the bars in ascending order
ggplot(plotdata,
aes(x = reorder(race, n), y = n)) +
geom_bar(stat="identity") +
labs(x = "Race",
y = "Frequency",
title = "Participants by race")
```
Figure 4\.4: Sorted bar chart
The graph bars are sorted in ascending order. Use `reorder(race, -n)` to sort in descending order.
#### 4\.1\.1\.3 Labeling bars
Finally, you may want to label each bar with its numerical value.
```
# plot the bars with numeric labels
ggplot(plotdata,
aes(x = race, y = n)) +
geom_bar(stat="identity") +
geom_text(aes(label = n), vjust=-0.5) +
labs(x = "Race",
y = "Frequency",
title = "Participants by race")
```
Figure 4\.5: Bar chart with numeric labels
Here `geom_text` adds the labels, and `vjust` controls vertical justification. See [Annotations](Customizing.html#Annotations) (Section [11\.7](Customizing.html#Annotations)) for more details.
Putting these ideas together, you can create a graph like the one below. The minus sign in `reorder(race, -pct)` is used to order the bars in descending order.
```
library(dplyr)
library(scales)
plotdata <- Marriage %>%
count(race) %>%
mutate(pct = n / sum(n),
pctlabel = paste0(round(pct*100), "%"))
# plot the bars as percentages,
# in descending order with bar labels
ggplot(plotdata,
aes(x = reorder(race, -pct), y = pct)) +
geom_bar(stat="identity", fill="indianred3", color="black") +
geom_text(aes(label = pctlabel), vjust=-0.25) +
scale_y_continuous(labels = percent) +
labs(x = "Race",
y = "Percent",
title = "Participants by race")
```
Figure 4\.6: Sorted bar chart with percent labels
#### 4\.1\.1\.4 Overlapping labels
Category labels may overlap if (1\) there are many categories or (2\) the labels are long. Consider the distribution of marriage officials.
```
# basic bar chart with overlapping labels
ggplot(Marriage, aes(x=officialTitle)) +
geom_bar() +
labs(x = "Officiate",
y = "Frequency",
title = "Marriages by officiate")
```
Figure 4\.7: Barchart with problematic labels
In this case, you can flip the x and y axes with the `coord_flip` function.
```
# horizontal bar chart
ggplot(Marriage, aes(x = officialTitle)) +
geom_bar() +
labs(x = "",
y = "Frequency",
title = "Marriages by officiate") +
coord_flip()
```
Figure 4\.8: Horizontal barchart
Alternatively, you can rotate the axis labels.
```
# bar chart with rotated labels
ggplot(Marriage, aes(x=officialTitle)) +
geom_bar() +
labs(x = "",
y = "Frequency",
title = "Marriages by officiate") +
theme(axis.text.x = element_text(angle = 45,
hjust = 1))
```
Figure 4\.9: Barchart with rotated labels
Finally, you can try staggering the labels. The trick is to add a newline `\n` to every other label.
```
# bar chart with staggered labels
lbls <- paste0(c("","\n"), levels(Marriage$officialTitle))
ggplot(Marriage,
aes(x=factor(officialTitle,
labels = lbls))) +
geom_bar() +
labs(x = "",
y = "Frequency",
title = "Marriages by officiate")
```
Figure 4\.10: Barchart with staggered labels
In general, I recommend trying not to rotate axis labels. It places a greater cognitive demand on the end user (i.e., it is harder to read!).
### 4\.1\.2 Pie chart
Pie charts are controversial in statistics. If your goal is to compare the frequency of categories, you are better off with bar charts (humans are better at judging the length of bars than the volume of pie slices). If your goal is to compare each category with the whole (e.g., what portion of participants are Hispanic compared to all participants), and the number of categories is small, then pie charts may work for you.
Pie charts are easily created with the `ggpie` function in the **ggpie** package. The format is `ggpie(data, variable)`, where *data* is a data frame, and *variable* is the categorical variable to be plotted.
```
# create a basic ggplot2 pie chart
library(ggpie)
ggpie(Marriage, race)
```
Figure 4\.11: Basic pie chart with legend
The `ggpie` function has many options, as described on the package homepage (<http://rkabacoff.github.io/ggpie>). For example, to place the labels within the pie, set `legend = FALSE`. A title can be added with the `title` option.
```
# create a pie chart with slice labels within figure
ggpie(Marriage, race, legend = FALSE, title = "Participants by race")
```
Figure 4\.12: Pie chart with percent labels
The pie chart makes it easy to compare each slice with the whole. For example, roughly a quarter of the total participants are Black.
### 4\.1\.3 Tree map
An alternative to a pie chart is a tree map. Unlike pie charts, it can handle categorical variables that have *many* levels.
```
library(treemapify)
# create a treemap of marriage officials
plotdata <- Marriage %>%
count(officialTitle)
ggplot(plotdata,
aes(fill = officialTitle, area = n)) +
geom_treemap() +
labs(title = "Marriages by officiate")
```
Figure 4\.13: Basic treemap
Here is a more useful version with labels.
```
# create a treemap with tile labels
ggplot(plotdata,
aes(fill = officialTitle,
area = n,
label = officialTitle)) +
geom_treemap() +
geom_treemap_text(colour = "white",
place = "centre") +
labs(title = "Marriages by officiate") +
theme(legend.position = "none")
```
Figure 4\.14: Treemap with labels
The treemapify package offers many options for customization. See <https://wilkox.org/treemapify/> for details.
### 4\.1\.4 Waffle chart
A waffle chart, also known as a gridplot or square pie chart, represents observations as squares in a rectangular grid, where each cell represents a percentage of the whole. You can create a ggplot2 waffle chart using the `geom_waffle` function in the **waffle** package.
Let’s create a waffle chart for the professions of wedding officiates. As with tree maps, start by summarizing the data into groups and counts.
```
library(dplyr)
plotdata <- Marriage %>%
count(officialTitle)
```
Next create the ggplot2 graph. Set *fill* to the grouping variable and *values* to the counts. Don’t specify an *x* and *y*.
> Note: The na.rm parameter in the geom\_waffle function indicates whether missing values should be deleted. At time of this writing, there is a bug in the function. The default for the na.rm parameter is NA, but it actually must be either TRUE or FALSE. Specifying one or the other eliminates the error.
The following code produces the default waffle plot.
```
# create a basic waffle chart
library(waffle)
ggplot(plotdata, aes(fill = officialTitle, values=n)) +
geom_waffle(na.rm=TRUE)
```
Figure 4\.15: Basic waffle chart
Next, we’ll customize the graph by
* specifying the number of rows and cell sizes and setting borders around the cells to “white” (`geom_waffle`)
* changing the color scheme to “Spectral” (`scale_fill_brewer`)
* ensuring that the cells are squares and not rectangles (`coord_equal`)
* simplifying the theme (the `theme` functions)
* modifying the title and adding a caption with the scale (`labs`)
```
# Create a customized caption
cap <- paste0("1 square = ", ceiling(sum(plotdata$n)/100),
" case(s).")
library(waffle)
ggplot(plotdata, aes(fill = officialTitle, values=n)) +
geom_waffle(na.rm=TRUE,
n_rows = 10,
size = .4,
color = "white") +
scale_fill_brewer(palette = "Spectral") +
coord_equal() +
theme_minimal() +
theme_enhance_waffle() +
theme(legend.title = element_blank()) +
labs(title = "Proportion of Wedding Officials",
caption = cap)
```
Figure 4\.16: Customized waffle chart
While new to R, waffle charts are becoming increasingly popular.
4\.2 Quantitative
-----------------
In the [Marriage](Datasets.html#Marriage) dataset, age is a quantitative variable. The distribution of a single quantitative variable is typically plotted with a histogram, kernel density plot, or dot plot.
### 4\.2\.1 Histogram
Histograms are the most common approach to visualizing a quantitative variable. In a histogram, the values of a variable are typically divided up into adjacent, equal width ranges (called *bins*), and the number of observations in each bin is plotted with a vertical bar.
```
library(ggplot2)
# plot the age distribution using a histogram
ggplot(Marriage, aes(x = age)) +
geom_histogram() +
labs(title = "Participants by age",
x = "Age")
```
Figure 4\.17: Basic histogram
Most participants appear to be in their early 20’s with another group in their 40’s, and a much smaller group in their late sixties and early seventies. This would be a *multimodal* distribution.
Histogram colors can be modified using two options
* `fill` \- fill color for the bars
* `color` \- border color around the bars
```
# plot the histogram with blue bars and white borders
ggplot(Marriage, aes(x = age)) +
geom_histogram(fill = "cornflowerblue",
color = "white") +
labs(title="Participants by age",
x = "Age")
```
Figure 4\.18: Histogram with specified fill and border colors
#### 4\.2\.1\.1 Bins and bandwidths
One of the most important histogram options is `bins`, which controls the number of bins into which the numeric variable is divided (i.e., the number of bars in the plot). The default is 30, but it is helpful to try smaller and larger numbers to get a better impression of the shape of the distribution.
```
# plot the histogram with 20 bins
ggplot(Marriage, aes(x = age)) +
geom_histogram(fill = "cornflowerblue",
color = "white",
bins = 20) +
labs(title="Participants by age",
subtitle = "number of bins = 20",
x = "Age")
```
Figure 4\.19: Histogram with a specified number of bins
Alternatively, you can specify the `binwidth`, the width of the bins represented by the bars.
```
# plot the histogram with a binwidth of 5
ggplot(Marriage, aes(x = age)) +
geom_histogram(fill = "cornflowerblue",
color = "white",
binwidth = 5) +
labs(title="Participants by age",
subtitle = "binwidth = 5 years",
x = "Age")
```
Figure 4\.20: Histogram with a specified bin width
As with bar charts, the *y*\-axis can represent counts or percent of the total.
```
# plot the histogram with percentages on the y-axis
library(scales)
ggplot(Marriage,
aes(x = age, y= after_stat(count/sum(count)))) +
geom_histogram(fill = "cornflowerblue",
color = "white",
binwidth = 5) +
labs(title="Participants by age",
y = "Percent",
x = "Age") +
scale_y_continuous(labels = percent)
```
Figure 4\.21: Histogram with percentages on the y\-axis
### 4\.2\.2 Kernel Density plot
An alternative to a histogram is the kernel density plot. Technically, kernel density estimation is a nonparametric method for estimating the probability density function of a continuous random variable (what??). Basically, we are trying to draw a smoothed histogram, where the area under the curve equals one.
```
# Create a kernel density plot of age
ggplot(Marriage, aes(x = age)) +
geom_density() +
labs(title = "Participants by age")
```
Figure 4\.22: Basic kernel density plot
The graph shows the distribution of scores. For example, the proportion of cases between 20 and 40 years old would be represented by the area under the curve between 20 and 40 on the x\-axis.
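As a rough check of that interpretation, the area can be approximated by numerically integrating the default density estimate and comparing it with the observed proportion of cases in that age range (an illustrative calculation, not part of the original example).
```
# approximate the area under the density curve between ages 20 and 40
d <- density(Marriage$age)
in_range <- d$x >= 20 & d$x <= 40
sum(d$y[in_range]) * diff(d$x[1:2])            # area under the curve
mean(Marriage$age >= 20 & Marriage$age <= 40)  # observed proportion
```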
As with previous charts, we can use `fill` and `color` to specify the fill and border colors.
```
# Create a kernel density plot of age
ggplot(Marriage, aes(x = age)) +
geom_density(fill = "indianred3") +
labs(title = "Participants by age")
```
Figure 4\.23: Kernel density plot with fill
#### 4\.2\.2\.1 Smoothing parameter
The degree of smoothness is controlled by the bandwidth parameter `bw`. To find the default value for a particular variable, use the `bw.nrd0` function. Values that are larger will result in more smoothing, while values that are smaller will produce less smoothing.
```
# default bandwidth for the age variable
bw.nrd0(Marriage$age)
```
```
## [1] 5.181946
```
```
# Create a kernel density plot of age
ggplot(Marriage, aes(x = age)) +
geom_density(fill = "deepskyblue",
bw = 1) +
labs(title = "Participants by age",
subtitle = "bandwidth = 1")
```
Figure 4\.24: Kernel density plot with a specified bandwidth
In this example, the default bandwidth for age is 5\.18\. Choosing a value of 1 resulted in less smoothing and more detail.
Kernel density plots allow you to easily see which scores are most frequent and which are relatively rare. However, it can be difficult to explain what the *y*\-axis means to a non\-statistician. (But it will make you look really smart at parties!)
### 4\.2\.3 Dot Chart
Another alternative to the histogram is the dot chart. Again, the quantitative variable is divided into bins, but rather than summary bars, each observation is represented by a dot. By default, the width of a dot corresponds to the bin width, and dots are stacked, with each dot representing one observation. This works best when the number of observations is small (say, less than 150\).
```
# plot the age distribution using a dotplot
ggplot(Marriage, aes(x = age)) +
geom_dotplot() +
labs(title = "Participants by age",
y = "Proportion",
x = "Age")
```
Figure 4\.25: Basic dotplot
The `fill` and `color` options can be used to specify the fill and border color of each dot respectively.
```
# Plot ages as a dot plot using
# gold dots with black borders
ggplot(Marriage, aes(x = age)) +
geom_dotplot(fill = "gold",
color="black") +
labs(title = "Participants by age",
y = "Proportion",
x = "Age")
```
Figure 4\.26: Dotplot with a specified color scheme
There are many more options available. See [`?geom_dotplot`](http://ggplot2.tidyverse.org/reference/geom_dotplot.html) for details and examples.
4\.1 Categorical
----------------
The race of the participants and the occupation of the officials are both categorical variables.The distribution of a single categorical variable is typically plotted with a bar chart, a pie chart, or (less commonly) a tree map or waffle chart.
### 4\.1\.1 Bar chart
In Figure [4\.1](Univariate.html#fig:barchart1), a bar chart is used to display the distribution of wedding participants by race.
```
# simple bar chart
library(ggplot2)
data(Marriage, package = "mosaicData")
# plot the distribution of race
ggplot(Marriage, aes(x = race)) +
geom_bar()
```
Figure 4\.1: Simple barchart
The majority of participants are white, followed by black, with very few Hispanics or American Indians.
You can modify the bar fill and border colors, plot labels, and title by adding options to the `geom_bar` function. In ggplot2, the `fill` parameter is used to specify the color of areas such as bars, rectangles, and polygons. The `color` parameter specifies the color objects that technically do not have an area, such as points, lines, and borders.
```
# plot the distribution of race with modified colors and labels
ggplot(Marriage, aes(x=race)) +
geom_bar(fill = "cornflowerblue",
color="black") +
labs(x = "Race",
y = "Frequency",
title = "Participants by race")
```
Figure 4\.2: Barchart with modified colors, labels, and title
#### 4\.1\.1\.1 Percents
Bars can represent percents rather than counts. For bar charts, the code `aes(x=race)` is actually a shortcut for `aes(x = race, y = after_stat(count))`, where `count` is a special variable representing the frequency within each category. You can use this to calculate percentages, by specifying `y` variable explicitly.
```
# plot the distribution as percentages
ggplot(Marriage,
aes(x = race, y = after_stat(count/sum(count)))) +
geom_bar() +
labs(x = "Race",
y = "Percent",
title = "Participants by race") +
scale_y_continuous(labels = scales::percent)
```
Figure 4\.3: Barchart with percentages
In the code above, the `scales` package is used to add % symbols to the *y*\-axis labels.
#### 4\.1\.1\.2 Sorting categories
It is often helpful to sort the bars by frequency. In the code below, the frequencies are calculated explicitly. Then the `reorder` function is used to sort the categories by the frequency. The option `stat="identity"` tells the plotting function not to calculate counts, because they are supplied directly.
```
# calculate number of participants in each race category
library(dplyr)
plotdata <- Marriage %>%
count(race)
```
The resulting dataset is give below.
Table 4\.1: plotdata
| race | n |
| --- | --- |
| American Indian | 1 |
| Black | 22 |
| Hispanic | 1 |
| White | 74 |
This new dataset is then used to create the graph.
```
# plot the bars in ascending order
ggplot(plotdata,
aes(x = reorder(race, n), y = n)) +
geom_bar(stat="identity") +
labs(x = "Race",
y = "Frequency",
title = "Participants by race")
```
Figure 4\.4: Sorted bar chart
The graph bars are sorted in ascending order. Use `reorder(race, -n)` to sort in descending order.
#### 4\.1\.1\.3 Labeling bars
Finally, you may want to label each bar with its numerical value.
```
# plot the bars with numeric labels
ggplot(plotdata,
aes(x = race, y = n)) +
geom_bar(stat="identity") +
geom_text(aes(label = n), vjust=-0.5) +
labs(x = "Race",
y = "Frequency",
title = "Participants by race")
```
Figure 4\.5: Bar chart with numeric labels
Here `geom_text` adds the labels, and `vjust` controls vertical justification. See [Annotations](Customizing.html#Annotations) (Section [11\.7](Customizing.html#Annotations)) for more details.
Putting these ideas together, you can create a graph like the one below. The minus sign in `reorder(race, -pct)` is used to order the bars in descending order.
```
library(dplyr)
library(scales)
plotdata <- Marriage %>%
count(race) %>%
mutate(pct = n / sum(n),
pctlabel = paste0(round(pct*100), "%"))
# plot the bars as percentages,
# in decending order with bar labels
ggplot(plotdata,
aes(x = reorder(race, -pct), y = pct)) +
geom_bar(stat="identity", fill="indianred3", color="black") +
geom_text(aes(label = pctlabel), vjust=-0.25) +
scale_y_continuous(labels = percent) +
labs(x = "Race",
y = "Percent",
title = "Participants by race")
```
Figure 4\.6: Sorted bar chart with percent labels
#### 4\.1\.1\.4 Overlapping labels
Category labels may overlap if (1\) there are many categories or (2\) the labels are long. Consider the distribution of marriage officials.
```
# basic bar chart with overlapping labels
ggplot(Marriage, aes(x=officialTitle)) +
geom_bar() +
labs(x = "Officiate",
y = "Frequency",
title = "Marriages by officiate")
```
Figure 4\.7: Barchart with problematic labels
In this case, you can flip the x and y axes with the `coord_flip` function.
```
# horizontal bar chart
ggplot(Marriage, aes(x = officialTitle)) +
geom_bar() +
labs(x = "",
y = "Frequency",
title = "Marriages by officiate") +
coord_flip()
```
Figure 4\.8: Horizontal barchart
Alternatively, you can rotate the axis labels.
```
# bar chart with rotated labels
ggplot(Marriage, aes(x=officialTitle)) +
geom_bar() +
labs(x = "",
y = "Frequency",
title = "Marriages by officiate") +
theme(axis.text.x = element_text(angle = 45,
hjust = 1))
```
Figure 4\.9: Barchart with rotated labels
Finally, you can try staggering the labels. The trick is to add a newline `\n` to every other label.
```
# bar chart with staggered labels
lbls <- paste0(c("","\n"), levels(Marriage$officialTitle))
ggplot(Marriage,
aes(x=factor(officialTitle,
labels = lbls))) +
geom_bar() +
labs(x = "",
y = "Frequency",
title = "Marriages by officiate")
```
Figure 4\.10: Barchart with staggered labels
In general, I recommend trying not to rotate axis labels. It places a greater cognitive demand on the end user (i.e., it is harder to read!).
### 4\.1\.2 Pie chart
Pie charts are controversial in statistics. If your goal is to compare the frequency of categories, you are better off with bar charts (humans are better at judging the length of bars than the volume of pie slices). If your goal is compare each category with the the whole (e.g., what portion of participants are Hispanic compared to all participants), and the number of categories is small, then pie charts may work for you.
Pie charts are easily created with `ggpie` function in the **ggpie** package. The format is `ggpie(data, variable)`, where *data* is a data frame, and *variable* is the categorical variable to be plotted.
```
# create a basic ggplot2 pie chart
library(ggpie)
ggpie(Marriage, race)
```
Figure 4\.11: Basic pie chart with legend
The `ggpie` function has many option, as described in package homepage (<http://rkabacoff.github.io/ggpie>). For example to place the labels within the pie, set `legend = FALSE`. A title can be added with the `title` option.
```
# create a pie chart with slice labels within figure
ggpie(Marriage, race, legend = FALSE, title = "Participants by race")
```
Figure 4\.12: Pie chart with percent labels
The pie chart makes it easy to compare each slice with the whole. For example, roughly a quarter of the total participants are Black.
### 4\.1\.3 Tree map
An alternative to a pie chart is a tree map. Unlike pie charts, it can handle categorical variables that have *many* levels.
```
library(treemapify)
# create a treemap of marriage officials
plotdata <- Marriage %>%
count(officialTitle)
ggplot(plotdata,
aes(fill = officialTitle, area = n)) +
geom_treemap() +
labs(title = "Marriages by officiate")
```
Figure 4\.13: Basic treemap
Here is a more useful version with labels.
```
# create a treemap with tile labels
ggplot(plotdata,
aes(fill = officialTitle,
area = n,
label = officialTitle)) +
geom_treemap() +
geom_treemap_text(colour = "white",
place = "centre") +
labs(title = "Marriages by officiate") +
theme(legend.position = "none")
```
Figure 4\.14: Treemap with labels
The treemapify package offers many options for customization. See <https://wilkox.org/treemapify/> for details.
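For instance, here is a minimal sketch of one such option, the `reflow` argument of `geom_treemap_text`, which wraps long tile labels (treated here as an assumption; check the package documentation for details):
```
# treemap with wrapped tile labels (reflow is assumed; see the package docs)
ggplot(plotdata,
       aes(fill = officialTitle,
           area = n,
           label = officialTitle)) +
  geom_treemap() +
  geom_treemap_text(colour = "white",
                    place = "centre",
                    reflow = TRUE) +
  labs(title = "Marriages by officiate") +
  theme(legend.position = "none")
```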
### 4\.1\.4 Waffle chart
A waffle chart, also known as a gridplot or square pie chart, represents observations as squares in a rectangular grid, where each cell represents a percentage of the whole. You can create a ggplot2 waffle chart using the `geom_waffle` function in the **waffle** package.
Let’s create a waffle chart for the professions of wedding officiates. As with tree maps, start by summarizing the data into groups and counts.
```
library(dplyr)
plotdata <- Marriage %>%
count(officialTitle)
```
Next create the ggplot2 graph. Set *fill* to the grouping variable and *values* to the counts. Don’t specify an *x* and *y*.
> Note: The na.rm parameter in the geom\_waffle function indicates whether missing values should be deleted. At the time of this writing, there is a bug in the function: the default for the na.rm parameter is NA, but it must actually be either TRUE or FALSE. Specifying one or the other eliminates the error.
The following code produces the default waffle plot.
```
# create a basic waffle chart
library(waffle)
ggplot(plotdata, aes(fill = officialTitle, values=n)) +
geom_waffle(na.rm=TRUE)
```
Figure 4\.15: Basic waffle chart
Next, we’ll customize the graph by
* specifying the number of rows and cell sizes, and setting the borders around the cells to “white” (`geom_waffle`)
* changing the color scheme to “Spectral” (`scale_fill_brewer`)
* ensuring that the cells are squares rather than rectangles (`coord_equal`)
* simplifying the theme (the `theme` functions)
* modifying the title and adding a caption with the scale (`labs`)
```
# Create a customized caption
cap <- paste0("1 square = ", ceiling(sum(plotdata$n)/100),
" case(s).")
library(waffle)
ggplot(plotdata, aes(fill = officialTitle, values=n)) +
geom_waffle(na.rm=TRUE,
n_rows = 10,
size = .4,
color = "white") +
scale_fill_brewer(palette = "Spectral") +
coord_equal() +
theme_minimal() +
theme_enhance_waffle() +
theme(legend.title = element_blank()) +
labs(title = "Proportion of Wedding Officials",
caption = cap)
```
Figure 4\.16: Customized waffle chart
While new to R, waffle charts are becoming increasingly popular.
4\.2 Quantitative
-----------------
In the [Marriage](Datasets.html#Marriage) dataset, age is a quantitative variable. The distribution of a single quantitative variable is typically plotted with a histogram, kernel density plot, or dot plot.
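Before plotting, it can be helpful to glance at a numeric summary of the variable. A minimal sketch:
```
# numeric summary of age
data(Marriage, package = "mosaicData")
summary(Marriage$age)
```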
### 4\.2\.1 Histogram
Histograms are the most common approach to visualizing a quantitative variable. In a histogram, the values of a variable are typically divided up into adjacent, equal width ranges (called *bins*), and the number of observations in each bin is plotted with a vertical bar.
```
library(ggplot2)
# plot the age distribution using a histogram
ggplot(Marriage, aes(x = age)) +
geom_histogram() +
labs(title = "Participants by age",
x = "Age")
```
Figure 4\.17: Basic histogram
Most participants appear to be in their early 20’s with another group in their 40’s, and a much smaller group in their late sixties and early seventies. This would be a *multimodal* distribution.
Histogram colors can be modified using two options
* `fill` \- fill color for the bars
* `color` \- border color around the bars
```
# plot the histogram with blue bars and white borders
ggplot(Marriage, aes(x = age)) +
geom_histogram(fill = "cornflowerblue",
color = "white") +
labs(title="Participants by age",
x = "Age")
```
Figure 4\.18: Histogram with specified fill and border colors
#### 4\.2\.1\.1 Bins and bandwidths
One of the most important histogram options is `bins`, which controls the number of bins into which the numeric variable is divided (i.e., the number of bars in the plot). The default is 30, but it is helpful to try smaller and larger numbers to get a better impression of the shape of the distribution.
```
# plot the histogram with 20 bins
ggplot(Marriage, aes(x = age)) +
geom_histogram(fill = "cornflowerblue",
color = "white",
bins = 20) +
labs(title="Participants by age",
subtitle = "number of bins = 20",
x = "Age")
```
Figure 4\.19: Histogram with a specified number of bins
Alternatively, you can specify the `binwidth`, the width of the bins represented by the bars.
```
# plot the histogram with a binwidth of 5
ggplot(Marriage, aes(x = age)) +
geom_histogram(fill = "cornflowerblue",
color = "white",
binwidth = 5) +
labs(title="Participants by age",
subtitle = "binwidth = 5 years",
x = "Age")
```
Figure 4\.20: Histogram with a specified bin width
As with bar charts, the *y*\-axis can represent counts or percent of the total.
```
# plot the histogram with percentages on the y-axis
library(scales)
ggplot(Marriage,
aes(x = age, y= after_stat(count/sum(count)))) +
geom_histogram(fill = "cornflowerblue",
color = "white",
binwidth = 5) +
labs(title="Participants by age",
y = "Percent",
x = "Age") +
scale_y_continuous(labels = percent)
```
Figure 4\.21: Histogram with percentages on the y\-axis
### 4\.2\.2 Kernel Density plot
An alternative to a histogram is the kernel density plot. Technically, kernel density estimation is a nonparametric method for estimating the probability density function of a continuous random variable (what??). Basically, we are trying to draw a smoothed histogram, where the area under the curve equals one.
```
# Create a kernel density plot of age
ggplot(Marriage, aes(x = age)) +
geom_density() +
labs(title = "Participants by age")
```
Figure 4\.22: Basic kernel density plot
The graph shows the distribution of scores. For example, the proportion of cases between 20 and 40 years old would be represented by the area under the curve between 20 and 40 on the x\-axis.
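As a rough check on this interpretation, you can compute the corresponding empirical proportion directly. A minimal sketch:
```
# proportion of participants between 20 and 40 years old
mean(Marriage$age >= 20 & Marriage$age <= 40)
```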
As with previous charts, we can use `fill` and `color` to specify the fill and border colors.
```
# Create a kernel density plot of age
ggplot(Marriage, aes(x = age)) +
geom_density(fill = "indianred3") +
labs(title = "Participants by age")
```
Figure 4\.23: Kernel density plot with fill
#### 4\.2\.2\.1 Smoothing parameter
The degree of smoothness is controlled by the bandwidth parameter `bw`. To find the default value for a particular variable, use the `bw.nrd0` function. Values that are larger will result in more smoothing, while values that are smaller will produce less smoothing.
```
# default bandwidth for the age variable
bw.nrd0(Marriage$age)
```
```
## [1] 5.181946
```
```
# Create a kernel density plot of age
ggplot(Marriage, aes(x = age)) +
geom_density(fill = "deepskyblue",
bw = 1) +
labs(title = "Participants by age",
subtitle = "bandwidth = 1")
```
Figure 4\.24: Kernel density plot with a specified bandwidth
In this example, the default bandwidth for age is 5\.18\. Choosing a value of 1 resulted in less smoothing and more detail.
Kernel density plots allow you to easily see which scores are most frequent and which are relatively rare. However, it can be difficult to explain the meaning of the *y*\-axis to a non\-statistician. (But it will make you look really smart at parties!)
### 4\.2\.3 Dot Chart
Another alternative to the histogram is the dot chart. Again, the quantitative variable is divided into bins, but rather than summary bars, each observation is represented by a dot. By default, the width of a dot corresponds to the bin width, and dots are stacked, with each dot representing one observation. This works best when the number of observations is small (say, less than 150\).
```
# plot the age distribution using a dotplot
ggplot(Marriage, aes(x = age)) +
geom_dotplot() +
labs(title = "Participants by age",
y = "Proportion",
x = "Age")
```
Figure 4\.25: Basic dotplot
The `fill` and `color` options can be used to specify the fill and border color of each dot respectively.
```
# Plot ages as a dot plot using
# gold dots with black borders
ggplot(Marriage, aes(x = age)) +
geom_dotplot(fill = "gold",
color="black") +
labs(title = "Participants by age",
y = "Proportion",
x = "Age")
```
Figure 4\.26: Dotplot with a specified color scheme
There are many more options available. See [`?geom_dotplot`](http://ggplot2.tidyverse.org/reference/geom_dotplot.html) for details and examples.
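For example, here is a minimal sketch using the `binwidth` and `dotsize` arguments to control the bin width and the relative dot size:
```
# dotplot with a narrower bin width and smaller dots
ggplot(Marriage, aes(x = age)) +
  geom_dotplot(fill = "gold",
               color = "black",
               binwidth = 2,
               dotsize = 0.8) +
  labs(title = "Participants by age",
       y = "Proportion",
       x = "Age")
```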
Chapter 5 Bivariate Graphs
==========================
One of the most fundamental questions in research is *“What is the relationship between A and B?”*. Bivariate graphs display the relationship between two variables. The type of graph will depend on the measurement level of each variable (categorical or quantitative).
5\.1 Categorical vs. Categorical
--------------------------------
When plotting the relationship between two categorical variables, stacked, grouped, or segmented bar charts are typically used. A less common approach is the [mosaic](Models.html#Mosaic) chart (section [9\.5](Models.html#Mosaic)).
In this section, we will look at automobile characteristics contained in the [mpg](Datasets.html#MPG) dataset that comes with the **ggplot2** package. It provides fuel efficiency data for 38 popular car models in 1998 and 2008 (see Appendix [A.6](Datasets.html#MPG)).
### 5\.1\.1 Stacked bar chart
Let’s examine the relationship between automobile class and drive type (front\-wheel, rear\-wheel, or 4\-wheel drive) for the automobiles in the [mpg](Datasets.html#MPG) dataset.
```
library(ggplot2)
# stacked bar chart
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = "stack")
```
Figure 5\.1: Stacked bar chart
From Figure [5\.1](Bivariate.html#fig:stackedbar) we can see, for example, that the most common vehicle is the SUV. All 2seater cars are rear\-wheel drive, while most, but not all, SUVs are 4\-wheel drive.
Stacked is the default, so the last line could have also been written as `geom_bar()`.
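In other words, the following minimal sketch produces the same stacked chart:
```
# stacking is the default position, so this is equivalent
ggplot(mpg, aes(x = class, fill = drv)) +
  geom_bar()
```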
### 5\.1\.2 Grouped bar chart
Grouped bar charts place bars for the second categorical variable side\-by\-side. To create a grouped bar plot use the `position = "dodge"` option.
```
library(ggplot2)
# grouped bar plot
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = "dodge")
```
Figure 5\.2: Side\-by\-side bar chart
Notice that all Minivans are front\-wheel drive. By default, zero count bars are dropped and the remaining bars are made wider. This may not be the behavior you want. You can modify this using the `position = position_dodge(preserve = "single")` option.
```
library(ggplot2)
# grouped bar plot preserving zero count bars
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = position_dodge(preserve = "single"))
```
Figure 5\.3: Side\-by\-side bar chart with zero count bars retained
Note that this option is only available in the later versions of `ggplot2`.
### 5\.1\.3 Segmented bar chart
A segmented bar plot is a stacked bar plot where each bar represents 100 percent. You can create a segmented bar chart using the `position = "fill"` option.
```
library(ggplot2)
# bar plot, with each bar representing 100%
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = "fill") +
labs(y = "Proportion")
```
Figure 5\.4: Segmented bar chart
This type of plot is particularly useful if the goal is to compare the percentage of a category in one variable across each level of another variable. For example, the proportion of front\-wheel drive cars goes up as you move from compact, to midsize, to minivan.
### 5\.1\.4 Improving the color and labeling
You can use additional options to improve color and labeling. In the graph below
* `factor` modifies the order of the categories for the class variable and both the order and the labels for the drive variable
* `scale_y_continuous` modifies the y\-axis tick mark labels
* `labs` provides a title and changes the labels for the x and y axes and the legend
* `scale_fill_brewer` changes the fill color scheme
* `theme_minimal` removes the grey background and changes the grid color
```
library(ggplot2)
# bar plot, with each bar representing 100%,
# reordered bars, and better labels and colors
library(scales)
ggplot(mpg,
aes(x = factor(class,
levels = c("2seater", "subcompact",
"compact", "midsize",
"minivan", "suv", "pickup")),
fill = factor(drv,
levels = c("f", "r", "4"),
labels = c("front-wheel",
"rear-wheel",
"4-wheel")))) +
geom_bar(position = "fill") +
scale_y_continuous(breaks = seq(0, 1, .2),
label = percent) +
scale_fill_brewer(palette = "Set2") +
labs(y = "Percent",
fill="Drive Train",
x = "Class",
title = "Automobile Drive by Class") +
theme_minimal()
```
Figure 5\.5: Segmented bar chart with improved labeling and color
Each of these functions is discussed more fully in the section on [Customizing](Customizing.html#Customizing) graphs (see Section [11](Customizing.html#Customizing)).
Next, let’s add percent labels to each segment. First, we’ll create a summary dataset that has the necessary labels.
```
# create a summary dataset
library(dplyr)
plotdata <- mpg %>%
group_by(class, drv) %>%
summarize(n = n()) %>%
mutate(pct = n/sum(n),
lbl = scales::percent(pct))
plotdata
```
```
## # A tibble: 12 × 5
## # Groups: class [7]
## class drv n pct lbl
## <chr> <chr> <int> <dbl> <chr>
## 1 2seater r 5 1 100%
## 2 compact 4 12 0.255 26%
## 3 compact f 35 0.745 74%
## 4 midsize 4 3 0.0732 7%
## 5 midsize f 38 0.927 93%
## 6 minivan f 11 1 100%
## 7 pickup 4 33 1 100%
## 8 subcompact 4 4 0.114 11%
## 9 subcompact f 22 0.629 63%
## 10 subcompact r 9 0.257 26%
## 11 suv 4 51 0.823 82%
## 12 suv r 11 0.177 18%
```
Next, we’ll use this dataset and the `geom_text` function to add labels to each bar segment.
```
# create segmented bar chart
# adding labels to each segment
ggplot(plotdata,
aes(x = factor(class,
levels = c("2seater", "subcompact",
"compact", "midsize",
"minivan", "suv", "pickup")),
y = pct,
fill = factor(drv,
levels = c("f", "r", "4"),
labels = c("front-wheel",
"rear-wheel",
"4-wheel")))) +
geom_bar(stat = "identity",
position = "fill") +
scale_y_continuous(breaks = seq(0, 1, .2),
label = percent) +
geom_text(aes(label = lbl),
size = 3,
position = position_stack(vjust = 0.5)) +
scale_fill_brewer(palette = "Set2") +
labs(y = "Percent",
fill="Drive Train",
x = "Class",
title = "Automobile Drive by Class") +
theme_minimal()
```
Figure 5\.6: Segmented bar chart with value labeling
Now we have a graph that is easy to read and interpret.
### 5\.1\.5 Other plots
[Mosaic plots](Models.html#Mosaic) provide an alternative to stacked bar charts for displaying the relationship between categorical variables. They can also provide more sophisticated statistical information. See Section [9\.5](Models.html#Mosaic) for details.
5\.2 Quantitative vs. Quantitative
----------------------------------
The relationship between two quantitative variables is typically displayed using scatterplots and line graphs.
### 5\.2\.1 Scatterplot
The simplest display of two quantitative variables is a scatterplot, with each variable represented on an axis. Here, we will use the [Salaries](Datasets.html#Salaries) dataset described in Appendix [A.1](Datasets.html#Salaries). First, let’s plot experience (*yrs.since.phd*) vs. academic salary (*salary*) for college professors.
```
library(ggplot2)
data(Salaries, package="carData")
# simple scatterplot
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point()
```
Figure 5\.7: Simple scatterplot
As expected, salary tends to rise with experience, but the relationship may not be strictly linear. Note that salary appears to fall off after about 40 years of experience.
The `geom_point` function has options that can be used to change the
* `color` \- point color
* `size` \- point size
* `shape` \- point shape
* `alpha` \- point transparency. Transparency ranges from 0 (transparent) to 1 (opaque), and is a useful parameter when points overlap.
The functions `scale_x_continuous` and `scale_y_continuous` control the scaling on *x* and *y* axes respectively.
We can use these options and functions to create a more attractive scatterplot.
```
# enhanced scatter plot
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point(color="cornflowerblue",
size = 2,
alpha=.8) +
scale_y_continuous(label = scales::dollar,
limits = c(50000, 250000)) +
scale_x_continuous(breaks = seq(0, 60, 10),
limits=c(0, 60)) +
labs(x = "Years Since PhD",
y = "",
title = "Experience vs. Salary",
subtitle = "9-month salary for 2008-2009")
```
Figure 5\.8: Scatterplot with color, transparency, and axis scaling
Again, see [Customizing](Customizing.html#Customizing) graphs (Section [11](Customizing.html#Customizing)) for more details.
#### 5\.2\.1\.1 Adding best fit lines
It is often useful to summarize the relationship displayed in the scatterplot, using a best fit line. Many types of lines are supported, including linear, polynomial, and nonparametric (loess). By default, 95% confidence limits for these lines are displayed.
```
# scatterplot with linear fit line
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
geom_point(color= "steelblue") +
geom_smooth(method = "lm")
```
Figure 5\.9: Scatterplot with linear fit line
Clearly, salary increases with experience. However, there seems to be a dip at the right end, with professors who have significant experience earning lower salaries. A straight line does not capture this non\-linear effect. A line with a bend will fit better here.
A polynomial regression line provides a fit line of the form \[\hat{y} = \beta_{0} + \beta_{1}x + \beta_{2}x^{2} + \beta_{3}x^{3} + \beta_{4}x^{4} + \dots\]
Typically either a quadratic (one bend) or cubic (two bends) line is used. It is rarely necessary to use higher\-order (\>3) polynomials. Adding a quadratic fit line to the salary dataset produces the following result (a cubic sketch follows the quadratic example below).
```
# scatterplot with quadratic line of best fit
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
geom_point(color= "steelblue") +
geom_smooth(method = "lm",
formula = y ~ poly(x, 2),
color = "indianred3")
```
Figure 5\.10: Scatterplot with quadratic fit line
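For comparison, a cubic fit only changes the polynomial degree. A minimal sketch (this figure is not shown in the text):
```
# scatterplot with cubic line of best fit
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
  geom_point(color = "steelblue") +
  geom_smooth(method = "lm",
              formula = y ~ poly(x, 3),
              color = "indianred3")
```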
Finally, a smoothed nonparametric fit line can often provide a good picture of the relationship. The default in `ggplot2` is a [loess](https://www.ime.unicamp.br/~dias/loess.pdf) line, which stands for **lo**cally w**e**ighted **s**catterplot **s**moothing ([Cleveland 1979](#ref-RN5)).
```
# scatterplot with loess smoothed line
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
geom_point(color= "steelblue") +
geom_smooth(color = "tomato")
```
Figure 5\.11: Scatterplot with nonparametric fit line
You can suppress the confidence bands by including the option `se = FALSE`.
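For example, a minimal sketch of the loess fit with the confidence band suppressed:
```
# scatterplot with loess smoothed line and no confidence band
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
  geom_point(color = "steelblue") +
  geom_smooth(se = FALSE, color = "tomato")
```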
Here is a complete (and more attractive) plot.
```
# scatterplot with loess smoothed line
# and better labeling and color
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point(color="cornflowerblue",
size = 2,
alpha=.6) +
geom_smooth(size = 1.5,
color = "darkgrey") +
scale_y_continuous(label = scales::dollar,
limits=c(50000, 250000)) +
scale_x_continuous(breaks = seq(0, 60, 10),
limits=c(0, 60)) +
labs(x = "Years Since PhD",
y = "",
title = "Experience vs. Salary",
subtitle = "9-month salary for 2008-2009") +
theme_minimal()
```
Figure 5\.12: Scatterplot with nonparametric fit line
### 5\.2\.2 Line plot
When one of the two variables represents time, a line plot can be an effective method of displaying relationship. For example, the code below displays the relationship between time (*year*) and life expectancy (*lifeExp*) in the United States between 1952 and 2007\. The data comes from the [gapminder](Datasets.html#Gapminder) dataset (Appendix [A.8](Datasets.html#Gapminder)).
```
data(gapminder, package="gapminder")
# Select US cases
library(dplyr)
plotdata <- filter(gapminder, country == "United States")
# simple line plot
ggplot(plotdata, aes(x = year, y = lifeExp)) +
geom_line()
```
Figure 5\.13: Simple line plot
It is hard to read individual values in the graph above. In the next plot, we’ll add points as well.
```
# line plot with points
# and improved labeling
ggplot(plotdata, aes(x = year, y = lifeExp)) +
geom_line(size = 1.5,
color = "lightgrey") +
geom_point(size = 3,
color = "steelblue") +
labs(y = "Life Expectancy (years)",
x = "Year",
title = "Life expectancy changes over time",
subtitle = "United States (1952-2007)",
caption = "Source: http://www.gapminder.org/data/")
```
Figure 5\.14: Line plot with points and labels
Time\-dependent data is covered in more detail under [Time series](Time.html#Time) (Section [8](Time.html#Time)). Customizing line graphs is covered in [Customizing](Customizing.html#Customizing) graphs (Section [11](Customizing.html#Customizing)).
5\.3 Categorical vs. Quantitative
---------------------------------
When plotting the relationship between a categorical variable and a quantitative variable, a large number of graph types are available. These include bar charts using summary statistics, grouped kernel density plots, side\-by\-side box plots, side\-by\-side violin plots, mean/sem plots, ridgeline plots, and Cleveland plots. Each is considered in turn.
### 5\.3\.1 Bar chart (on summary statistics)
In previous sections, bar charts were used to display the number of cases by category for a [single variable](Univariate.html#Barchart) (Section [4\.1\.1](Univariate.html#Barchart)) or for [two variables](Bivariate.html#Categorical-Categorical) (Section [5\.1](Bivariate.html#Categorical-Categorical)). You can also use bar charts to display other summary statistics (e.g., means or medians) on a quantitative variable for each level of a categorical variable.
For example, the following graph displays the mean salary for a sample of university professors by their academic rank.
```
data(Salaries, package="carData")
# calculate mean salary for each rank
library(dplyr)
plotdata <- Salaries %>%
group_by(rank) %>%
summarize(mean_salary = mean(salary))
# plot mean salaries
ggplot(plotdata, aes(x = rank, y = mean_salary)) +
geom_bar(stat = "identity")
```
Figure 5\.15: Bar chart displaying means
We can make it more attractive with some options. In particular, the `factor` function modifies the labels for each rank, the `scale_y_continuous` function improves the y\-axis labeling, and the `geom_text` function adds the mean values to each bar.
```
# plot mean salaries in a more attractive fashion
library(scales)
ggplot(plotdata,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = mean_salary)) +
geom_bar(stat = "identity",
fill = "cornflowerblue") +
geom_text(aes(label = dollar(mean_salary)),
vjust = -0.25) +
scale_y_continuous(breaks = seq(0, 130000, 20000),
label = dollar) +
labs(title = "Mean Salary by Rank",
subtitle = "9-month academic salary for 2008-2009",
x = "",
y = "")
```
Figure 5\.16: Bar chart displaying means
The `vjust` parameter in the `geom_text` function controls vertical justification and nudges the text above the bars. See [Annotations](Customizing.html#Annotations) (Section [11\.7](Customizing.html#Annotations)) for more details.
One limitation of such plots is that they do not display the distribution of the data \- only the summary statistic for each group. The plots below correct this limitation to some extent.
### 5\.3\.2 Grouped kernel density plots
One can compare groups on a numeric variable by superimposing [kernel density](Univariate.html#Kernel) plots (Section [4\.2\.2](Univariate.html#Kernel)) in a single graph.
```
# plot the distribution of salaries
# by rank using kernel density plots
ggplot(Salaries, aes(x = salary, fill = rank)) +
geom_density(alpha = 0.4) +
labs(title = "Salary distribution by rank")
```
Figure 5\.17: Grouped kernel density plots
The `alpha` option makes the density plots partially transparent, so that we can see what is happening under the overlaps. Alpha values range from 0 (transparent) to 1 (opaque). The graph makes clear that, in general, salary goes up with rank. However, the salary range for full professors is *very* wide.
### 5\.3\.3 Box plots
A boxplot displays the 25th percentile, median, and 75th percentile of a distribution. The whiskers (vertical lines) capture roughly 99% of a normal distribution, and observations outside this range are plotted as points representing outliers (see the figure below).
Side\-by\-side box plots are very useful for comparing groups (i.e., the levels of a categorical variable) on a numerical variable.
```
# plot the distribution of salaries by rank using boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_boxplot() +
labs(title = "Salary distribution by rank")
```
Figure 5\.18: Side\-by\-side boxplots
Notched boxplots provide an approximate method for visualizing whether groups differ. Although not a formal test, if the notches of two boxplots do not overlap, there is strong evidence (95% confidence) that the medians of the two groups differ ([McGill, Tukey, and Larsen 1978](#ref-RN6)).
```
# plot the distribution of salaries by rank using boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_boxplot(notch = TRUE,
fill = "cornflowerblue",
alpha = .7) +
labs(title = "Salary distribution by rank")
```
Figure 5\.19: Side\-by\-side notched boxplots
In the example above, all three groups appear to differ.
One of the advantages of boxplots is that the width is usually not meaningful. This allows you to compare the distribution of many groups in a single graph.
### 5\.3\.4 Violin plots
Violin plots are similar to [kernel density](Univariate.html#Kernel) plots, but are mirrored and rotated 90 degrees.
```
# plot the distribution of salaries
# by rank using violin plots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_violin() +
labs(title = "Salary distribution by rank")
```
Figure 5\.20: Side\-by\-side violin plots
A violin plot captures more of a distribution’s shape than a boxplot, but does not indicate the median or the middle 50% of the data. A useful variation is to superimpose boxplots on violin plots.
```
# plot the distribution using violin and boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_violin(fill = "cornflowerblue") +
geom_boxplot(width = .15,
fill = "orange",
outlier.color = "orange",
outlier.size = 2) +
labs(title = "Salary distribution by rank")
```
Figure 5\.21: Side\-by\-side violin/box plots
Be sure to set the `width` parameter in the `geom_boxplot` in order to assure the boxplots fit within the violin plots. You may need to play around with this in order to find a value that works well. Since geoms are layered, it is also important for the `geom_boxplot` function to appear after the `geom_violin` function. Otherwise the boxplots will be hidden beneath the violin plots.
### 5\.3\.5 Ridgeline plots
A ridgeline plot (also called a joyplot) displays the distribution of a quantitative variable for several groups. They’re similar to [kernel density](Univariate.html#Kernel) plots with vertical [faceting](Multivariate.html#Faceting), but take up less room. Ridgeline plots are created with the **ggridges** package.
Using the [mpg](Datasets.html#MPG) dataset, let’s plot the distribution of city driving miles per gallon by car class.
```
# create ridgeline graph
library(ggplot2)
library(ggridges)
ggplot(mpg,
aes(x = cty, y = class, fill = class)) +
geom_density_ridges() +
theme_ridges() +
labs("Highway mileage by auto class") +
theme(legend.position = "none")
```
Figure 5\.22: Ridgeline graph with color fill
I’ve suppressed the legend here because it’s redundant (the distributions are already labeled on the *y*\-axis). Unsurprisingly, pickup trucks have the poorest mileage, while subcompacts and compact cars tend to achieve the best ratings. However, there is a very wide range of gas mileage scores for these smaller cars.
Note that the possible overlap of distributions is the trade\-off for a more compact graph. You can add transparency if the overlap is severe using `geom_density_ridges(alpha = n)`, where *n* ranges from 0 (transparent) to 1 (opaque). See the package vignette ([https://cran.r\-project.org/web/packages/ggridges/vignettes/introduction.html](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html)) for more details.
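For example, a minimal sketch of the same ridgeline graph with partially transparent densities:
```
# ridgeline graph with semi-transparent densities
ggplot(mpg,
       aes(x = cty, y = class, fill = class)) +
  geom_density_ridges(alpha = 0.6) +
  theme_ridges() +
  theme(legend.position = "none")
```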
### 5\.3\.6 Mean/SEM plots
A popular method for comparing groups on a numeric variable is a mean plot with error bars. Error bars can represent standard deviations, standard errors of the means, or confidence intervals. In this section, we’ll calculate all three, but only plot means and standard errors to save space.
```
# calculate means, standard deviations,
# standard errors, and 95% confidence
# intervals by rank
library(dplyr)
plotdata <- Salaries %>%
group_by(rank) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd / sqrt(n),
ci = qt(0.975, df = n - 1) * sd / sqrt(n))
```
The resulting dataset is given below.
Table 5\.1: Plot data
| rank | n | mean | sd | se | ci |
| --- | --- | --- | --- | --- | --- |
| AsstProf | 67 | 80775\.99 | 8174\.113 | 998\.6268 | 1993\.823 |
| AssocProf | 64 | 93876\.44 | 13831\.700 | 1728\.9625 | 3455\.056 |
| Prof | 266 | 126772\.11 | 27718\.675 | 1699\.5410 | 3346\.322 |
```
# plot the means and standard errors
ggplot(plotdata,
aes(x = rank,
y = mean,
group = 1)) +
geom_point(size = 3) +
geom_line() +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1)
```
Figure 5\.23: Mean plots with standard error bars
Although we plotted error bars representing the standard error, we could have plotted standard deviations or 95% confidence intervals. Simply replace `se` with `sd` or `ci` in the `aes` option.
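For example, a minimal sketch plotting 95% confidence intervals using the `ci` column computed above:
```
# plot the means with 95% confidence interval error bars
ggplot(plotdata,
       aes(x = rank,
           y = mean,
           group = 1)) +
  geom_point(size = 3) +
  geom_line() +
  geom_errorbar(aes(ymin = mean - ci,
                    ymax = mean + ci),
                width = .1)
```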
We can use the same technique to compare salary across rank and sex. (Technically, this is not bivariate since we’re plotting rank, sex, and salary, but it seems to fit here.)
```
# calculate means and standard errors by rank and sex
plotdata <- Salaries %>%
group_by(rank, sex) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd/sqrt(n))
# plot the means and standard errors by sex
ggplot(plotdata, aes(x = rank,
y = mean,
group=sex,
color=sex)) +
geom_point(size = 3) +
geom_line(size = 1) +
geom_errorbar(aes(ymin =mean - se,
ymax = mean+se),
width = .1)
```
Figure 5\.24: Mean plots with standard error bars by sex
Unfortunately, the error bars overlap. We can dodge the horizontal positions a bit to overcome this.
```
# plot the means and standard errors by sex (dodged)
pd <- position_dodge(0.2)
ggplot(plotdata,
aes(x = rank,
y = mean,
group=sex,
color=sex)) +
geom_point(position = pd,
size = 3) +
geom_line(position = pd,
size = 1) +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1,
position= pd)
```
Figure 5\.25: Mean plots with standard error bars (dodged)
Finally, let’s add some options to make the graph more attractive.
```
# improved means/standard error plot
pd <- position_dodge(0.2)
ggplot(plotdata,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = mean, group=sex, color=sex)) +
geom_point(position=pd,
size=3) +
geom_line(position=pd,
size = 1) +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1,
position=pd,
size=1) +
scale_y_continuous(label = scales::dollar) +
scale_color_brewer(palette="Set1") +
theme_minimal() +
labs(title = "Mean salary by rank and sex",
subtitle = "(mean +/- standard error)",
x = "",
y = "",
color = "Gender")
```
Figure 5\.26: Mean/se plot with better labels and colors
This is a graph you could publish in a journal.
### 5\.3\.7 Strip plots
The relationship between a grouping variable and a numeric variable can also be displayed with a scatter plot. For example:
```
# plot the distribution of salaries
# by rank using strip plots
ggplot(Salaries, aes(y = rank, x = salary)) +
geom_point() +
labs(title = "Salary distribution by rank")
```
Figure 5\.27: Categorical by quantitative scatterplot
These one\-dimensional scatterplots are called strip plots. Unfortunately, overprinting of points makes interpretation difficult. The relationship is easier to see if the points are jittered. Basically, a small random number is added to each y\-coordinate. To jitter the points, replace `geom_point` with `geom_jitter`.
```
# plot the distribution of salaries
# by rank using jittering
ggplot(Salaries, aes(y = rank, x = salary)) +
geom_jitter() +
labs(title = "Salary distribution by rank")
```
Figure 5\.28: Jittered plot
It is easier to compare groups if we use color.
```
# plot the distribution of salaries
# by rank using jittering
library(scales)
ggplot(Salaries,
aes(y = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
x = salary, color = rank)) +
geom_jitter(alpha = 0.7) +
scale_x_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 5\.29: Fancy jittered plot
The option `legend.position = "none"` is used to suppress the legend (which is not needed here). Jittered plots work well when the number of points is not overly large. Here, we can not only compare groups, but also see the salaries of each individual faculty member. As a college professor myself, I want to know who is making more than $200,000 on a nine\-month contract!
Finally, we can superimpose boxplots on the jitter plots.
```
# plot the distribution of salaries
# by rank using jittering
library(scales)
ggplot(Salaries,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = salary, color = rank)) +
geom_boxplot(size=1,
outlier.shape = 1,
outlier.color = "black",
outlier.size = 3) +
geom_jitter(alpha = 0.5,
width=.2) +
scale_y_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none") +
coord_flip()
```
Figure 5\.30: Jitter plot with superimposed box plots
Several options were added to create this plot.
For the boxplot
* `size = 1` makes the lines thicker
* `outlier.color = "black"` makes outliers black
* `outlier.shape = 1` specifies circles for outliers
* `outlier.size = 3` increases the size of the outlier symbol
For the jitter
* `alpha = 0.5` makes the points more transparent
* `width = .2` decreases the amount of jitter (.4 is the default)
Finally, the *x* and *y* axes are revered using the `coord_flip` function (i.e., the graph is turned on its side).
Before moving on, it is worth mentioning the [`geom_boxjitter`](https://www.rdocumentation.org/packages/ggpol/versions/0.0.1/topics/geom_boxjitter) function provided in the [**ggpol**](https://erocoar.github.io/ggpol/) package. It creates a hybrid boxplot \- half boxplot, half scaterplot.
```
# plot the distribution of salaries
# by rank using jittering
library(ggpol)
library(scales)
ggplot(Salaries,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = salary,
fill=rank)) +
geom_boxjitter(color="black",
jitter.color = "darkgrey",
errorbar.draw = TRUE) +
scale_y_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 5\.31: Using geom\_boxjitter
Choose the approach that you find most useful.
### 5\.3\.8 Cleveland Dot Charts
Cleveland plots are useful when you want to compare each observation on a numeric variable, or compare a large number of groups on a numeric summary statistic. For example, say that you want to compare the 2007 life expectancy for Asian country using the [gapminder](Datasets.html#Gapminder) dataset.
```
data(gapminder, package="gapminder")
# subset Asian countries in 2007
library(dplyr)
plotdata <- gapminder %>%
filter(continent == "Asia" &
year == 2007)
# basic Cleveland plot of life expectancy by country
ggplot(plotdata,
aes(x= lifeExp, y = country)) +
geom_point()
```
Figure 5\.32: Basic Cleveland dot plot
Comparisons are usually easier if the *y*\-axis is sorted.
```
# Sorted Cleveland plot
ggplot(plotdata, aes(x=lifeExp,
y=reorder(country, lifeExp))) +
geom_point()
```
Figure 5\.33: Sorted Cleveland dot plot
The difference in life expectancy between countries like Japan and Afghanistan is striking.
Finally, we can use options to make the graph more attractive by removing unnecessary elements, like the grey background panel and horizontal reference lines, and adding a line segment connecting each point to the y axis.
```
# Fancy Cleveland plot
ggplot(plotdata, aes(x=lifeExp,
y=reorder(country, lifeExp))) +
geom_point(color="blue", size = 2) +
geom_segment(aes(x = 40,
xend = lifeExp,
y = reorder(country, lifeExp),
yend = reorder(country, lifeExp)),
color = "lightgrey") +
labs (x = "Life Expectancy (years)",
y = "",
title = "Life Expectancy by Country",
subtitle = "GapMinder data for Asia - 2007") +
theme_minimal() +
theme(panel.grid.major = element_blank(),
panel.grid.minor = element_blank())
```
Figure 5\.34: Fancy Cleveland plot
This last plot is also called a lollipop graph (I wonder why?).
5\.1 Categorical vs. Categorical
--------------------------------
When plotting the relationship between two categorical variables, stacked, grouped, or segmented bar charts are typically used. A less common approach is the [mosaic](Models.html#Mosaic) chart (section [9\.5](Models.html#Mosaic)).
In this section, we will look at automobile characteristics contained in the [mpg](Datasets.html#MPG) dataset that comes with the **ggplot2** package. It provides fuel efficiency data for 38 popular car models in 1999 and 2008 (see Appendix [A.6](Datasets.html#MPG)).
### 5\.1\.1 Stacked bar chart
Let’s examine the relationship between automobile class and drive type (front\-wheel, rear\-wheel, or 4\-wheel drive) for the automobiles in the [mpg](Datasets.html#MPG) dataset.
```
library(ggplot2)
# stacked bar chart
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = "stack")
```
Figure 5\.1: Stacked bar chart
From Figure [5\.1](Bivariate.html#fig:stackedbar), we can see, for example, that the most common vehicle is the SUV. All 2seater cars are rear\-wheel drive, while most, but not all, SUVs are 4\-wheel drive.
Stacked is the default, so the last line could have also been written as `geom_bar()`.
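Since stacking is the default, the shorter call below should produce the same chart; this is just the previous example with the `position` argument dropped.
```
# equivalent stacked bar chart, relying on the default position
library(ggplot2)
ggplot(mpg, aes(x = class, fill = drv)) +
  geom_bar()
```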
### 5\.1\.2 Grouped bar chart
Grouped bar charts place bars for the second categorical variable side\-by\-side. To create a grouped bar plot use the `position = "dodge"` option.
```
library(ggplot2)
# grouped bar plot
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = "dodge")
```
Figure 5\.2: Side\-by\-side bar chart
Notice that all minivans are front\-wheel drive. By default, zero count bars are dropped and the remaining bars are made wider. This may not be the behavior you want. You can modify this using the `position = position_dodge(preserve = "single")` option.
```
library(ggplot2)
# grouped bar plot preserving zero count bars
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = position_dodge(preserve = "single"))
```
Figure 5\.3: Side\-by\-side bar chart with zero count bars retained
Note that this option is only available in the later versions of `ggplot2`.
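If you are unsure whether your installation is recent enough, a quick check of the installed version is shown below (a small sketch; the `preserve` argument was introduced around ggplot2 3.0.0, if I recall correctly).
```
# check the installed ggplot2 version before relying on preserve = "single"
if (utils::packageVersion("ggplot2") < "3.0.0") {
  message("Update ggplot2 to use position_dodge(preserve = 'single')")
}
```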
### 5\.1\.3 Segmented bar chart
A segmented bar plot is a stacked bar plot where each bar represents 100 percent. You can create a segmented bar chart using the `position = "fill"` option.
```
library(ggplot2)
# bar plot, with each bar representing 100%
ggplot(mpg, aes(x = class, fill = drv)) +
geom_bar(position = "fill") +
labs(y = "Proportion")
```
Figure 5\.4: Segmented bar chart
This type of plot is particularly useful if the goal is to compare the percentage of a category in one variable across each level of another variable. For example, the proportion of front\-wheel drive cars goes up as you move from compact, to midsize, to minivan.
### 5\.1\.4 Improving the color and labeling
You can use additional options to improve color and labeling. In the graph below
* `factor` modifies the order of the categories for the class variable and both the order and the labels for the drive variable
* `scale_y_continuous` modifies the y\-axis tick mark labels
* `labs` provides a title and changes the labels for the x and y axes and the legend
* `scale_fill_brewer` changes the fill color scheme
* `theme_minimal` removes the grey background and changes the grid color
```
library(ggplot2)
# bar plot, with each bar representing 100%,
# reordered bars, and better labels and colors
library(scales)
ggplot(mpg,
aes(x = factor(class,
levels = c("2seater", "subcompact",
"compact", "midsize",
"minivan", "suv", "pickup")),
fill = factor(drv,
levels = c("f", "r", "4"),
labels = c("front-wheel",
"rear-wheel",
"4-wheel")))) +
geom_bar(position = "fill") +
scale_y_continuous(breaks = seq(0, 1, .2),
label = percent) +
scale_fill_brewer(palette = "Set2") +
labs(y = "Percent",
fill="Drive Train",
x = "Class",
title = "Automobile Drive by Class") +
theme_minimal()
```
Figure 5\.5: Segmented bar chart with improved labeling and color
Each of these functions is discussed more fully in the section on [Customizing](Customizing.html#Customizing) graphs (see Section [11](Customizing.html#Customizing)).
Next, let’s add percent labels to each segment. First, we’ll create a summary dataset that has the necessary labels.
```
# create a summary dataset
library(dplyr)
plotdata <- mpg %>%
group_by(class, drv) %>%
summarize(n = n()) %>%
mutate(pct = n/sum(n),
lbl = scales::percent(pct))
plotdata
```
```
## # A tibble: 12 × 5
## # Groups: class [7]
## class drv n pct lbl
## <chr> <chr> <int> <dbl> <chr>
## 1 2seater r 5 1 100%
## 2 compact 4 12 0.255 26%
## 3 compact f 35 0.745 74%
## 4 midsize 4 3 0.0732 7%
## 5 midsize f 38 0.927 93%
## 6 minivan f 11 1 100%
## 7 pickup 4 33 1 100%
## 8 subcompact 4 4 0.114 11%
## 9 subcompact f 22 0.629 63%
## 10 subcompact r 9 0.257 26%
## 11 suv 4 51 0.823 82%
## 12 suv r 11 0.177 18%
```
Next, we’ll use this dataset and the `geom_text` function to add labels to each bar segment.
```
# create segmented bar chart
# adding labels to each segment
ggplot(plotdata,
aes(x = factor(class,
levels = c("2seater", "subcompact",
"compact", "midsize",
"minivan", "suv", "pickup")),
y = pct,
fill = factor(drv,
levels = c("f", "r", "4"),
labels = c("front-wheel",
"rear-wheel",
"4-wheel")))) +
geom_bar(stat = "identity",
position = "fill") +
scale_y_continuous(breaks = seq(0, 1, .2),
label = percent) +
geom_text(aes(label = lbl),
size = 3,
position = position_stack(vjust = 0.5)) +
scale_fill_brewer(palette = "Set2") +
labs(y = "Percent",
fill="Drive Train",
x = "Class",
title = "Automobile Drive by Class") +
theme_minimal()
```
Figure 5\.6: Segmented bar chart with value labeling
Now we have a graph that is easy to read and interpret.
### 5\.1\.5 Other plots
[Mosaic plots](Models.html#Mosaic) provide an alternative to stacked bar charts for displaying the relationship between categorical variables. They can also provide more sophisticated statistical information. See Section [9\.5](Models.html#Mosaic) for details.
5\.2 Quantitative vs. Quantitative
----------------------------------
The relationship between two quantitative variables is typically displayed using scatterplots and line graphs.
### 5\.2\.1 Scatterplot
The simplest display of two quantitative variables is a scatterplot, with each variable represented on an axis. Here, we will use the [Salaries](Datasets.html#Salaries) dataset described in Appendix [A.1](Datasets.html#Salaries). First, let’s plot experience (*yrs.since.phd*) vs. academic salary (*salary*) for college professors.
```
library(ggplot2)
data(Salaries, package="carData")
# simple scatterplot
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point()
```
Figure 5\.7: Simple scatterplot
As expected, salary tends to rise with experience, but the relationship may not be strictly linear. Note that salary appears to fall off after about 40 years of experience.
The `geom_point` function has options that can be used to change the following:
* `color` \- point color
* `size` \- point size
* `shape` \- point shape
* `alpha` \- point transparency. Transparency ranges from 0 (transparent) to 1 (opaque), and is a useful parameter when points overlap.
The functions `scale_x_continuous` and `scale_y_continuous` control the scaling on *x* and *y* axes respectively.
We can use these options and functions to create a more attractive scatterplot.
```
# enhanced scatter plot
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point(color="cornflowerblue",
size = 2,
alpha=.8) +
scale_y_continuous(label = scales::dollar,
limits = c(50000, 250000)) +
scale_x_continuous(breaks = seq(0, 60, 10),
limits=c(0, 60)) +
labs(x = "Years Since PhD",
y = "",
title = "Experience vs. Salary",
subtitle = "9-month salary for 2008-2009")
```
Figure 5\.8: Scatterplot with color, transparency, and axis scaling
Again, see [Customizing](Customizing.html#Customizing) graphs (Section [11](Customizing.html#Customizing)) for more details.
#### 5\.2\.1\.1 Adding best fit lines
It is often useful to summarize the relationship displayed in the scatterplot, using a best fit line. Many types of lines are supported, including linear, polynomial, and nonparametric (loess). By default, 95% confidence limits for these lines are displayed.
```
# scatterplot with linear fit line
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
geom_point(color= "steelblue") +
geom_smooth(method = "lm")
```
Figure 5\.9: Scatterplot with linear fit line
Clearly, salary increases with experience. However, there seems to be a dip at the right end, with highly experienced professors earning lower salaries. A straight line does not capture this non\-linear effect. A line with a bend will fit better here.
A polynomial regression line provides a fit line of the form \[\hat{y} = \beta_{0} + \beta_{1}x + \beta_{2}x^{2} + \beta_{3}x^{3} + \beta_{4}x^{4} + \dots\]
Typically, either a quadratic (one bend) or cubic (two bends) line is used. It is rarely necessary to use a higher\-order (degree \> 3) polynomial. Adding a quadratic fit line to the salary dataset produces the following result.
```
# scatterplot with quadratic line of best fit
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
geom_point(color= "steelblue") +
geom_smooth(method = "lm",
formula = y ~ poly(x, 2),
color = "indianred3")
```
Figure 5\.10: Scatterplot with quadratic fit line
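If two bends are needed, the cubic version is a one\-line change from the quadratic code above (same data, just a higher\-degree polynomial in the formula).
```
# scatterplot with cubic line of best fit
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
  geom_point(color = "steelblue") +
  geom_smooth(method = "lm",
              formula = y ~ poly(x, 3),
              color = "indianred3")
```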
Finally, a smoothed nonparametric fit line can often provide a good picture of the relationship. The default in `ggplot2` is a [loess](https://www.ime.unicamp.br/~dias/loess.pdf) line, which stands for **lo**cally w**e**ighted **s**catterplot **s**moothing ([Cleveland 1979](#ref-RN5)).
```
# scatterplot with loess smoothed line
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
geom_point(color= "steelblue") +
geom_smooth(color = "tomato")
```
Figure 5\.11: Scatterplot with nonparametric fit line
You can suppress the confidence bands by including the option `se = FALSE`.
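For example, the sketch below draws the loess line without its confidence band.
```
# loess fit line with the confidence band suppressed
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
  geom_point(color = "steelblue") +
  geom_smooth(se = FALSE, color = "tomato")
```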
Here is a complete (and more attractive) plot.
```
# scatterplot with loess smoothed line
# and better labeling and color
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point(color="cornflowerblue",
size = 2,
alpha=.6) +
geom_smooth(size = 1.5,
color = "darkgrey") +
scale_y_continuous(label = scales::dollar,
limits=c(50000, 250000)) +
scale_x_continuous(breaks = seq(0, 60, 10),
limits=c(0, 60)) +
labs(x = "Years Since PhD",
y = "",
title = "Experience vs. Salary",
subtitle = "9-month salary for 2008-2009") +
theme_minimal()
```
Figure 5\.12: Scatterplot with nonparametric fit line
### 5\.2\.2 Line plot
When one of the two variables represents time, a line plot can be an effective method of displaying their relationship. For example, the code below displays the relationship between time (*year*) and life expectancy (*lifeExp*) in the United States between 1952 and 2007\. The data comes from the [gapminder](Datasets.html#Gapminder) dataset (Appendix [A.8](Datasets.html#Gapminder)).
```
data(gapminder, package="gapminder")
# Select US cases
library(dplyr)
plotdata <- filter(gapminder, country == "United States")
# simple line plot
ggplot(plotdata, aes(x = year, y = lifeExp)) +
geom_line()
```
Figure 5\.13: Simple line plot
It is hard to read individual values in the graph above. In the next plot, we’ll add points as well.
```
# line plot with points
# and improved labeling
ggplot(plotdata, aes(x = year, y = lifeExp)) +
geom_line(size = 1.5,
color = "lightgrey") +
geom_point(size = 3,
color = "steelblue") +
labs(y = "Life Expectancy (years)",
x = "Year",
title = "Life expectancy changes over time",
subtitle = "United States (1952-2007)",
caption = "Source: http://www.gapminder.org/data/")
```
Figure 5\.14: Line plot with points and labels
Time dependent data is covered in more detail under [Time series](Time.html#Time) (Section [8](Time.html#Time)). Customizing line graphs is covered in the [Customizing](Customizing.html#Customizing) graphs (Section [11](Customizing.html#Customizing)).
5\.3 Categorical vs. Quantitative
---------------------------------
When plotting the relationship between a categorical variable and a quantitative variable, a large number of graph types are available. These include bar charts using summary statistics, grouped kernel density plots, side\-by\-side box plots, side\-by\-side violin plots, ridgeline plots, mean/SEM plots, strip plots, and Cleveland dot plots. Each is considered in turn.
### 5\.3\.1 Bar chart (on summary statistics)
In previous sections, bar charts were used to display the number of cases by category for a [single variable](Univariate.html#Barchart) (Section [4\.1\.1](Univariate.html#Barchart)) or for [two variables](Bivariate.html#Categorical-Categorical) (Section [5\.1](Bivariate.html#Categorical-Categorical)). You can also use bar charts to display other summary statistics (e.g., means or medians) on a quantitative variable for each level of a categorical variable.
For example, the following graph displays the mean salary for a sample of university professors by their academic rank.
```
data(Salaries, package="carData")
# calculate mean salary for each rank
library(dplyr)
plotdata <- Salaries %>%
group_by(rank) %>%
summarize(mean_salary = mean(salary))
# plot mean salaries
ggplot(plotdata, aes(x = rank, y = mean_salary)) +
geom_bar(stat = "identity")
```
Figure 5\.15: Bar chart displaying means
We can make it more attractive with some options. In particular, the `factor` function modifies the labels for each rank, the `scale_y_continuous` function improves the y\-axis labeling, and the `geom_text` function adds the mean values to each bar.
```
# plot mean salaries in a more attractive fashion
library(scales)
ggplot(plotdata,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = mean_salary)) +
geom_bar(stat = "identity",
fill = "cornflowerblue") +
geom_text(aes(label = dollar(mean_salary)),
vjust = -0.25) +
scale_y_continuous(breaks = seq(0, 130000, 20000),
label = dollar) +
labs(title = "Mean Salary by Rank",
subtitle = "9-month academic salary for 2008-2009",
x = "",
y = "")
```
Figure 5\.16: Bar chart displaying means
The `vjust` parameter in the `geom_text` function controls vertical justification and nudges the text above the bars. See [Annotations](Customizing.html#Annotations) (Section [11\.7](Customizing.html#Annotations)) for more details.
One limitation of such plots is that they do not display the distribution of the data \- only the summary statistic for each group. The plots below correct this limitation to some extent.
### 5\.3\.2 Grouped kernel density plots
One can compare groups on a numeric variable by superimposing [kernel density](Univariate.html#Kernel) plots (Section [4\.2\.2](Univariate.html#Kernel)) in a single graph.
```
# plot the distribution of salaries
# by rank using kernel density plots
ggplot(Salaries, aes(x = salary, fill = rank)) +
geom_density(alpha = 0.4) +
labs(title = "Salary distribution by rank")
```
Figure 5\.17: Grouped kernel density plots
The `alpha` option makes the density plots partially transparent, so that we can see what is happening under the overlaps. Alpha values range from 0 (transparent) to 1 (opaque). The graph makes clear that, in general, salary goes up with rank. However, the salary range for full professors is *very* wide.
### 5\.3\.3 Box plots
A boxplot displays the 25th percentile, median, and 75th percentile of a distribution. The whiskers (vertical lines) capture roughly 99% of a normal distribution, and observations outside this range are plotted as points representing outliers (see the figure below). Side\-by\-side boxplots are useful for comparing groups (i.e., the levels of a categorical variable) on a numerical variable.
```
# plot the distribution of salaries by rank using boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_boxplot() +
labs(title = "Salary distribution by rank")
```
Figure 5\.18: Side\-by\-side boxplots
Notched boxplots provide an approximate method for visualizing whether groups differ. Although not a formal test, if the notches of two boxplots do not overlap, there is strong evidence (95% confidence) that the medians of the two groups differ ([McGill, Tukey, and Larsen 1978](#ref-RN6)).
```
# plot the distribution of salaries by rank using boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_boxplot(notch = TRUE,
fill = "cornflowerblue",
alpha = .7) +
labs(title = "Salary distribution by rank")
```
Figure 5\.19: Side\-by\-side notched boxplots
In the example above, all three groups appear to differ.
One of the advantages of boxplots is that the width is usually not meaningful. This allows you to compare the distribution of many groups in a single graph.
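If you do want the widths to carry information, `geom_boxplot` has a `varwidth` option that makes each box’s width proportional to the square root of its group size. A minimal sketch:
```
# boxplots with widths reflecting group sample sizes
ggplot(Salaries, aes(x = rank, y = salary)) +
  geom_boxplot(varwidth = TRUE) +
  labs(title = "Salary distribution by rank")
```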
### 5\.3\.4 Violin plots
Violin plots are similar to [kernel density](Univariate.html#Kernel) plots, but are mirrored and rotated 90°.
```
# plot the distribution of salaries
# by rank using violin plots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_violin() +
labs(title = "Salary distribution by rank")
```
Figure 5\.20: Side\-by\-side violin plots
A violin plot captures more of a distribution’s shape than a boxplot, but does not indicate the median or the middle 50% of the data. A useful variation is to superimpose boxplots on violin plots.
```
# plot the distribution using violin and boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_violin(fill = "cornflowerblue") +
geom_boxplot(width = .15,
fill = "orange",
outlier.color = "orange",
outlier.size = 2) +
labs(title = "Salary distribution by rank")
```
Figure 5\.21: Side\-by\-side violin/box plots
Be sure to set the `width` parameter in the `geom_boxplot` in order to ensure the boxplots fit within the violin plots. You may need to play around with this in order to find a value that works well. Since geoms are layered, it is also important for the `geom_boxplot` function to appear after the `geom_violin` function. Otherwise the boxplots will be hidden beneath the violin plots.
### 5\.3\.5 Ridgeline plots
A ridgeline plot (also called a joyplot) displays the distribution of a quantitative variable for several groups. They’re similar to [kernel density](Univariate.html#Kernel) plots with vertical [faceting](Multivariate.html#Faceting), but take up less room. Ridgeline plots are created with the **ggridges** package.
Using the [mpg](Datasets.html#MPG) dataset, let’s plot the distribution of city driving miles per gallon by car class.
```
# create ridgeline graph
library(ggplot2)
library(ggridges)
ggplot(mpg,
aes(x = cty, y = class, fill = class)) +
geom_density_ridges() +
theme_ridges() +
labs("Highway mileage by auto class") +
theme(legend.position = "none")
```
Figure 5\.22: Ridgeline graph with color fill
I’ve suppressed the legend here because it’s redundant (the distributions are already labeled on the *y*\-axis). Unsurprisingly, pickup trucks have the poorest mileage, while subcompacts and compact cars tend to achieve the best ratings. However, there is a very wide range of gas mileage scores for these smaller cars.
Note that the possible overlap of distributions is the trade\-off for a more compact graph. You can add transparency if the overlap is severe using `geom_density_ridges(alpha = n)`, where *n* ranges from 0 (transparent) to 1 (opaque). See the package vignette ([https://cran.r\-project.org/web/packages/ggridges/vignettes/introduction.html](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html)) for more details.
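As a concrete example, the earlier ridgeline code with transparency added might look like the sketch below; adjust the alpha value to taste.
```
# ridgeline graph with semi-transparent densities
library(ggridges)
ggplot(mpg, aes(x = cty, y = class, fill = class)) +
  geom_density_ridges(alpha = 0.7) +
  theme_ridges() +
  theme(legend.position = "none")
```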
### 5\.3\.6 Mean/SEM plots
A popular method for comparing groups on a numeric variable is a mean plot with error bars. Error bars can represent standard deviations, standard errors of the means, or confidence intervals. In this section, we’ll calculate all three, but only plot means and standard errors to save space.
```
# calculate means, standard deviations,
# standard errors, and 95% confidence
# intervals by rank
library(dplyr)
plotdata <- Salaries %>%
group_by(rank) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd / sqrt(n),
ci = qt(0.975, df = n - 1) * sd / sqrt(n))
```
The resulting dataset is given below.
Table 5\.1: Plot data
| rank | n | mean | sd | se | ci |
| --- | --- | --- | --- | --- | --- |
| AsstProf | 67 | 80775\.99 | 8174\.113 | 998\.6268 | 1993\.823 |
| AssocProf | 64 | 93876\.44 | 13831\.700 | 1728\.9625 | 3455\.056 |
| Prof | 266 | 126772\.11 | 27718\.675 | 1699\.5410 | 3346\.322 |
```
# plot the means and standard errors
ggplot(plotdata,
aes(x = rank,
y = mean,
group = 1)) +
geom_point(size = 3) +
geom_line() +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1)
```
Figure 5\.23: Mean plots with standard error bars
Although we plotted error bars representing the standard error, we could have plotted standard deviations or 95% confidence intervals. Simply replace `se` with `sd` or `ci` in the `aes` option.
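For instance, swapping in the `ci` column computed above gives 95% confidence interval bars (a minimal sketch).
```
# plot the means with 95% confidence interval bars
ggplot(plotdata,
       aes(x = rank, y = mean, group = 1)) +
  geom_point(size = 3) +
  geom_line() +
  geom_errorbar(aes(ymin = mean - ci,
                    ymax = mean + ci),
                width = .1)
```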
We can use the same technique to compare salary across rank and sex. (Technically, this is not bivariate since we’re plotting rank, sex, and salary, but it seems to fit here.)
```
# calculate means and standard errors by rank and sex
plotdata <- Salaries %>%
group_by(rank, sex) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd/sqrt(n))
# plot the means and standard errors by sex
ggplot(plotdata, aes(x = rank,
y = mean,
group=sex,
color=sex)) +
geom_point(size = 3) +
geom_line(size = 1) +
geom_errorbar(aes(ymin =mean - se,
ymax = mean+se),
width = .1)
```
Figure 5\.24: Mean plots with standard error bars by sex
Unfortunately, the error bars overlap. We can dodge the horizontal positions a bit to overcome this.
```
# plot the means and standard errors by sex (dodged)
pd <- position_dodge(0.2)
ggplot(plotdata,
aes(x = rank,
y = mean,
group=sex,
color=sex)) +
geom_point(position = pd,
size = 3) +
geom_line(position = pd,
size = 1) +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1,
position= pd)
```
Figure 5\.25: Mean plots with standard error bars (dodged)
Finally, let’s add some options to make the graph more attractive.
```
# improved means/standard error plot
pd <- position_dodge(0.2)
ggplot(plotdata,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = mean, group=sex, color=sex)) +
geom_point(position=pd,
size=3) +
geom_line(position=pd,
size = 1) +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1,
position=pd,
size=1) +
scale_y_continuous(label = scales::dollar) +
scale_color_brewer(palette="Set1") +
theme_minimal() +
labs(title = "Mean salary by rank and sex",
subtitle = "(mean +/- standard error)",
x = "",
y = "",
color = "Gender")
```
Figure 5\.26: Mean/se plot with better labels and colors
This is a graph you could publish in a journal.
### 5\.3\.7 Strip plots
The relationship between a grouping variable and a numeric variable can also be displayed with a scatterplot. For example:
```
# plot the distribution of salaries
# by rank using strip plots
ggplot(Salaries, aes(y = rank, x = salary)) +
geom_point() +
labs(title = "Salary distribution by rank")
```
Figure 5\.27: Categorical by quantitative scatterplot
These one\-dimensional scatterplots are called strip plots. Unfortunately, overprinting of points makes interpretation difficult. The relationship is easier to see if the points are jittered. Basically, a small random number is added to each y\-coordinate. To jitter the points, replace `geom_point` with `geom_jitter`.
```
# plot the distribution of salaries
# by rank using jittering
ggplot(Salaries, aes(y = rank, x = salary)) +
geom_jitter() +
labs(title = "Salary distribution by rank")
```
Figure 5\.28: Jittered plot
It is easier to compare groups if we use color.
```
# plot the distribution of salaries
# by rank using jittering
library(scales)
ggplot(Salaries,
aes(y = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
x = salary, color = rank)) +
geom_jitter(alpha = 0.7) +
scale_x_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 5\.29: Fancy jittered plot
The option `legend.position = "none"` is used to suppress the legend (which is not needed here). Jittered plots work well when the number of points is not overly large. Here, we can not only compare groups, but see the salaries of each individual faculty member. As a college professor myself, I want to know who is making more than $200,000 on a nine month contract!
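Out of curiosity, a quick dplyr sketch will list those cases (using only columns that exist in the Salaries data).
```
# list the faculty earning more than $200,000
library(dplyr)
Salaries %>%
  filter(salary > 200000) %>%
  arrange(desc(salary))
```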
Finally, we can superimpose boxplots on the jitter plots.
```
# plot the distribution of salaries
# by rank using jittering
library(scales)
ggplot(Salaries,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = salary, color = rank)) +
geom_boxplot(size=1,
outlier.shape = 1,
outlier.color = "black",
outlier.size = 3) +
geom_jitter(alpha = 0.5,
width=.2) +
scale_y_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none") +
coord_flip()
```
Figure 5\.30: Jitter plot with superimposed box plots
Several options were added to create this plot.
For the boxplot
* `size = 1` makes the lines thicker
* `outlier.color = "black"` makes outliers black
* `outlier.shape = 1` specifies circles for outliers
* `outlier.size = 3` increases the size of the outlier symbol
For the jitter
* `alpha = 0.5` makes the points more transparent
* `width = .2` decreases the amount of jitter (.4 is the default)
Finally, the *x* and *y* axes are reversed using the `coord_flip` function (i.e., the graph is turned on its side).
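As an aside, newer versions of ggplot2 (3.3.0 or later, if memory serves) can draw horizontal boxplots directly by mapping the categorical variable to *y*, so `coord_flip` becomes optional; a sketch of that approach:
```
# same idea without coord_flip: map rank to the y axis directly
ggplot(Salaries, aes(y = rank, x = salary, color = rank)) +
  geom_boxplot() +
  geom_jitter(alpha = 0.5, height = .2) +
  theme_minimal() +
  theme(legend.position = "none")
```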
Before moving on, it is worth mentioning the [`geom_boxjitter`](https://www.rdocumentation.org/packages/ggpol/versions/0.0.1/topics/geom_boxjitter) function provided in the [**ggpol**](https://erocoar.github.io/ggpol/) package. It creates a hybrid plot \- half boxplot, half scatterplot.
```
# plot the distribution of salaries
# by rank using jittering
library(ggpol)
library(scales)
ggplot(Salaries,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = salary,
fill=rank)) +
geom_boxjitter(color="black",
jitter.color = "darkgrey",
errorbar.draw = TRUE) +
scale_y_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 5\.31: Using geom\_boxjitter
Choose the approach that you find most useful.
### 5\.3\.8 Cleveland Dot Charts
Cleveland plots are useful when you want to compare each observation on a numeric variable, or compare a large number of groups on a numeric summary statistic. For example, say that you want to compare the 2007 life expectancy of Asian countries using the [gapminder](Datasets.html#Gapminder) dataset.
```
data(gapminder, package="gapminder")
# subset Asian countries in 2007
library(dplyr)
plotdata <- gapminder %>%
filter(continent == "Asia" &
year == 2007)
# basic Cleveland plot of life expectancy by country
ggplot(plotdata,
aes(x= lifeExp, y = country)) +
geom_point()
```
Figure 5\.32: Basic Cleveland dot plot
Comparisons are usually easier if the *y*\-axis is sorted.
```
# Sorted Cleveland plot
ggplot(plotdata, aes(x=lifeExp,
y=reorder(country, lifeExp))) +
geom_point()
```
Figure 5\.33: Sorted Cleveland dot plot
The difference in life expectancy between countries like Japan and Afghanistan is striking.
Finally, we can use options to make the graph more attractive by removing unnecessary elements, like the grey background panel and horizontal reference lines, and adding a line segment connecting each point to the y axis.
```
# Fancy Cleveland plot
ggplot(plotdata, aes(x=lifeExp,
y=reorder(country, lifeExp))) +
geom_point(color="blue", size = 2) +
geom_segment(aes(x = 40,
xend = lifeExp,
y = reorder(country, lifeExp),
yend = reorder(country, lifeExp)),
color = "lightgrey") +
labs (x = "Life Expectancy (years)",
y = "",
title = "Life Expectancy by Country",
subtitle = "GapMinder data for Asia - 2007") +
theme_minimal() +
theme(panel.grid.major = element_blank(),
panel.grid.minor = element_blank())
```
Figure 5\.34: Fancy Cleveland plot
This last plot is also called a lollipop graph (I wonder why?).
### 5\.3\.1 Bar chart (on summary statistics)
In previous sections, bar charts were used to display the number of cases by category for a [single variable](Univariate.html#Barchart) (Section [4\.1\.1](Univariate.html#Barchart)) or for [two variables](Bivariate.html#Categorical-Categorical) (Section [5\.1](Bivariate.html#Categorical-Categorical)). You can also use bar charts to display other summary statistics (e.g., means or medians) on a quantitative variable for each level of a categorical variable.
For example, the following graph displays the mean salary for a sample of university professors by their academic rank.
```
data(Salaries, package="carData")
# calculate mean salary for each rank
library(dplyr)
plotdata <- Salaries %>%
group_by(rank) %>%
summarize(mean_salary = mean(salary))
# plot mean salaries
ggplot(plotdata, aes(x = rank, y = mean_salary)) +
geom_bar(stat = "identity")
```
Figure 5\.15: Bar chart displaying means
We can make it more attractive with some options. In particular, the `factor` function modifies the labels for each rank, the `scale_y_continuous` function improves the y\-axis labeling, and the `geom_text` function adds the mean values to each bar.
```
# plot mean salaries in a more attractive fashion
library(scales)
ggplot(plotdata,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = mean_salary)) +
geom_bar(stat = "identity",
fill = "cornflowerblue") +
geom_text(aes(label = dollar(mean_salary)),
vjust = -0.25) +
scale_y_continuous(breaks = seq(0, 130000, 20000),
label = dollar) +
labs(title = "Mean Salary by Rank",
subtitle = "9-month academic salary for 2008-2009",
x = "",
y = "")
```
Figure 5\.16: Bar chart displaying means
The `vjust` parameter in the `geom_text` function controls vertical justification and nudges the text above the bars. See [Annotations](Customizing.html#Annotations) (Section [11\.7](Customizing.html#Annotations)) for more details.
One limitation of such plots is that they do not display the distribution of the data \- only the summary statistic for each group. The plots below correct this limitation to some extent.
### 5\.3\.2 Grouped kernel density plots
One can compare groups on a numeric variable by superimposing [kernel density](Univariate.html#Kernel) plots (Section [4\.2\.2](Univariate.html#Kernel)) in a single graph.
```
# plot the distribution of salaries
# by rank using kernel density plots
ggplot(Salaries, aes(x = salary, fill = rank)) +
geom_density(alpha = 0.4) +
labs(title = "Salary distribution by rank")
```
Figure 5\.17: Grouped kernel density plots
The `alpha` option makes the density plots partially transparent, so that we can see what is happening under the overlaps. Alpha values range from 0 (transparent) to 1 (opaque). The graph makes clear that, in general, salary goes up with rank. However, the salary range for full professors is *very* wide.
### 5\.3\.3 Box plots
A boxplot displays the 25th percentile, median, and 75th percentile of a distribution. The whiskers (vertical lines) capture roughly 99% of a normal distribution, and observations outside this range are plotted as points representing outliers (see the figure below).
on a numerical variable.
```
# plot the distribution of salaries by rank using boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_boxplot() +
labs(title = "Salary distribution by rank")
```
Figure 5\.18: Side\-by\-side boxplots
Notched boxplots provide an approximate method for visualizing whether groups differ. Although not a formal test, if the notches of two boxplots do not overlap, there is strong evidence (95% confidence) that the medians of the two groups differ ([McGill, Tukey, and Larsen 1978](#ref-RN6)).
```
# plot the distribution of salaries by rank using boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_boxplot(notch = TRUE,
fill = "cornflowerblue",
alpha = .7) +
labs(title = "Salary distribution by rank")
```
Figure 5\.19: Side\-by\-side notched boxplots
In the example above, all three groups appear to differ.
One of the advantages of boxplots is that the width is usually not meaningful. This allows you to compare the distribution of many groups in a single graph.
### 5\.3\.4 Violin plots
Violin plots are similar to [kernel density](Univariate.html#Kernel) plots, but are mirrored and rotated 90o.
```
# plot the distribution of salaries
# by rank using violin plots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_violin() +
labs(title = "Salary distribution by rank")
```
Figure 5\.20: Side\-by\-side violin plots
A violin plots capture more a a distribution’s shape than a boxplot, but does not indicate median or middle 50% of the data. A useful variation is to superimpose boxplots on violin plots.
```
# plot the distribution using violin and boxplots
ggplot(Salaries, aes(x = rank, y = salary)) +
geom_violin(fill = "cornflowerblue") +
geom_boxplot(width = .15,
fill = "orange",
outlier.color = "orange",
outlier.size = 2) +
labs(title = "Salary distribution by rank")
```
Figure 5\.21: Side\-by\-side violin/box plots
Be sure to set the `width` parameter in the `geom_boxplot` in order to assure the boxplots fit within the violin plots. You may need to play around with this in order to find a value that works well. Since geoms are layered, it is also important for the `geom_boxplot` function to appear after the `geom_violin` function. Otherwise the boxplots will be hidden beneath the violin plots.
### 5\.3\.5 Ridgeline plots
A ridgeline plot (also called a joyplot) displays the distribution of a quantitative variable for several groups. They’re similar to [kernel density](Univariate.html#Kernel) plots with vertical [faceting](Multivariate.html#Faceting), but take up less room. Ridgeline plots are created with the **ggridges** package.
Using the [mpg](Datasets.html#MPG) dataset, let’s plot the distribution of city driving miles per gallon by car class.
```
# create ridgeline graph
library(ggplot2)
library(ggridges)
ggplot(mpg,
aes(x = cty, y = class, fill = class)) +
geom_density_ridges() +
theme_ridges() +
labs("Highway mileage by auto class") +
theme(legend.position = "none")
```
Figure 5\.22: Ridgeline graph with color fill
I’ve suppressed the legend here because it’s redundant (the distributions are already labeled on the *y*\-axis). Unsurprisingly, pickup trucks have the poorest mileage, while subcompacts and compact cars tend to achieve ratings. However, there is a very wide range of gas mileage scores for these smaller cars.
Note the the possible overlap of distributions is the trade\-off for a more compact graph. You can add transparency if the the overlap is severe using `geom_density_ridges(alpha = n)`, where *n* ranges from 0 (transparent) to 1 (opaque). See the package vignette ([https://cran.r\-project.org/web/packages/ggridges/vignettes/introduction.html](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html)) for more details.
### 5\.3\.6 Mean/SEM plots
A popular method for comparing groups on a numeric variable is a mean plot with error bars. Error bars can represent standard deviations, standard errors of the means, or confidence intervals. In this section, we’ll calculate all three, but only plot means and standard errors to save space.
```
# calculate means, standard deviations,
# standard errors, and 95% confidence
# intervals by rank
library(dplyr)
plotdata <- Salaries %>%
group_by(rank) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd / sqrt(n),
ci = qt(0.975, df = n - 1) * sd / sqrt(n))
```
The resulting dataset is given below.
Table 5\.1: Plot data
| rank | n | mean | sd | se | ci |
| --- | --- | --- | --- | --- | --- |
| AsstProf | 67 | 80775\.99 | 8174\.113 | 998\.6268 | 1993\.823 |
| AssocProf | 64 | 93876\.44 | 13831\.700 | 1728\.9625 | 3455\.056 |
| Prof | 266 | 126772\.11 | 27718\.675 | 1699\.5410 | 3346\.322 |
```
# plot the means and standard errors
ggplot(plotdata,
aes(x = rank,
y = mean,
group = 1)) +
geom_point(size = 3) +
geom_line() +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1)
```
Figure 5\.23: Mean plots with standard error bars
Although we plotted error bars representing the standard error, we could have plotted standard deviations or 95% confidence intervals. Simply replace `se` with `sd` or `error` in the `aes` option.
We can use the same technique to compare salary across rank and sex. (Technically, this is not bivariate since we’re plotting rank, sex, and salary, but it seems to fit here.)
```
# calculate means and standard errors by rank and sex
plotdata <- Salaries %>%
group_by(rank, sex) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd/sqrt(n))
# plot the means and standard errors by sex
ggplot(plotdata, aes(x = rank,
y = mean,
group=sex,
color=sex)) +
geom_point(size = 3) +
geom_line(size = 1) +
geom_errorbar(aes(ymin =mean - se,
ymax = mean+se),
width = .1)
```
Figure 5\.24: Mean plots with standard error bars by sex
Unfortunately, the error bars overlap. We can dodge the horizontal positions a bit to overcome this.
```
# plot the means and standard errors by sex (dodged)
pd <- position_dodge(0.2)
ggplot(plotdata,
aes(x = rank,
y = mean,
group=sex,
color=sex)) +
geom_point(position = pd,
size = 3) +
geom_line(position = pd,
size = 1) +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1,
position= pd)
```
Figure 5\.25: Mean plots with standard error bars (dodged)
Finally, lets add some options to make the graph more attractive.
```
# improved means/standard error plot
pd <- position_dodge(0.2)
ggplot(plotdata,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = mean, group=sex, color=sex)) +
geom_point(position=pd,
size=3) +
geom_line(position=pd,
size = 1) +
geom_errorbar(aes(ymin = mean - se,
ymax = mean + se),
width = .1,
position=pd,
size=1) +
scale_y_continuous(label = scales::dollar) +
scale_color_brewer(palette="Set1") +
theme_minimal() +
labs(title = "Mean salary by rank and sex",
subtitle = "(mean +/- standard error)",
x = "",
y = "",
color = "Gender")
```
Figure 5\.26: Mean/se plot with better labels and colors
This is a graph you could publish in a journal.
### 5\.3\.7 Strip plots
The relationship between a grouping variable and a numeric variable can be also displayed with a scatter plot. For example
```
# plot the distribution of salaries
# by rank using strip plots
ggplot(Salaries, aes(y = rank, x = salary)) +
geom_point() +
labs(title = "Salary distribution by rank")
```
Figure 5\.27: Categorical by quantiative scatterplot
These one\-dimensional scatterplots are called strip plots. Unfortunately, overprinting of points makes interpretation difficult. The relationship is easier to see if the the points are jittered. Basically a small random number is added to each y\-coordinate. To jitter the points, replace `geom_point` with `geom_jitter`.
```
# plot the distribution of salaries
# by rank using jittering
ggplot(Salaries, aes(y = rank, x = salary)) +
geom_jitter() +
labs(title = "Salary distribution by rank")
```
Figure 5\.28: Jittered plot
It is easier to compare groups if we use color.
```
# plot the distribution of salaries
# by rank using jittering
library(scales)
ggplot(Salaries,
aes(y = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
x = salary, color = rank)) +
geom_jitter(alpha = 0.7) +
scale_x_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 5\.29: Fancy jittered plot
The option `legend.position = "none"` is used to suppress the legend (which is not needed here). Jittered plots work well when the number of points is not overly large. Here, we can not only compare groups, but also see the salary of each individual faculty member. As a college professor myself, I want to know who is making more than $200,000 on a nine\-month contract!
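If you are curious too, a quick **dplyr** filter will list them; this is just a sketch using the Salaries columns already introduced.
```
# sketch: list faculty earning more than $200,000
library(dplyr)
Salaries %>%
  filter(salary > 200000) %>%
  select(rank, discipline, yrs.since.phd, yrs.service, sex, salary) %>%
  arrange(desc(salary))
```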
Finally, we can superimpose boxplots on the jitter plots.
```
# plot the distribution of salaries
# by rank using jittering
library(scales)
ggplot(Salaries,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = salary, color = rank)) +
geom_boxplot(size=1,
outlier.shape = 1,
outlier.color = "black",
outlier.size = 3) +
geom_jitter(alpha = 0.5,
width=.2) +
scale_y_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none") +
coord_flip()
```
Figure 5\.30: Jitter plot with superimposed box plots
Several options were added to create this plot.
For the boxplot
* `size = 1` makes the lines thicker
* `outlier.color = "black"` makes outliers black
* `outlier.shape = 1` specifies circles for outliers
* `outlier.size = 3` increases the size of the outlier symbol
For the jitter
* `alpha = 0.5` makes the points more transparent
* `width = .2` decreases the amount of jitter (.4 is the default)
Finally, the *x* and *y* axes are reversed using the `coord_flip` function (i.e., the graph is turned on its side).
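As an aside, recent versions of ggplot2 (3.3.0 or later) can draw horizontal boxplots directly by mapping the categorical variable to *y*, just as the earlier strip plots did, so `coord_flip` becomes optional. A minimal sketch:
```
# sketch: horizontal boxplot without coord_flip (requires ggplot2 >= 3.3.0)
ggplot(Salaries, aes(y = rank, x = salary, color = rank)) +
  geom_boxplot() +
  geom_jitter(alpha = 0.5, height = .2) +  # jitter along the categorical axis
  theme_minimal() +
  theme(legend.position = "none")
```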
Before moving on, it is worth mentioning the [`geom_boxjitter`](https://www.rdocumentation.org/packages/ggpol/versions/0.0.1/topics/geom_boxjitter) function provided in the [**ggpol**](https://erocoar.github.io/ggpol/) package. It creates a hybrid graph \- half boxplot, half scatterplot.
```
# plot the distribution of salaries
# by rank using jittering
library(ggpol)
library(scales)
ggplot(Salaries,
aes(x = factor(rank,
labels = c("Assistant\nProfessor",
"Associate\nProfessor",
"Full\nProfessor")),
y = salary,
fill=rank)) +
geom_boxjitter(color="black",
jitter.color = "darkgrey",
errorbar.draw = TRUE) +
scale_y_continuous(label = dollar) +
labs(title = "Academic Salary by Rank",
subtitle = "9-month salary for 2008-2009",
x = "",
y = "") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 5\.31: Using geom\_boxjitter
Choose the approach that you find most useful.
### 5\.3\.8 Cleveland Dot Charts
Cleveland plots are useful when you want to compare each observation on a numeric variable, or compare a large number of groups on a numeric summary statistic. For example, say that you want to compare the 2007 life expectancy of each Asian country using the [gapminder](Datasets.html#Gapminder) dataset.
```
data(gapminder, package="gapminder")
# subset Asian countries in 2007
library(dplyr)
plotdata <- gapminder %>%
filter(continent == "Asia" &
year == 2007)
# basic Cleveland plot of life expectancy by country
ggplot(plotdata,
aes(x= lifeExp, y = country)) +
geom_point()
```
Figure 5\.32: Basic Cleveland dot plot
Comparisons are usually easier if the *y*\-axis is sorted.
```
# Sorted Cleveland plot
ggplot(plotdata, aes(x=lifeExp,
y=reorder(country, lifeExp))) +
geom_point()
```
Figure 5\.33: Sorted Cleveland dot plot
The difference in life expectancy between countries like Japan and Afghanistan is striking.
Finally, we can use options to make the graph more attractive by removing unnecessary elements, like the grey background panel and horizontal reference lines, and adding a line segment connecting each point to the y axis.
```
# Fancy Cleveland plot
ggplot(plotdata, aes(x=lifeExp,
y=reorder(country, lifeExp))) +
geom_point(color="blue", size = 2) +
geom_segment(aes(x = 40,
xend = lifeExp,
y = reorder(country, lifeExp),
yend = reorder(country, lifeExp)),
color = "lightgrey") +
labs (x = "Life Expectancy (years)",
y = "",
title = "Life Expectancy by Country",
subtitle = "GapMinder data for Asia - 2007") +
theme_minimal() +
theme(panel.grid.major = element_blank(),
panel.grid.minor = element_blank())
```
Figure 5\.34: Fancy Cleveland plot
This last plot is also called a lollipop graph (I wonder why?).
Chapter 6 Multivariate Graphs
=============================
In the last two chapters, you looked at ways to display the distribution of a single variable, or the relationship between two variables. We are usually interested in understanding the relations among several variables. Multivariate graphs display the relationships among three or more variables. There are two common methods for accommodating multiple variables: grouping and faceting.
6\.1 Grouping
-------------
In *grouping*, the values of the first two variables are mapped to the *x* and *y* axes. Then additional variables are mapped to other visual characteristics such as color, shape, size, line type, and transparency. Grouping allows you to plot the data for multiple groups in a single graph.
Using the [Salaries](Datasets.html#Salaries) dataset, let’s display the relationship between *yrs.since.phd* and *salary*.
```
library(ggplot2)
data(Salaries, package="carData")
# plot experience vs. salary
ggplot(Salaries,
aes(x = yrs.since.phd, y = salary)) +
geom_point() +
labs(title = "Academic salary by years since degree")
```
Figure 6\.1: Simple scatterplot
Next, let’s include the rank of the professor, using color.
```
# plot experience vs. salary (color represents rank)
ggplot(Salaries, aes(x = yrs.since.phd,
y = salary,
color=rank)) +
geom_point() +
labs(title = "Academic salary by rank and years since degree")
```
Figure 6\.2: Scatterplot with color mapping
Finally, let’s add the gender of professor, using shape of the points to indicate sex. We’ll increase the point size and transparency to make the individual points clearer.
```
# plot experience vs. salary
# (color represents rank, shape represents sex)
ggplot(Salaries, aes(x = yrs.since.phd,
y = salary,
color = rank,
shape = sex)) +
geom_point(size = 3, alpha = .6) +
labs(title = "Academic salary by rank, sex, and years since degree")
```
Figure 6\.3: Scatterplot with color and shape mapping
Notice the difference between specifying a constant value (such as `size = 3`) and a mapping of a variable to a visual characteristic (e.g., `color = rank`). Mappings are always placed within the `aes` function, while the assignment of a constant value always appear outside of the `aes` function.
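The distinction is easiest to see side by side. In this minimal sketch, the first plot maps color to rank inside `aes`, while the second assigns a constant blue outside of `aes` (the color choice is arbitrary).
```
# mapping: color varies with the data, so it goes inside aes()
ggplot(Salaries, aes(x = yrs.since.phd, y = salary, color = rank)) +
  geom_point()

# constant: every point is blue, so it goes outside aes()
ggplot(Salaries, aes(x = yrs.since.phd, y = salary)) +
  geom_point(color = "blue", size = 3)
```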
Here is another example. We’ll graph the relationship between years since Ph.D. and salary using the size of the points to indicate years of service. This is called a bubble plot.
```
library(ggplot2)
data(Salaries, package="carData")
# plot experience vs. salary
# (color represents rank and size represents service)
ggplot(Salaries, aes(x = yrs.since.phd,
y = salary,
color = rank,
size = yrs.service)) +
geom_point(alpha = .6) +
labs(title = paste0("Academic salary by rank, years of service, ",
"and years since degree"))
```
Figure 6\.4: Scatterplot with size and color mapping
[Bubble plots](Other.html#Bubble) are described in more detail in a later chapter.
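If the default bubble sizes are hard to distinguish, the size scale can be adjusted with `scale_size`; the `range` values below are arbitrary choices, not recommendations.
```
# sketch: widen the range of bubble sizes
ggplot(Salaries, aes(x = yrs.since.phd,
                     y = salary,
                     color = rank,
                     size = yrs.service)) +
  geom_point(alpha = .6) +
  scale_size(range = c(1, 10))  # smallest and largest point sizes
```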
As a final example, let’s look at the *yrs.since.phd* vs *salary* and add *sex* using color and [quadratic best fit](Bivariate.html#BestFit) lines.
```
# plot experience vs. salary with
# fit lines (color represents sex)
ggplot(Salaries,
aes(x = yrs.since.phd,
y = salary,
color = sex)) +
geom_point(alpha = .4,
size=3) +
geom_smooth(se=FALSE,
method="lm",
formula=y~poly(x,2),
size = 1.5) +
labs(x = "Years Since Ph.D.",
title = "Academic Salary by Sex and Years Experience",
subtitle = "9-month salary for 2008-2009",
y = "",
color = "Sex") +
scale_y_continuous(label = scales::dollar) +
scale_color_brewer(palette="Set1") +
theme_minimal()
```
Figure 6\.5: Scatterplot with color mapping and quadratic fit lines
6\.2 Faceting
-------------
[Grouping](Multivariate.html#Grouping) allows you to plot multiple variables in a single graph, using visual characteristics such as color, shape, and size. In *faceting*, a graph consists of several separate plots or *small multiples*, one for each level of a third variable, or combination of two variables. It is easiest to understand this with an example.
```
# plot salary histograms by rank
ggplot(Salaries, aes(x = salary)) +
geom_histogram() +
facet_wrap(~rank, ncol = 1) +
labs(title = "Salary histograms by rank")
```
Figure 6\.6: Salary distribution by rank
The `facet_wrap` function creates a separate graph for each level of rank. The `ncol` option controls the number of columns.
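`facet_wrap` has other useful options as well. For example, `scales = "free_y"` gives each panel its own *y* axis, which can help when the groups differ greatly in size. A minimal sketch:
```
# sketch: let each panel use its own y-axis scale
ggplot(Salaries, aes(x = salary)) +
  geom_histogram() +
  facet_wrap(~rank, ncol = 1, scales = "free_y") +
  labs(title = "Salary histograms by rank")
```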
In the next example, two variables are used to define the facets.
```
# plot salary histograms by rank and sex
ggplot(Salaries, aes(x = salary/1000)) +
geom_histogram() +
facet_grid(sex ~ rank) +
labs(title = "Salary histograms by sex and rank",
x = "Salary ($1000)")
```
Figure 6\.7: Salary distribution by rank and sex
Here, the `facet_grid` function defines the rows (sex) and columns (rank) that separate the data into 6 plots in one graph.
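The formula in `facet_grid` reads *rows ~ columns*, and a dot on either side leaves that dimension unfaceted. The sketch below facets by rank in columns only.
```
# sketch: facet by rank in columns only (no row faceting)
ggplot(Salaries, aes(x = salary/1000)) +
  geom_histogram() +
  facet_grid(. ~ rank) +
  labs(x = "Salary ($1000)")
```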
We can also combine grouping and faceting.
```
# plot salary by years of experience by sex and discipline
ggplot(Salaries,
aes(x=yrs.since.phd, y = salary, color=sex)) +
geom_point() +
geom_smooth(method="lm",
se=FALSE) +
facet_wrap(~discipline,
ncol = 1)
```
Figure 6\.8: Salary by experience, rank, and sex
Let’s make this last plot more attractive.
```
# plot salary by years of experience by sex and discipline
ggplot(Salaries, aes(x=yrs.since.phd,
y = salary,
color=sex)) +
geom_point(size = 2,
alpha=.5) +
geom_smooth(method="lm",
se=FALSE,
size = 1.5) +
facet_wrap(~factor(discipline,
labels = c("Theoretical", "Applied")),
ncol = 1) +
scale_y_continuous(labels = scales::dollar) +
theme_minimal() +
scale_color_brewer(palette="Set1") +
labs(title = paste0("Relationship of salary and years ",
"since degree by sex and discipline"),
subtitle = "9-month salary for 2008-2009",
color = "Gender",
x = "Years since Ph.D.",
y = "Academic Salary")
```
Figure 6\.9: Salary by experience, rank, and sex (better labeled)
See the [Customizing](Customizing.html#Customizing) section to learn more about customizing the appearance of a graph.
As a final example, we’ll shift to a new dataset and plot the change in life expectancy over time for countries in the “Americas”. The data comes from the [gapminder](Datasets.html#Gapminder) dataset in the **gapminder** package. Each country appears in its own facet. The **theme** functions are used to simplify the background color, rotate the x\-axis text, and make the font size smaller.
```
# plot life expectancy by year separately
# for each country in the Americas
data(gapminder, package = "gapminder")
# Select the Americas data
plotdata <- dplyr::filter(gapminder,
continent == "Americas")
# plot life expectancy by year, for each country
ggplot(plotdata, aes(x=year, y = lifeExp)) +
geom_line(color="grey") +
geom_point(color="blue") +
facet_wrap(~country) +
theme_minimal(base_size = 9) +
theme(axis.text.x = element_text(angle = 45,
hjust = 1)) +
labs(title = "Changes in Life Expectancy",
x = "Year",
y = "Life Expectancy")
```
Figure 6\.10: Changes in life expectancy by country
We can see that life expectancy is increasing in each country, but that Haiti is lagging behind.
Combining grouping and faceting with graphs for one (Chapter [4](Univariate.html#Univariate)) or two (Chapter [5](Bivariate.html#Bivariate)) variables allows you to create a wide range of visualizations for exploring data! You are limited only by your imagination and the over\-riding goal of communicating information clearly.
Chapter 7 Maps
==============
Data are often tied to geographic locations. Examples include traffic accidents in a city, inoculation rates by state, or life expectancy by country. Viewing data superimposed onto a map can help you discover important patterns, outliers, and trends. It can also be an impactful way of conveying information to others.
In order to plot data on a map, you will need position information that ties each observation to a location. Typically position information is provided in the form of street addresses, geographic coordinates (longitude and latitude), or the names of counties, cities, or countries.
R provides a myriad of methods for creating both static and interactive maps containing spatial information. In this chapter, you’ll use **tidygeocoder**, **ggmap**, **mapview**, **choroplethr**, and **sf** to plot data onto maps.
7\.1 Geocoding
--------------
Geocoding translates physical addresses (e.g., street addresses) to geographic coordinates (such as longitude and latitude). The **tidygeocoder** package contains functions that can accomplish this translation in either direction.
Consider the following dataset.
```
location <- c("lunch", "view")
addr <- c( "10 Main Street, Middletown, CT",
"20 W 34th St., New York, NY, 10001")
df <- data.frame(location, addr)
```
Table 7\.1: Address data
| location | addr |
| --- | --- |
| lunch | 10 Main Street, Middletown, CT |
| view | 20 W 34th St., New York, NY, 10001 |
The first observation contains the street address of my favorite pizzeria. The second address is the location of the Empire State Building. I can get the latitude and longitude of these addresses using the `geocode` function.
```
library(tidygeocoder)
df <- tidygeocoder::geocode(df, address = addr, method = "osm")
```
The *address* argument points to the variable containing the street address. The *method* refers to the geocoding service employed (*osm* or Open Street Maps here).
Table 7\.2: Address data with latitude and longitude
| location | addr | lat | long |
| --- | --- | --- | --- |
| lunch | 10 Main Street, Middletown, CT | 41\.55713 | \-72\.64697 |
| view | 20 W 34th St., New York, NY, 10001 | 40\.74865 | \-73\.98530 |
The `geocode` function supports many other services including the US Census, ArcGIS, Google, MapQuest, TomTom and others. See `?geocode` and `?geo` for details.
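Since the package also works in the reverse direction, here is a hedged sketch of reverse geocoding the coordinates produced above; the `reverse_geocode` argument names follow my reading of the tidygeocoder documentation and should be checked against your installed version.
```
# sketch: translate coordinates back into addresses
# (assumes the df created above, with lat and long columns added by geocode)
library(tidygeocoder)
df2 <- reverse_geocode(df,
                       lat = lat,
                       long = long,
                       address = found_address,  # name of the new column
                       method = "osm")
df2
```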
7\.2 Dot density maps
---------------------
Now that we know how to obtain latitude/longitude coordinates from address data, let’s look at dot density maps. Dot density graphs plot observations as points on a map.
The [Houston crime](Datasets.html#HoustonCrime) dataset (see Appendix [A.10](Datasets.html#HoustonCrime)) contains the date, time, and address of six types of criminal offenses reported between January and August 2010\. We’ll use this dataset to plot the locations of homicide reports.
```
library(ggmap)
# subset the data
library(dplyr)
homicide <- filter(crime, offense == "murder") %>%
select(date, offense, address, lon, lat)
# view data
head(homicide, 3)
```
```
## date offense address lon lat
## 1 1/1/2010 murder 9650 marlive ln -95.43739 29.67790
## 2 1/5/2010 murder 1350 greens pkwy -95.43944 29.94292
## 3 1/5/2010 murder 10250 bissonnet st -95.55906 29.67480
```
We can create dot density maps using either the **mapview** or **ggmap** packages. The mapview package uses the **sf** and **leaflet** packages (primarily) to quickly create interactive maps. The **ggmap** package uses ggplot2 to create static maps.
### 7\.2\.1 Interactive maps with mapview
Let’s create an interactive map using the **mapview** and **sf** packages. If you are reading a hardcopy version of this chapter, be sure to run the code in order to interact with the graph.
First, the sf function `st_as_sf` converts the data frame to an *sf* object. An sf, or *simple features*, object is a data frame containing attributes and spatial geometries that follows a widely accepted format for geographic vector data. The argument `crs = 4326` specifies a popular coordinate reference system. The `mapview` function takes this sf object and generates an interactive graph.
```
library(mapview)
library(sf)
mymap <- st_as_sf(homicide, coords = c("lon", "lat"), crs = 4326)
mapview(mymap)
```
> Clicking on a point opens a pop\-up box containing the observation’s data. You can zoom in or out of the graph using the scroll wheel on your mouse, or via the \+ and \- in the upper left corner of the plot. Below that is an option for choosing the base graph type. There is a home button in the lower right corner of the graph that resets the orientation of the graph.
There are numerous options for changing the plot. For example, let’s change the point outline to black, the point fill to red, and the transparency to 0\.5 (halfway between transparent and opaque). We’ll also suppress the legend and home button and set the base map source to *OpenStreetMap*.
```
library(sf)
library(mapview)
mymap <- st_as_sf(homicide, coords = c("lon", "lat"), crs = 4326)
mapview(mymap, color="black", col.regions="red",
alpha.regions=0.5, legend = FALSE,
homebutton = FALSE, map.types = "OpenStreetMap" )
```
Other map types include CartoDB.Positron, CartoDB.DarkMatter, Esri.WorldImagery, and OpenTopoMap.
#### 7\.2\.1\.1 Using leaflet
**Leaflet** (<https://leafletjs.com/>) is a JavaScript library for interactive maps, and the `leaflet` package can be used to generate leaflet graphs in R. The mapview package uses the leaflet package when creating maps. I’ve focused on mapview because of its ease of use. For completeness, let’s use leaflet directly.
The following is a simple example. You can click on the pin, zoom in and out with the \+/\- buttons or mouse wheel, and drag the map around with the hand cursor.
```
# create leaflet graph
library(leaflet)
leaflet() %>%
addTiles() %>%
addMarkers(lng=-72.6560002,
lat=41.5541829,
popup="The birthplace of quantitative wisdom.</br>
No, Waldo is not here.")
```
Figure 7\.1: Interactive map (leaflet)
Leaflet allows you to create both dot density and choropleth maps. The package website (<https://rstudio.github.io/leaflet/>) offers a detailed tutorial and numerous examples.
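For a larger number of points, circle markers usually work better than pins. The following minimal sketch plots the Houston homicide locations from the `homicide` data frame created earlier; the radius, color, and opacity values are arbitrary.
```
# sketch: homicide locations as leaflet circle markers
library(leaflet)
leaflet(homicide) %>%
  addTiles() %>%
  addCircleMarkers(lng = ~lon, lat = ~lat,
                   radius = 4, color = "red",
                   stroke = FALSE, fillOpacity = 0.5,
                   popup = ~address)
```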
### 7\.2\.2 Static maps with ggmap
You can create a static map using the free Stamen Map Tiles (<http://maps.stamen.com>), or the paid Google maps platform (<http://developers.google.com/maps>). We’ll consider each in turn.
#### 7\.2\.2\.1 Stamen maps
As of July 31, 2023, Stamen Map Tiles are served by [Stadia Maps](https://stadiamaps.com). To create a stamen map, you will need to obtain a Stadia Maps API key. The service is free for non\-commercial use.
The steps are
* Sign up for an account at [stadiamaps.com](https://client.stadiamaps.com/signup/).
* Go to the [client dashboard](https://client.stadiamaps.com/dashboard/). The client dashboard lets you generate, view, or revoke your API key.
* Click on “Manage Properties”. Under “Authentication Configuration”, generate your API key. Save this key and keep it private.
* In R, use ggmap::register\_stadiamaps(“your API key”) to register your key.
```
ggmap::register_stadiamaps("your API key")
```
To create a stamen map, you’ll need a bounding box \- the latitude and longitude for each corner of the map. The `getbb` function in the `osmdata` package can provide this.
```
# find a bounding box for Houston, Texas
library(osmdata)
bb <- getbb("houston, tx")
bb
```
```
## min max
## x -95.90974 -95.01205
## y 29.53707 30.11035
```
The `get_stadiamap` function takes this information and returns the map. The `ggmap` function then plots the map.
```
library(ggmap)
houston <- get_stadiamap(bbox = c(bb[1,1], bb[2,1],
bb[1,2], bb[2,2]),
maptype="stamen_toner_lite")
ggmap(houston)
```
Figure 7\.2: Static Houston map
The map returned by the `ggmap` function is a ggplot2 graph. We can add to this graph using the standard **ggplot2** functions.
```
# add incident locations
ggmap(houston) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5)
```
Figure 7\.3: Houston map with crime locations
To clean up the results, remove the axes and add meaningful labels.
```
# remove long and lat numbers and add titles
ggmap(houston) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5) +
theme_void() +
labs(title = "Location of reported homicides",
subtitle = "Houston Jan - Aug 2010",
caption = "source: http://www.houstontx.gov/police/cs/")
```
Figure 7\.4: Crime locations with titles, and without longitude and latitude
#### 7\.2\.2\.2 Google maps
To use a Google map as the base map, you will need a Google API key. Unfortunately this requires an account and valid credit card. Fortunately, Google provides a large number of uses for free, and a very reasonable rate afterwards (but I take no responsibility for any costs you incur!).
Go to [mapsplatform.google.com](http://mapsplatform.google.com) to create an account. Activate static maps and geocoding (you need to activate each separately), and receive your Google API key. Keep this API key safe and private! Once you have your key, you can create the dot density plot. The steps are listed below.
1. Find the center coordinates for Houston, TX
```
library(ggmap)
# using geocode function to obtain the center coordinates
register_google(key="PutYourGoogleAPIKeyHere")
houston_center <- geocode("Houston, TX")
```
```
houston_center
```
```
## lon lat
## -95.36980 29.76043
```
2. Get the background map image.
* Specify a `zoom` factor from 3 (continent) to 21 (building). The default is 10 (city).
* Specify a `maptype`. Types include terrain, terrain\-background, satellite, roadmap, hybrid, watercolor, and toner.
```
# get Houston map
houston_map <- get_map(houston_center,
zoom = 13,
maptype = "roadmap")
ggmap(houston_map)
```
Figure 7\.5: Houston map using Google Maps
3. Add crime locations to the map.
```
# add incident locations
ggmap(houston_map) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5)
```
Figure 7\.6: Houston crime locations using Google Maps
4. Clean up the plot and add labels.
```
# add incident locations
ggmap(houston_map) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5) +
theme_void() +
labs(title = "Location of reported homicides",
subtitle = "Houston Jan - Aug 2010",
caption = "source: http://www.houstontx.gov/police/cs/")
```
Figure 7\.7: Customize Houston crime locations using Google Maps
There seems to be a concentration of homicide reports in the southern portion of the city. However, this could simply reflect population density. More investigation is needed. To learn more about ggmap, see [ggmap: Spatial Visualization with ggplot2](https://journal.r-project.org/archive/2013-1/kahle-wickham.pdf) ([Kahle and Wickham 2013](#ref-RN8)).
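One way to look past individual points is a two\-dimensional density overlay. The sketch below layers `stat_density_2d` on the Stamen base map created earlier; the transparency and color choices are arbitrary, and this is only a rough visual check, not a formal analysis.
```
# sketch: density overlay of homicide reports on the Stamen base map
ggmap(houston) +
  stat_density_2d(data = homicide,
                  aes(x = lon, y = lat, fill = after_stat(level)),
                  geom = "polygon", alpha = 0.3) +
  scale_fill_gradient(low = "yellow", high = "red") +
  theme_void() +
  theme(legend.position = "none")
```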
7\.3 Choropleth maps
--------------------
Choropleth maps use color or shading on predefined areas to indicate the values of a numeric variable in that area. There are numerous approaches to creating choropleth maps. One of the easiest relies on Ari Lamstein’s excellent `choroplethr` package, which can create maps that display information by country, US state, and US county.
There may be times that you want to create a map for an area not covered in the choroplethr package. Additionally, you may want to create a map with greater customization. Towards this end, we’ll also look at a more customizable approach using a shapefile and the **sf** and **ggplot2** packages.
### 7\.3\.1 Data by country
Let’s create a world map and color the countries by life expectancy using the 2007 [gapminder](Datasets.html#Gapminder) data.
The **choroplethr** package has numerous functions that simplify the task of creating a choropleth map. To plot the life expectancy data, we’ll use the [`country_choropleth`](https://www.rdocumentation.org/packages/choroplethr/versions/3.6.1/topics/county_choropleth) function.
The function requires that the data frame to be plotted has a column named *region* and a column named *value*. Additionally, the entries in the *region* column must exactly match how the entries are named in the *region* column of the dataset `country.map` from the **choroplethrMaps** package.
```
# view the first 12 region names in country.map
data(country.map, package = "choroplethrMaps")
head(unique(country.map$region), 12)
```
```
## [1] "afghanistan" "angola" "azerbaijan" "moldova" "madagascar"
## [6] "mexico" "macedonia" "mali" "myanmar" "montenegro"
## [11] "mongolia" "mozambique"
```
Note that the region entries are all lower case.
To continue, we need to make some edits to our gapminder dataset. Specifically, we need to
1. select the 2007 data
2. rename the *country* variable to *region*
3. rename the *lifeExp* variable to *value*
4. recode *region* values to lower case
5. recode some *region* values to match the region values in the country.map data frame. The `recode` function in the **dplyr** package takes the form `recode(variable, oldvalue1 = newvalue1, oldvalue2 = newvalue2, ...)`
```
# prepare dataset
data(gapminder, package = "gapminder")
plotdata <- gapminder %>%
filter(year == 2007) %>%
rename(region = country,
value = lifeExp) %>%
mutate(region = tolower(region)) %>%
mutate(region =
recode(region,
"united states" = "united states of america",
"congo, dem. rep." = "democratic republic of the congo",
"congo, rep." = "republic of congo",
"korea, dem. rep." = "south korea",
"korea. rep." = "north korea",
"tanzania" = "united republic of tanzania",
"serbia" = "republic of serbia",
"slovak republic" = "slovakia",
"yemen, rep." = "yemen"))
```
Now let’s create the map.
```
library(choroplethr)
country_choropleth(plotdata)
```
Figure 7\.8: Choropleth map of life expectancy
The choroplethr functions return ggplot2 graphs, so we can make this one a bit more attractive by adding standard ggplot2 functions.
```
country_choropleth(plotdata,
num_colors=9) +
scale_fill_brewer(palette="YlOrRd") +
labs(title = "Life expectancy by country",
subtitle = "Gapminder 2007 data",
caption = "source: https://www.gapminder.org",
fill = "Years")
```
Figure 7\.9: Choropleth map of life expectancy with labels and a better color scheme
Note that the `num_colors` option controls how many colors are used in the graph. The default is seven and the maximum is nine.
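If I am reading the choroplethr documentation correctly, setting `num_colors = 1` switches to a continuous color scale rather than binned colors; a minimal sketch:
```
# sketch: continuous color scale (num_colors = 1)
country_choropleth(plotdata, num_colors = 1) +
  labs(title = "Life expectancy by country",
       subtitle = "Gapminder 2007 data",
       fill = "Years")
```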
### 7\.3\.2 Data by US state
For US data, the choroplethr package provides functions for creating maps by county, state, zip code, and census tract. Additionally, map regions can be labeled.
Let’s plot US states by [Hispanic and Latino populations](Datasets.html#HispLat), using the 2010 Census (see Appendix [A.11](Datasets.html#HispLat)).
To plot the population data, we’ll use the [`state_choropleth`](https://www.rdocumentation.org/packages/choroplethr/versions/3.6.1/topics/state_choropleth) function. The function requires that the data frame to be plotted has a column named *region* to represent state, and a column named *value* (the quantity to be plotted). Additionally, the entries in the *region* column must exactly match how the entries are named in the *region* column of the dataset state.map from the **choroplethrMaps** package.
The `zoom = continental_us_states` option will create a map that excludes Hawaii and Alaska.
```
library(ggplot2)
library(choroplethr)
data(continental_us_states)
# input the data
library(readr)
hisplat <- read_tsv("hisplat.csv")
# prepare the data
hisplat$region <- tolower(hisplat$state)
hisplat$value <- hisplat$percent
# create the map
state_choropleth(hisplat,
num_colors=9,
zoom = continental_us_states) +
scale_fill_brewer(palette="YlGnBu") +
labs(title = "Hispanic and Latino Population",
subtitle = "2010 US Census",
caption = "source: https://tinyurl.com/2fp7c5bw",
fill = "Percent")
```
Figure 7\.10: Choropleth map of US States
### 7\.3\.3 Data by US county
Finally, let’s plot data by US counties. We’ll plot the violent crime rate per 1000 individuals for Connecticut counties in 2012\. Data come from the FBI Uniform Crime Statistics.
We’ll use the `county_choropleth` function. Again, the function requires that the data frame to be plotted has a column named *region* and a column named *value*.
Additionally, the entries in the *region* column must be numeric codes and exactly match how the entries are given in the *region* column of the dataset `county.map` from the `choroplethrMaps` package.
Our dataset has county names (e.g., fairfield). However, we need region codes (e.g., 9001\). We can use the `county.regions` dataset to look up the region code for each county name.
Additionally, we’ll use the option `reference_map = TRUE` to add a reference map from Google Maps.
```
library(ggplot2)
library(choroplethr)
library(dplyr)
# enter violent crime rates by county
crimes_ct <- data.frame(
county = c("fairfield", "hartford",
"litchfield", "middlesex",
"new haven", "new london",
"tolland", "windham"),
value = c(3.00, 3.32,
1.02, 1.24,
4.13, 4.61,
0.16, 1.60)
)
crimes_ct
```
```
## county value
## 1 fairfield 3.00
## 2 hartford 3.32
## 3 litchfield 1.02
## 4 middlesex 1.24
## 5 new haven 4.13
## 6 new london 4.61
## 7 tolland 0.16
## 8 windham 1.60
```
```
# obtain region codes for connecticut
data(county.regions,
package = "choroplethrMaps")
region <- county.regions %>%
filter(state.name == "connecticut")
region
```
```
# join crime data to region code data
plotdata <- inner_join(crimes_ct,
region,
by=c("county" = "county.name"))
plotdata
```
```
## county value region county.fips.character state.name
## 1 fairfield 3.00 9001 09001 connecticut
## 2 hartford 3.32 9003 09003 connecticut
## 3 litchfield 1.02 9005 09005 connecticut
## 4 middlesex 1.24 9007 09007 connecticut
## 5 new haven 4.13 9009 09009 connecticut
## 6 new london 4.61 9011 09011 connecticut
## 7 tolland 0.16 9013 09013 connecticut
## 8 windham 1.60 9015 09015 connecticut
## state.fips.character state.abb
## 1 09 CT
## 2 09 CT
## 3 09 CT
## 4 09 CT
## 5 09 CT
## 6 09 CT
## 7 09 CT
## 8 09 CT
```
```
# create choropleth map
county_choropleth(plotdata,
state_zoom = "connecticut",
reference_map = TRUE,
num_colors = 8) +
scale_fill_brewer(palette="YlOrRd") +
labs(title = "Connecticut Violent Crime Rates",
subtitle = "FBI 2012 data",
caption = "source: https://ucr.fbi.gov",
fill = "Violent Crime\n Rate Per 1000")
```
See the *choroplethr help* ([https://cran.r\-project.org/web/packages/choroplethr/choroplethr.pdf](https://cran.r-project.org/web/packages/choroplethr/choroplethr.pdf)) for more details.
### 7\.3\.4 Building a choropleth map using the sf and ggplot2 packages and a shapefile
As stated previously, there may be times that you want to map a region not covered by the choroplethr package. Additionally, you may want greater control over the customization.
In this section, we’ll create a map of the continental United States and color each state by its 2023 literacy rate (the percent of individuals who can both read and write). The [literacy rates](Datasets.html#Literacy) were obtained from the World Population Review (see Appendix [A.7](Datasets.html#Literacy)). Rather than using the choroplethr package, we’ll download a US state shapefile and create the map using the sf and ggplot2 packages.
1. Prepare a shapefile
A *shapefile* is a data format that spatially describes vector features such as points, lines, and polygons. The shapefile is used to draw the geographic boundaries of the map.
You will need to find a shapefile for the geographic area you want to plot. There are a wide range of shapefiles for cities, regions, states, and countries freely available on the internet. Natural Earth (<http://naturalearthdata.com>) is a good place to start. The shapefile used in the current example comes from the US Census Bureau ([https://www.census.gov/geographies/mapping\-files/time\-series/geo/cartographic\-boundary.html](https://www.census.gov/geographies/mapping-files/time-series/geo/cartographic-boundary.html)).
A shapefile will download as a zipped file. The code below unzips the file into a folder of the same name in the working directory (of course you can also do this by hand). The sf function `st_read` then converts the shapefile into a data frame that ggplot2 can access.
```
library(sf)
# unzip shape file
shapefile <- "cb_2022_us_state_20m.zip"
shapedir <- tools::file_path_sans_ext(shapefile)
if(!dir.exists(shapedir)){
unzip(shapefile, exdir=shapedir)
}
# convert the shapefile into a data frame
# of class sf (simple features)
USMap <- st_read("cb_2022_us_state_20m/cb_2022_us_state_20m.shp")
```
```
head(USMap, 3)
```
> Note that although the `st_read` function points to the .shp file, all the files in the folder must be present.
The *NAME* column contains the state identifier, *STUSPS* contains state abbreviations, and the *geometry* column is a special list object containing the coordinates needed to draw the state boundaries.
2. Prepare the data file
The literacy rates are contained in the comma delimited file named *USLitRates.csv*.
```
litRates <- read.csv("USLitRates.csv")
head(litRates, 3)
```
```
## State Rate
## 1 New Hampshire 94.2
## 2 Minnesota 94.0
## 3 North Dakota 93.7
```
One of the most annoying aspects of creating a choropleth map is that the location variable in the data file (*State* in this case) must exactly match the location variable in the sf data frame (*NAME* in this case).
The following code will help identify any mismatches. Mismatches are printed and can be corrected.
```
# states in litRates not in USMap
setdiff(litRates$State, USMap$NAME)
```
```
## character(0)
```
We have no mismatches, so we are ready to move on.
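Had `setdiff` returned any names, they could be recoded before the join. The sketch below uses a made\-up mismatch purely to illustrate the pattern; it is not needed for these data.
```
# sketch: correcting a hypothetical mismatch before joining
# ("new hampshire" is an invented example of a lower-case mismatch)
library(dplyr)
litRates <- litRates %>%
  mutate(State = recode(State, "new hampshire" = "New Hampshire"))
```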
3. Merge the data frames
The next step combines the two data frames. Since we want to focus on the lower 48 states, we’ll also eliminate Hawaii, Alaska, and Puerto Rico.
```
continentalUS <- USMap %>%
left_join(litRates, by=c("NAME"="State")) %>%
filter(NAME != "Hawaii" & NAME != "Alaska" &
NAME != "Puerto Rico")
head(continentalUS, 3)
```
4. Create the graph
The graph is created using ggplot2\. Rather than specifying `aes(x=, y=)`, `aes(geometry = geometry)` is used. The fill color is mapped to the literacy rate. The `geom_sf` function generates the map.
```
library(ggplot2)
ggplot(continentalUS, aes(geometry=geometry, fill=Rate)) +
geom_sf()
```
Figure 7\.11: Choropleth map of state literacy rates
5. Customize the graph
Before finishing, let’s customize the graph by
* removing the axes
* adding state labels
* modifying the fill colors and legend
* adding a title, subtitle, and caption
```
library(dplyr)
ggplot(continentalUS, aes(geometry=geometry, fill=Rate)) +
geom_sf() +
theme_void() +
geom_sf_text(aes(label=STUSPS), size=2) +
scale_fill_steps(low="yellow", high="royalblue",
n.breaks = 10) +
labs(title="Literacy Rates by State",
fill = "% literate",
x = "", y = "",
subtitle="Updated May 2023",
caption="source: https://worldpopulationreview.com")
```
Figure 7\.12: Customized choropleth map
The map clearly displays the range of literacy rates among the states. Rates are lowest in New York and California.
7\.4 Going further
------------------
We’ve just scratched the surface of what you can do with maps in R. To learn more, see the CRAN Task View on the [Analysis of Spatial Data](https://cran.r-project.org/web/views/Spatial.html) ([https://cran.r\-project.org/web/views/Spatial.html](https://cran.r-project.org/web/views/Spatial.html)) and [Geocomputation with R](https://r.geocompx.org/index.html), a comprehensive online and hard\-copy book ([Lovelace, Nowosad, and Muenchow 2019](#ref-RN7)).
7\.1 Geocoding
--------------
Geocoding translates physical addresses (e.g. street addresses) to geographical coordinates (such as longitude and latitude.) The **tidygeocoder** package contains functions that can accomplish this translation in either direction.
Consider the following dataset.
```
location <- c("lunch", "view")
addr <- c( "10 Main Street, Middletown, CT",
"20 W 34th St., New York, NY, 10001")
df <- data.frame(location, addr)
```
Table 7\.1: Address data
| location | addr |
| --- | --- |
| lunch | 10 Main Street, Middletown, CT |
| view | 20 W 34th St., New York, NY, 10001 |
The first observation contains the street address of my favorite pizzeria. The second address is location of the Empire State Building. I can get the latitude and longitude of these addresses using the `geocode` function.
```
library(tidygeocoder)
df <- tidygeocoder::geocode(df, address = addr, method = "osm")
```
The *address* argument points to the variable containing the street address. The *method* refers to the geocoding service employed (*osm* or Open Street Maps here).
Table 7\.2: Address data with latitude and longitude
| location | addr | lat | long |
| --- | --- | --- | --- |
| lunch | 10 Main Street, Middletown, CT | 41\.55713 | \-72\.64697 |
| view | 20 W 34th St., New York, NY, 10001 | 40\.74865 | \-73\.98530 |
The `geocode` function supports many other services including the US Census, ArcGIS, Google, MapQuest, TomTom and others. See `?geocode` and `?geo` for details.
7\.2 Dot density maps
---------------------
Now that we know to to obtain latitude/longitude from address data, let’s look at dot density maps. Dot density graphs plot observations as points on a map.
The [Houston crime](Datasets.html#HoustonCrime) dataset (see Appendix [A.10](Datasets.html#HoustonCrime)) contains the date, time, and address of six types of criminal offenses reported between January and August 2010\. We’ll use this dataset to plot the locations of homicide reports.
```
library(ggmap)
# subset the data
library(dplyr)
homicide <- filter(crime, offense == "murder") %>%
select(date, offense, address, lon, lat)
# view data
head(homicide, 3)
```
```
## date offense address lon lat
## 1 1/1/2010 murder 9650 marlive ln -95.43739 29.67790
## 2 1/5/2010 murder 1350 greens pkwy -95.43944 29.94292
## 3 1/5/2010 murder 10250 bissonnet st -95.55906 29.67480
```
We can create a dot density maps using either the **mapview** or **ggmap** packages. The mapview package uses the **sf** and **leaflet** packages (primarily) to quickly create interactive maps. The **ggmap** package uses ggplot2 to creates static maps.
### 7\.2\.1 Interactive maps with mapview
Let’s create an interactive map using the **mapview** and **sf** packages. If you are reading a hardcopy version of this chapter, be sure to run the code in order to to interact with the graph.
First, the sf function `st_as_sf`converts the data frame to an *sf* object. An sf or *simple features* object, is a data frame containing attributes and spatial geometries that follows a widely accepted format for geographic vector data. The argument `crs = 4326` specifies a popular coordinate reference system. The `mapview` function takes this sf object and generates an interactive graph.
```
library(mapview)
library(sf)
mymap <- st_as_sf(homicide, coords = c("lon", "lat"), crs = 4326)
mapview(mymap)
```
> Clicking on a point, opens a pop\-up box containing the observation’s data. You can zoom in or out of the graph using the scroll wheel on your mouse, or via the \+ and \- in the upper left corner of the plot. Below that is an option for choosing the base graph type. There is a home button in the lower right corner of the graph that resets the orientation of the graph.
There are numerous options for changing the the plot. For example, let’s change the point outline to black, the point fill to red, and the transparency to 0\.5 (halfway between transparent and opaque). We’ll also suppress the legend and home button and set the base map source to *OpenstreetMap*.
```
library(sf)
library(mapview)
mymap <- st_as_sf(homicide, coords = c("lon", "lat"), crs = 4326)
mapview(mymap, color="black", col.regions="red",
alpha.regions=0.5, legend = FALSE,
homebutton = FALSE, map.types = "OpenStreetMap" )
```
Other map types include include CartoDB.Positron, CartoDB.DarkMatter, Esri.WorldImagery, and OpenTopoMap.
#### 7\.2\.1\.1 Using leaflet
**Leaflet** (<https://leafletjs.com/>) is a javascript library for interactive maps and the `leaflet` package can be used to generate leaflet graphs in R. The mapview package uses the leaflet package when creating maps. I’ve focused on mapview because of its ease of use. For completeness, let’s use leaflet directly.
The following is a simple example. You can click on the pin, zoom in and out with the \+/\- buttons or mouse wheel, and drag the map around with the hand cursor.
```
# create leaflet graph
library(leaflet)
leaflet() %>%
addTiles() %>%
addMarkers(lng=-72.6560002,
lat=41.5541829,
popup="The birthplace of quantitative wisdom.</br>
No, Waldo is not here.")
```
Figure 7\.1: Interactive map (leaflet)
Leaflet allows you to create both dot density and choropleth maps. The package website (<https://rstudio.github.io/leaflet/>) offers a detailed tutorial and numerous examples.
### 7\.2\.2 Static maps with ggmap
You can create a static map using the free Stamen Map Tiles (<http://maps.stamen.com>), or the paid Google maps platform (<http://developers.google.com/maps>). We’ll consider each in turn.
#### 7\.2\.2\.1 Stamen maps
As of July 31, 2023, Stamen Map Tiles are served by [Stadia Maps](stadiamaps.com). To create a stamen map, you will need to obtain a Stadia Maps API key. The service is free from non\-commercial use.
The steps are
* Sign up for an account at [stadiamaps.com](https://client.stadiamaps.com/signup/).
* Go to the [client dashboard](https://client.stadiamaps.com/dashboard/).The client dashboard lets you generate, view, or revoke your API key.
* Click on “Manage Properties”. Under “Authentication Configuration”, generate your API key. Save this key and keep it private.
* In R, use ggmap::register\_stadiamaps(“your API key”) to register your key.
```
ggmap::register_stadiamaps("your API key")
```
To create a stamen map, you’ll need a bounding box \- the latitude and longitude for each corner of the map. The `getbb` function in the `osmdata` package can provide this.
```
# find a bounding box for Houston, Texas
library(osmdata)
bb <- getbb("houston, tx")
bb
```
```
## min max
## x -95.90974 -95.01205
## y 29.53707 30.11035
```
The `get_stadiamap` function takes this information and returns the map. The `ggmap` function then plots the map.
```
library(ggmap)
houston <- get_stadiamap(bbox = c(bb[1,1], bb[2,1],
bb[1,2], bb[2,2]),
maptype="stamen_toner_lite")
ggmap(houston)
```
Figure 7\.2: Static Houston map
The map returned by the ggmap function if a ggplot2 map. We can add to this graph using the standard **ggplot2** functions.
```
# add incident locations
ggmap(houston) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5)
```
Figure 7\.3: Houston map with crime locations
To clean up the results, remove the axes and add meaningful labels.
```
# remove long and lat numbers and add titles
ggmap(houston) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5) +
theme_void() +
labs(title = "Location of reported homocides",
subtitle = "Houston Jan - Aug 2010",
caption = "source: http://www.houstontx.gov/police/cs/")
```
Figure 7\.4: Crime locations with titles, and without longitude and latitude
#### 7\.2\.2\.2 Google maps
To use a Google map as the base map, you will need a Google API key. Unfortunately this requires an account and valid credit card. Fortunately, Google provides a large number of uses for free, and a very reasonable rate afterwards (but I take no responsibility for any costs you incur!).
Go to [mapsplatform.google.com](http://mapsplatform.google.com) to create an account. Activate static maps and geocoding (you need to activate each separately), and receive your Google API key. Keep this API key safe and private! Once you have your key, you can create the dot density plot. The steps are listed below.
1. Find the center coordinates for Houston, TX
```
library(ggmap)
# using geocode function to obtain the center coordinates
register_google(key="PutYourGoogleAPIKeyHere")
houston_center <- geocode("Houston, TX")
```
```
houston_center
```
```
## lon lat
## -95.36980 29.76043
```
2. Get the background map image.
* Specify a `zoom` factor from 3 (continent) to 21 (building). The default is 10 (city).
* Specify a `maptype`. Types include terrain, terrain\-background, satellite, roadmap, hybrid, watercolor, and toner.
```
# get Houston map
houston_map <- get_map(houston_center,
zoom = 13,
maptype = "roadmap")
ggmap(houston_map)
```
Figure 7\.5: Houston map using Google Maps
3. Add crime locations to the map.
```
# add incident locations
ggmap(houston_map) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5)
```
Figure 7\.6: Houston crime locations using Google Maps
4. Clean up the plot and add labels.
```
# add incident locations
ggmap(houston_map) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5) +
theme_void() +
labs(title = "Location of reported homocides",
subtitle = "Houston Jan - Aug 2010",
caption = "source: http://www.houstontx.gov/police/cs/")
```
Figure 7\.7: Customize Houston crime locations using Google Maps
There seems to be a concentration of homicide reports in the souther portion of the city. However this could simply reflect population density. More investigation is needed. To learn more about ggmap, see [ggmap: Spatial Visualization with ggplot2](https://journal.r-project.org/archive/2013-1/kahle-wickham.pdf) ([Kahle and Wickham 2013](#ref-RN8)).
### 7\.2\.1 Interactive maps with mapview
Let’s create an interactive map using the **mapview** and **sf** packages. If you are reading a hardcopy version of this chapter, be sure to run the code in order to to interact with the graph.
First, the sf function `st_as_sf`converts the data frame to an *sf* object. An sf or *simple features* object, is a data frame containing attributes and spatial geometries that follows a widely accepted format for geographic vector data. The argument `crs = 4326` specifies a popular coordinate reference system. The `mapview` function takes this sf object and generates an interactive graph.
```
library(mapview)
library(sf)
mymap <- st_as_sf(homicide, coords = c("lon", "lat"), crs = 4326)
mapview(mymap)
```
> Clicking on a point, opens a pop\-up box containing the observation’s data. You can zoom in or out of the graph using the scroll wheel on your mouse, or via the \+ and \- in the upper left corner of the plot. Below that is an option for choosing the base graph type. There is a home button in the lower right corner of the graph that resets the orientation of the graph.
There are numerous options for changing the the plot. For example, let’s change the point outline to black, the point fill to red, and the transparency to 0\.5 (halfway between transparent and opaque). We’ll also suppress the legend and home button and set the base map source to *OpenstreetMap*.
```
library(sf)
library(mapview)
mymap <- st_as_sf(homicide, coords = c("lon", "lat"), crs = 4326)
mapview(mymap, color="black", col.regions="red",
alpha.regions=0.5, legend = FALSE,
homebutton = FALSE, map.types = "OpenStreetMap" )
```
Other map types include include CartoDB.Positron, CartoDB.DarkMatter, Esri.WorldImagery, and OpenTopoMap.
#### 7\.2\.1\.1 Using leaflet
**Leaflet** (<https://leafletjs.com/>) is a javascript library for interactive maps and the `leaflet` package can be used to generate leaflet graphs in R. The mapview package uses the leaflet package when creating maps. I’ve focused on mapview because of its ease of use. For completeness, let’s use leaflet directly.
The following is a simple example. You can click on the pin, zoom in and out with the \+/\- buttons or mouse wheel, and drag the map around with the hand cursor.
```
# create leaflet graph
library(leaflet)
leaflet() %>%
addTiles() %>%
addMarkers(lng=-72.6560002,
lat=41.5541829,
popup="The birthplace of quantitative wisdom.</br>
No, Waldo is not here.")
```
Figure 7\.1: Interactive map (leaflet)
Leaflet allows you to create both dot density and choropleth maps. The package website (<https://rstudio.github.io/leaflet/>) offers a detailed tutorial and numerous examples.
#### 7\.2\.1\.1 Using leaflet
**Leaflet** (<https://leafletjs.com/>) is a javascript library for interactive maps and the `leaflet` package can be used to generate leaflet graphs in R. The mapview package uses the leaflet package when creating maps. I’ve focused on mapview because of its ease of use. For completeness, let’s use leaflet directly.
The following is a simple example. You can click on the pin, zoom in and out with the \+/\- buttons or mouse wheel, and drag the map around with the hand cursor.
```
# create leaflet graph
library(leaflet)
leaflet() %>%
addTiles() %>%
addMarkers(lng=-72.6560002,
lat=41.5541829,
popup="The birthplace of quantitative wisdom.</br>
No, Waldo is not here.")
```
Figure 7\.1: Interactive map (leaflet)
Leaflet allows you to create both dot density and choropleth maps. The package website (<https://rstudio.github.io/leaflet/>) offers a detailed tutorial and numerous examples.
### 7\.2\.2 Static maps with ggmap
You can create a static map using the free Stamen Map Tiles (<http://maps.stamen.com>), or the paid Google maps platform (<http://developers.google.com/maps>). We’ll consider each in turn.
#### 7\.2\.2\.1 Stamen maps
As of July 31, 2023, Stamen Map Tiles are served by [Stadia Maps](stadiamaps.com). To create a stamen map, you will need to obtain a Stadia Maps API key. The service is free from non\-commercial use.
The steps are
* Sign up for an account at [stadiamaps.com](https://client.stadiamaps.com/signup/).
* Go to the [client dashboard](https://client.stadiamaps.com/dashboard/).The client dashboard lets you generate, view, or revoke your API key.
* Click on “Manage Properties”. Under “Authentication Configuration”, generate your API key. Save this key and keep it private.
* In R, use ggmap::register\_stadiamaps(“your API key”) to register your key.
```
ggmap::register_stadiamaps("your API key")
```
To create a stamen map, you’ll need a bounding box \- the latitude and longitude for each corner of the map. The `getbb` function in the `osmdata` package can provide this.
```
# find a bounding box for Houston, Texas
library(osmdata)
bb <- getbb("houston, tx")
bb
```
```
## min max
## x -95.90974 -95.01205
## y 29.53707 30.11035
```
The `get_stadiamap` function takes this information and returns the map. The `ggmap` function then plots the map.
```
library(ggmap)
houston <- get_stadiamap(bbox = c(bb[1,1], bb[2,1],
bb[1,2], bb[2,2]),
maptype="stamen_toner_lite")
ggmap(houston)
```
Figure 7\.2: Static Houston map
The map returned by the `ggmap` function is a ggplot2 graph. We can add to this graph using standard **ggplot2** functions.
```
# add incident locations
ggmap(houston) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5)
```
Figure 7\.3: Houston map with crime locations
To clean up the results, remove the axes and add meaningful labels.
```
# remove long and lat numbers and add titles
ggmap(houston) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5) +
theme_void() +
labs(title = "Location of reported homocides",
subtitle = "Houston Jan - Aug 2010",
caption = "source: http://www.houstontx.gov/police/cs/")
```
Figure 7\.4: Crime locations with titles, and without longitude and latitude
#### 7\.2\.2\.2 Google maps
To use a Google map as the base map, you will need a Google API key. Unfortunately, this requires an account and a valid credit card. Fortunately, Google provides a large number of free uses, and very reasonable rates afterwards (but I take no responsibility for any costs you incur!).
Go to [mapsplatform.google.com](http://mapsplatform.google.com) to create an account. Activate static maps and geocoding (you need to activate each separately), and receive your Google API key. Keep this API key safe and private! Once you have your key, you can create the dot density plot. The steps are listed below.
1. Find the center coordinates for Houston, TX
```
library(ggmap)
# using geocode function to obtain the center coordinates
register_google(key="PutYourGoogleAPIKeyHere")
houston_center <- geocode("Houston, TX")
```
```
houston_center
```
```
## lon lat
## -95.36980 29.76043
```
2. Get the background map image.
* Specify a `zoom` factor from 3 (continent) to 21 (building). The default is 10 (city).
* Specify a `maptype`. Types include terrain, terrain\-background, satellite, roadmap, hybrid, watercolor, and toner.
```
# get Houston map
houston_map <- get_map(houston_center,
zoom = 13,
maptype = "roadmap")
ggmap(houston_map)
```
Figure 7\.5: Houston map using Google Maps
3. Add crime locations to the map.
```
# add incident locations
ggmap(houston_map) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5)
```
Figure 7\.6: Houston crime locations using Google Maps
4. Clean up the plot and add labels.
```
# add incident locations
ggmap(houston_map) +
geom_point(aes(x=lon,y=lat),data=homicide,
color = "red", size = 2, alpha = 0.5) +
theme_void() +
labs(title = "Location of reported homocides",
subtitle = "Houston Jan - Aug 2010",
caption = "source: http://www.houstontx.gov/police/cs/")
```
Figure 7\.7: Customize Houston crime locations using Google Maps
There seems to be a concentration of homicide reports in the southern portion of the city. However, this could simply reflect population density; more investigation is needed. To learn more about ggmap, see [ggmap: Spatial Visualization with ggplot2](https://journal.r-project.org/archive/2013-1/kahle-wickham.pdf) ([Kahle and Wickham 2013](#ref-RN8)).
7\.3 Choropleth maps
--------------------
Choropleth maps use color or shading on predefined areas to indicate the values of a numeric variable in that area. There are numerous approaches to creating choropleth maps. One of the easiest relies on Ari Lamstein’s excellent `choroplethr` package, which can create maps that display information by country, US state, and US county.
There may be times that you want to create a map for an area not covered in the choroplethr package. Additionally, you may want to create a map with greater customization. Towards this end, we’ll also look at a more customizable approach using a shapefile and the **sf** and **ggplot2** packages.
### 7\.3\.1 Data by country
Let’s create a world map and color the countries by life expectancy using the 2007 [gapminder](Datasets.html#Gapminder) data.
The **choroplethr** package has numerous functions that simplify the task of creating a choropleth map. To plot the life expectancy data, we’ll use the [`country_choropleth`](https://www.rdocumentation.org/packages/choroplethr/versions/3.6.1/topics/county_choropleth) function.
The function requires that the data frame to be plotted has a column named *region* and a column named *value*. Additionally, the entries in the *region* column must exactly match how the entries are named in the *region* column of the dataset `country.map` from the **choroplethrMaps** package.
```
# view the first 12 region names in country.map
data(country.map, package = "choroplethrMaps")
head(unique(country.map$region), 12)
```
```
## [1] "afghanistan" "angola" "azerbaijan" "moldova" "madagascar"
## [6] "mexico" "macedonia" "mali" "myanmar" "montenegro"
## [11] "mongolia" "mozambique"
```
Note that the region entries are all lower case.
To continue, we need to make some edits to our gapminder dataset. Specifically, we need to
1. select the 2007 data
2. rename the *country* variable to *region*
3. rename the *lifeExp* variable to *value*
4. recode *region* values to lower case
5. recode some *region* values to match the region values in the country.map data frame. The `recode` function in the **dplyr** package takes the form `recode(variable, oldvalue1 = newvalue1, oldvalue2 = newvalue2, ...)`
```
# prepare dataset
data(gapminder, package = "gapminder")
plotdata <- gapminder %>%
filter(year == 2007) %>%
rename(region = country,
value = lifeExp) %>%
mutate(region = tolower(region)) %>%
mutate(region =
recode(region,
"united states" = "united states of america",
"congo, dem. rep." = "democratic republic of the congo",
"congo, rep." = "republic of congo",
"korea, dem. rep." = "south korea",
"korea. rep." = "north korea",
"tanzania" = "united republic of tanzania",
"serbia" = "republic of serbia",
"slovak republic" = "slovakia",
"yemen, rep." = "yemen"))
```
Now let’s create the map.
```
library(choroplethr)
country_choropleth(plotdata)
```
Figure 7\.8: Choropleth map of life expectancy
choroplethr functions return ggplot2 graphs. Let’s make it a bit more attractive by modifying the code with additional ggplot2 functions.
```
country_choropleth(plotdata,
num_colors=9) +
scale_fill_brewer(palette="YlOrRd") +
labs(title = "Life expectancy by country",
subtitle = "Gapminder 2007 data",
caption = "source: https://www.gapminder.org",
fill = "Years")
```
Figure 7\.9: Choropleth map of life expectancy with labels and a better color scheme
Note that the `num_colors` option controls how many colors are used in the graph. The default is seven and the maximum is nine.
### 7\.3\.2 Data by US state
For US data, the choroplethr package provides functions for creating maps by county, state, zip code, and census tract. Additionally, map regions can be labeled.
Let’s plot US states by [Hispanic and Latino populations](Datasets.html#HispLat), using the 2010 Census (see Appendix [A.11](Datasets.html#HispLat)).
To plot the population data, we’ll use the [`state_choropleth`](https://www.rdocumentation.org/packages/choroplethr/versions/3.6.1/topics/state_choropleth) function. The function requires that the data frame to be plotted has a column named *region* to represent state, and a column named *value* (the quantity to be plotted). Additionally, the entries in the *region* column must exactly match how the entries are named in the *region* column of the dataset state.map from the **choroplethrMaps** package.
The `zoom = continental_us_states` option will create a map that excludes Hawaii and Alaska.
```
library(ggplot2)
library(choroplethr)
data(continental_us_states)
# input the data
library(readr)
hisplat <- read_tsv("hisplat.csv")
# prepare the data
hisplat$region <- tolower(hisplat$state)
hisplat$value <- hisplat$percent
# create the map
state_choropleth(hisplat,
num_colors=9,
zoom = continental_us_states) +
scale_fill_brewer(palette="YlGnBu") +
labs(title = "Hispanic and Latino Population",
subtitle = "2010 US Census",
caption = "source: https://tinyurl.com/2fp7c5bw",
fill = "Percent")
```
Figure 7\.10: Choropleth map of US States
### 7\.3\.3 Data by US county
Finally, let’s plot data by US counties. We’ll plot the violent crime rate per 1000 individuals for Connecticut counties in 2012\. Data come from the FBI Uniform Crime Statistics.
We’ll use the `county_choropleth` function. Again, the function requires that the data frame to be plotted has a column named *region* and a column named *value*.
Additionally, the entries in the *region* column must be numeric codes and exactly match how the entries are given in the *region* column of the dataset `county.map` from the `choroplethrMaps` package.
Our dataset has county names (e.g., fairfield). However, we need region codes (e.g., 9001\). We can use the `county.regions` dataset to look up the region code for each county name.
Additionally, we’ll use the option `reference_map = TRUE` to add a reference map from Google Maps.
```
library(ggplot2)
library(choroplethr)
library(dplyr)
# enter violent crime rates by county
crimes_ct <- data.frame(
county = c("fairfield", "hartford",
"litchfield", "middlesex",
"new haven", "new london",
"tolland", "windham"),
value = c(3.00, 3.32,
1.02, 1.24,
4.13, 4.61,
0.16, 1.60)
)
crimes_ct
```
```
## county value
## 1 fairfield 3.00
## 2 hartford 3.32
## 3 litchfield 1.02
## 4 middlesex 1.24
## 5 new haven 4.13
## 6 new london 4.61
## 7 tolland 0.16
## 8 windham 1.60
```
```
# obtain region codes for connecticut
data(county.regions,
package = "choroplethrMaps")
region <- county.regions %>%
filter(state.name == "connecticut")
region
```
```
# join crime data to region code data
plotdata <- inner_join(crimes_ct,
region,
by=c("county" = "county.name"))
plotdata
```
```
## county value region county.fips.character state.name
## 1 fairfield 3.00 9001 09001 connecticut
## 2 hartford 3.32 9003 09003 connecticut
## 3 litchfield 1.02 9005 09005 connecticut
## 4 middlesex 1.24 9007 09007 connecticut
## 5 new haven 4.13 9009 09009 connecticut
## 6 new london 4.61 9011 09011 connecticut
## 7 tolland 0.16 9013 09013 connecticut
## 8 windham 1.60 9015 09015 connecticut
## state.fips.character state.abb
## 1 09 CT
## 2 09 CT
## 3 09 CT
## 4 09 CT
## 5 09 CT
## 6 09 CT
## 7 09 CT
## 8 09 CT
```
```
# create choropleth map
county_choropleth(plotdata,
state_zoom = "connecticut",
reference_map = TRUE,
num_colors = 8) +
scale_fill_brewer(palette="YlOrRd") +
labs(title = "Connecticut Violent Crime Rates",
subtitle = "FBI 2012 data",
caption = "source: https://ucr.fbi.gov",
fill = "Violent Crime\n Rate Per 1000")
```
See the *choroplethr help* ([https://cran.r\-project.org/web/packages/choroplethr/choroplethr.pdf](https://cran.r-project.org/web/packages/choroplethr/choroplethr.pdf)) for more details.
### 7\.3\.4 Building a choropleth map using the sf and ggplot2 packages and a shapefile
As stated previously, there may be times that you want to map a region not covered by the choroplethr package. Additionally, you may want greater control over the customization.
In this section, we’ll create a map of the continental United States and color each state by its 2023 literacy rate (the percent of individuals who can both read and write). The [literacy rates](Datasets.html#Literacy) were obtained from the World Population Review (see Appendix [A.7](Datasets.html#Literacy)). Rather than using the choroplethr package, we’ll download a US state shapefile and create the map using the sf and ggplot2 packages.
1. Prepare a shapefile
A *shapefile* is a data format that spatially describes vector features such as points, lines, and polygons. The shapefile is used to draw the geographic boundaries of the map.
You will need to find a shapefile for the geographic area you want to plot. A wide range of shapefiles for cities, regions, states, and countries is freely available on the internet. Natural Earth (<http://naturalearthdata.com>) is a good place to start. The shapefile used in the current example comes from the US Census Bureau ([https://www.census.gov/geographies/mapping\-files/time\-series/geo/cartographic\-boundary.html](https://www.census.gov/geographies/mapping-files/time-series/geo/cartographic-boundary.html)).
A shapefile will download as a zipped file. The code below unzips the file into a folder of the same name in the working directory (of course you can also do this by hand). The sf function `st_read` then converts the shapefile into a data frame that ggplot2 can access.
```
library(sf)
# unzip shape file
shapefile <- "cb_2022_us_state_20m.zip"
shapedir <- tools::file_path_sans_ext(shapefile)
if(!dir.exists(shapedir)){
unzip(shapefile, exdir=shapedir)
}
# convert the shapefile into a data frame
# of class sf (simple features)
USMap <- st_read("cb_2022_us_state_20m/cb_2022_us_state_20m.shp")
```
```
head(USMap, 3)
```
> Note that although the `st_read` function points to the .shp file, all the files in the folder must be present.
The *NAME* column contains the state identifier, *STUSPS* contains state abbreviations, and the *geometry* column is a special list object containing the coordinates needed to draw the state boundaries.
2. Prepare the data file
The literacy rates are contained in the comma delimited file named *USLitRates.csv*.
```
litRates <- read.csv("USLitRates.csv")
head(litRates, 3)
```
```
## State Rate
## 1 New Hampshire 94.2
## 2 Minnesota 94.0
## 3 North Dakota 93.7
```
One of the most annoying aspects of creating a choropleth map is that the location variable in the data file (*State* in this case) must exactly match the location variable in the sf data frame (*NAME* in this case).
The following code will help identify any mismatches. Mismatches are printed and can be corrected.
```
# states in litRates not in USMap
setdiff(litRates$State, USMap$NAME)
```
```
## character(0)
```
We have no mismatches, so we are ready to move on.
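Had the check turned up mismatches, the offending values could be recoded in *litRates* before joining. Here is a minimal sketch using a purely hypothetical mismatch (the spelling "Washington DC" standing in for the shapefile’s "District of Columbia"):
```
# hypothetical fix: recode a mismatched state name
# so it matches the spelling used in USMap$NAME
library(dplyr)
litRates <- litRates %>%
  mutate(State = recode(State,
                        "Washington DC" = "District of Columbia"))
```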
3. Merge the data frames
The next step combines the two data frames. Since we want to focus on the lower 48 states, we’ll also eliminate Hawaii, Alaska, and Puerto Rico.
```
continentalUS <- USMap %>%
left_join(litRates, by=c("NAME"="State")) %>%
filter(NAME != "Hawaii" & NAME != "Alaska" &
NAME != "Puerto Rico")
head(continentalUS, 3)
```
4. Create the graph
The graph is created using ggplot2\. Rather than specifying `aes(x=, y=)`, `aes(geometry = geometry)` is used. The fill color is mapped to the literacy rate. The `geom_sf` function generates the map.
```
library(ggplot2)
ggplot(continentalUS, aes(geometry=geometry, fill=Rate)) +
geom_sf()
```
Figure 7\.11: Choropleth map of state literacy rates
5. Customize the graph
Before finishing, let’s customize the graph by
* removing the axes
* adding state labels
* modifying the fill colors and legend
* adding a title, subtitle, and caption
```
library(dplyr)
ggplot(continentalUS, aes(geometry=geometry, fill=Rate)) +
geom_sf() +
theme_void() +
geom_sf_text(aes(label=STUSPS), size=2) +
scale_fill_steps(low="yellow", high="royalblue",
n.breaks = 10) +
labs(title="Literacy Rates by State",
fill = "% literate",
x = "", y = "",
subtitle="Updated May 2023",
caption="source: https://worldpopulationreview.com")
```
Figure 7\.12: Customized choropleth map
The map clearly displays the range of literacy rates among the states. Rates are lowest in New York and California.
7\.4 Going further
------------------
We’ve just scratched the surface of what you can do with maps in R. To learn more, see the CRAN Task View on the [Analysis of Spatial Data](https://cran.r-project.org/web/views/Spatial.html) ([https://cran.r\-project.org/web/views/Spatial.html](https://cran.r-project.org/web/views/Spatial.html)) and [Geocomputation with R](https://r.geocompx.org/index.html), a comprehensive book available both online and in print ([Lovelace, Nowosad, and Muenchow 2019](#ref-RN7)).
Chapter 8 Time\-dependent graphs
================================
A graph can be a powerful vehicle for displaying change over time. The most common time\-dependent graph is the time series line graph. Other options include dumbbell charts and slope graphs.
8\.1 Time series
----------------
A time series is a set of quantitative values obtained at successive time points. The intervals between time points (e.g., hours, days, weeks, months, or years) are usually equal.
Consider the [Economics time series](Datasets.html#Economic) that comes with the **ggplot2** package. It contains US monthly economic data collected from January 1967 through January 2015\. Let’s plot the personal savings rate (*psavert*) over time. We can do this with a simple line plot.
```
library(ggplot2)
ggplot(economics, aes(x = date, y = psavert)) +
geom_line() +
labs(title = "Personal Savings Rate",
x = "Date",
y = "Personal Savings Rate")
```
Figure 8\.1: Simple time series
The `scale_x_date` function can be used to reformat dates (see Section [2\.2\.6](DataPrep.html#Dates)). In the graph below, tick marks appear every 5 years and dates are presented in MMM\-YY format. Additionally, the time series line is given an off\-red color and made thicker, a nonparametric trend line (loess, Section [5\.2\.1\.1](Bivariate.html#BestFit)) and titles are added, and the theme is simplified.
```
library(ggplot2)
library(scales)
ggplot(economics, aes(x = date, y = psavert)) +
geom_line(color = "indianred3",
size=1 ) +
geom_smooth() +
scale_x_date(date_breaks = '5 years',
labels = date_format("%b-%y")) +
labs(title = "Personal Savings Rate",
subtitle = "1967 to 2015",
x = "",
y = "Personal Savings Rate") +
theme_minimal()
```
Figure 8\.2: Simple time series with modified date axis
When plotting time series, be sure that the date variable is class `Date` and not class `character`. See Section [2\.2\.6](DataPrep.html#Dates) for details.
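If the dates arrive as character strings, convert them before plotting. A minimal sketch, assuming a hypothetical data frame `df` whose *date* column is stored as "YYYY-MM-DD" text:
```
# convert a character column to class Date
df$date <- as.Date(df$date, format = "%Y-%m-%d")
class(df$date)   # should now be "Date"
```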
Let’s close this section with a multivariate time series (more than one series). We’ll compare closing prices for Apple and Meta from January 1, 2023 through mid\-2023\. The `getSymbols` function in the **quantmod** package is used to obtain the stock data from Yahoo Finance.
```
# multivariate time series
# one time install
# install.packages("quantmod")
library(quantmod)
library(dplyr)
# get apple (AAPL) closing prices
apple <- getSymbols("AAPL",
return.class = "data.frame",
from="2023-01-01")
apple <- AAPL %>%
mutate(Date = as.Date(row.names(.))) %>%
select(Date, AAPL.Close) %>%
rename(Close = AAPL.Close) %>%
mutate(Company = "Apple")
# get Meta (META) closing prices
meta <- getSymbols("META",
return.class = "data.frame",
from="2023-01-01")
meta <- META %>%
mutate(Date = as.Date(row.names(.))) %>%
select(Date, META.Close) %>%
rename(Close = META.Close) %>%
mutate(Company = "Meta")
# combine data for both companies
mseries <- rbind(apple, meta)
# plot data
library(ggplot2)
ggplot(mseries,
aes(x=Date, y= Close, color=Company)) +
geom_line(size=1) +
scale_x_date(date_breaks = '1 month',
labels = scales::date_format("%b")) +
scale_y_continuous(limits = c(120, 280),
breaks = seq(120, 280, 20),
labels = scales::dollar) +
labs(title = "NASDAQ Closing Prices",
subtitle = "Jan - June 2023",
caption = "source: Yahoo Finance",
y = "Closing Price") +
theme_minimal() +
scale_color_brewer(palette = "Dark2")
```
Figure 8\.3: Multivariate time series
You can see how the two stocks diverge after February.
8\.2 Dumbbell charts
---------------------
Dumbbell charts are useful for displaying change between two time points for several groups or observations. The `geom_dumbbell` function from the **ggalt** package is used.
Using the [gapminder](Datasets.html#Gapminder) dataset, let’s plot the change in life expectancy from 1952 to 2007 in the Americas. The dataset is in long format (Section [2\.2\.7](DataPrep.html#Reshaping)), so we will need to convert it to wide format in order to create the dumbbell plot.
```
library(ggalt)
library(tidyr)
library(dplyr)
# load data
data(gapminder, package = "gapminder")
# subset data
plotdata_long <- filter(gapminder,
continent == "Americas" &
year %in% c(1952, 2007)) %>%
select(country, year, lifeExp)
# convert data to wide format
plotdata_wide <- pivot_wider(plotdata_long,
names_from = year,
values_from = lifeExp)
names(plotdata_wide) <- c("country", "y1952", "y2007")
# create dumbbell plot
ggplot(plotdata_wide, aes(y = country,
x = y1952,
xend = y2007)) +
geom_dumbbell()
```
Figure 8\.4: Simple dumbbell chart
The graph will be easier to read if the countries are sorted and the points are sized and colored. In the next graph, we’ll sort by 1952 life expectancy, and modify the line and point size, color the points, add titles and labels, and simplify the theme.
```
# create dumbbell plot
ggplot(plotdata_wide,
aes(y = reorder(country, y1952),
x = y1952,
xend = y2007)) +
geom_dumbbell(size = 1.2,
size_x = 3,
size_xend = 3,
colour = "grey",
colour_x = "red",
colour_xend = "blue") +
theme_minimal() +
labs(title = "Change in Life Expectancy",
subtitle = "1952 to 2007",
x = "Life Expectancy (years)",
y = "")
```
Figure 8\.5: Sorted, colored dumbbell chart
It is easier to discern patterns here. For example, Haiti started with the lowest life expectancy in 1952 and still had the lowest in 2007\. Paraguay started relatively high but has made few gains.
8\.3 Slope graphs
-----------------
When there are several groups and several time points, a slope graph can be helpful. Let’s plot life expectancy for six Central American countries in 1992, 1997, 2002, and 2007\. Again we’ll use the [gapminder](Datasets.html#Gapminder) data.
To create a slope graph, we’ll use the `newggslopegraph` function from the `CGPfunctions` package.
The `newggslopegraph` function parameters are (in order)
* data frame
* time variable (which must be a factor)
* numeric variable to be plotted
* and grouping variable (creating one line per group).
```
library(CGPfunctions)
# Select Central American countries data
# for 1992, 1997, 2002, and 2007
df <- gapminder %>%
filter(year %in% c(1992, 1997, 2002, 2007) &
country %in% c("Panama", "Costa Rica",
"Nicaragua", "Honduras",
"El Salvador", "Guatemala",
"Belize")) %>%
mutate(year = factor(year),
lifeExp = round(lifeExp))
# create slope graph
newggslopegraph(df, year, lifeExp, country) +
labs(title="Life Expectancy by Country",
subtitle="Central America",
caption="source: gapminder")
```
Figure 8\.6: Slope graph
In the graph above, Costa Rica has the highest life expectancy across the range of years studied. Guatemala has the lowest, and caught up with Honduras (also low at 69\) in 2002\.
8\.4 Area Charts
----------------
A simple area chart is basically a line graph, with a fill from the line to the *x*\-axis.
```
# basic area chart
ggplot(economics, aes(x = date, y = psavert)) +
geom_area(fill="lightblue", color="black") +
labs(title = "Personal Savings Rate",
x = "Date",
y = "Personal Savings Rate")
```
Figure 8\.7: Basic area chart
A stacked area chart can be used to show differences between groups over time. Consider the [`uspopage`](Datasets.html#USpop) dataset from the **gcookbook** package. The dataset describes the age distribution of the US population from 1900 to 2002\. The variables are *year*, age group (*AgeGroup*), and number of people in thousands (*Thousands*). Let’s plot the population of each age group over time.
```
# stacked area chart
data(uspopage, package = "gcookbook")
ggplot(uspopage, aes(x = Year,
y = Thousands,
fill = AgeGroup)) +
geom_area() +
labs(title = "US Population by age",
x = "Year",
y = "Population in Thousands")
```
Figure 8\.8: Stacked area chart
It is best to avoid scientific notation in your graphs. How likely is it that the average reader will know that 3e\+05 means 300,000? It is easy to change the scale in ggplot2: simply divide the *Thousands* variable by 1000 and report it as millions. While we are at it, let’s
* create black borders to highlight the difference between groups
* reverse the order of the groups to match increasing age
* improve labeling
* choose a different color scheme
* choose a simpler theme.
The levels of the *AgeGroup* variable can be reversed using the `fct_rev` function in the `forcats` package.
```
# stacked area chart
data(uspopage, package = "gcookbook")
ggplot(uspopage, aes(x = Year,
y = Thousands/1000,
fill = forcats::fct_rev(AgeGroup))) +
geom_area(color = "black") +
labs(title = "US Population by age",
subtitle = "1900 to 2002",
caption = "source: U.S. Census Bureau, 2003, HS-3",
x = "Year",
y = "Population in Millions",
fill = "Age Group") +
scale_fill_brewer(palette = "Set2") +
theme_minimal()
```
Figure 8\.9: Stacked area chart with simpler scale
Apparently, the number of young children has not changed very much in the past 100 years.
Stacked area charts are most useful when interest is on both (1\) group change over time and (2\) overall change over time. Place the most important groups at the bottom. These are the easiest to interpret in this type of plot.
8\.5 Stream graph
-----------------
Stream graphs ([Byron and Wattenberg 2008](#ref-RN11)) are basically a variation on the stacked area chart. In a stream graph, the data is typically centered at each x\-value around a mid\-point and mirrored above and below that point. This is easiest to see in an example.
Let’s plot the previous stacked area chart (Figure [8\.9](Time.html#fig:areachart3)) as a stream graph.
```
# basic stream graph
data(uspopage, package = "gcookbook")
library(ggstream)
ggplot(uspopage, aes(x = Year,
y = Thousands/1000,
fill = forcats::fct_rev(AgeGroup))) +
geom_stream() +
labs(title = "US Population by age",
subtitle = "1900 to 2002",
caption = "source: U.S. Census Bureau, 2003, HS-3",
x = "Year",
y = "",
fill = "Age Group") +
scale_fill_brewer(palette = "Set2") +
theme_minimal() +
theme(panel.grid.major.y = element_blank(),
panel.grid.minor.y = element_blank(),
axis.text.y = element_blank())
```
Figure 8\.10: Basic stream graph
The `theme` function is used to suppress the *y*\-axis, whose values are not easily interpreted. To interpret this graph, look at each value on the *x*\-axis and compare the relative vertical heights of each group. You can see, for example, that the relative proportion of older people has increased significantly.
An interesting variation is the proportional stream graph displayed in Figure [8\.11](Time.html#fig:streamgraph2).
```
# proportional stream graph
data(uspopage, package = "gcookbook")
library(ggstream)
ggplot(uspopage, aes(x = Year,
y = Thousands/1000,
fill = forcats::fct_rev(AgeGroup))) +
geom_stream(type="proportional") +
labs(title = "US Population by age",
subtitle = "1900 to 2002",
caption = "source: U.S. Census Bureau, 2003, HS-3",
x = "Year",
y = "Proportion",
fill = "Age Group") +
scale_fill_brewer(palette = "Set2") +
theme_minimal()
```
Figure 8\.11: Proportional stream graph
This is similar to the filled bar chart (Section [5\.1\.3](Bivariate.html#Segmented)) and makes it easier to see the relative change in values by group across time.
Chapter 9 Statistical Models
============================
A statistical model describes the relationship between one or more explanatory variables and one or more response variables. Graphs can help to visualize these relationships. In this section we’ll focus on models that have a single response variable that is either quantitative (a number) or binary (yes/no).
> This chapter describes the use of graphs to enhance the output from statistical models. It is assumed that the reader has a passing familiarity with these models. The book [R for Data Science](https://r4ds.had.co.nz/) ([Wickham and Grolemund 2017](#ref-RN9)) can provide the necessary background and is freely available online.
9\.1 Correlation plots
----------------------
Correlation plots help you to visualize the pairwise relationships between a set of quantitative variables by displaying their correlations using color or shading.
Consider the [Saratoga Houses](Datasets.html#SaratogaHousing) dataset, which contains the sale price and property characteristics of Saratoga County, NY homes in 2006 (Appendix [A.14](Datasets.html#SaratogaHousing)). In order to explore the relationships among the quantitative variables, we can calculate the Pearson Product\-Moment [correlation coefficients](http://www.statisticshowto.com/probability-and-statistics/correlation-coefficient-formula/).
In the code below, the `select_if` function in the **dplyr** package is used to select the numeric variables in the data frame. The `cor` function in base R calculates the correlations. The `use = "complete.obs"` option deletes any cases with missing data. The `round` function rounds the printed results to 2 decimal places.
```
data(SaratogaHouses, package="mosaicData")
# select numeric variables
df <- dplyr::select_if(SaratogaHouses, is.numeric)
# calculate the correlations
r <- cor(df, use="complete.obs")
round(r,2)
```
The `ggcorrplot` function in the **ggcorrplot** package can be used to visualize these correlations. By default, it creates a ggplot2 graph where darker red indicates stronger positive correlations, darker blue indicates stronger negative correlations and white indicates no correlation.
```
library(ggplot2)
library(ggcorrplot)
ggcorrplot(r)
```
Figure 9\.1: Correlation matrix
From the graph, increases in the number of bathrooms and living area are associated with increased price, while older homes tend to be less expensive. Older homes also tend to have fewer bathrooms.
The [`ggcorrplot`](https://www.rdocumentation.org/packages/ggcorrplot/versions/0.1.1/topics/ggcorrplot) function has a number of options for customizing the output. For example
* `hc.order = TRUE` reorders the variables, placing variables with similar correlation patterns together.
* `type = "lower"` plots the lower portion of the correlation matrix.
* `lab = TRUE` overlays the correlation coefficients (as text) on the plot.
```
ggcorrplot(r,
hc.order = TRUE,
type = "lower",
lab = TRUE)
```
Figure 9\.2: Sorted lower triangle correlation matrix with options
These, and other options, can make the graph easier to read and interpret. See [`?ggcorrplot`](https://www.rdocumentation.org/packages/ggcorrplot/versions/0.1.1/topics/ggcorrplot) for details.
9\.2 Linear Regression
----------------------
Linear regression allows us to explore the relationship between a quantitative response variable and an explanatory variable while other variables are held constant.
Consider the prediction of home prices in the [Saratoga Houses](#SaratogaHouses) dataset from lot size (square feet), age (years), land value (1000s dollars), living area (square feet), number of bedrooms and bathrooms and whether the home is on the waterfront or not.
```
data(SaratogaHouses, package="mosaicData")
houses_lm <- lm(price ~ lotSize + age + landValue +
livingArea + bedrooms + bathrooms +
waterfront,
data = SaratogaHouses)
```
Table 9\.1: Linear Regression results
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 139878\.80 | 16472\.93 | 8\.49 | 0\.00 |
| lotSize | 7500\.79 | 2075\.14 | 3\.61 | 0\.00 |
| age | \-136\.04 | 54\.16 | \-2\.51 | 0\.01 |
| landValue | 0\.91 | 0\.05 | 19\.84 | 0\.00 |
| livingArea | 75\.18 | 4\.16 | 18\.08 | 0\.00 |
| bedrooms | \-5766\.76 | 2388\.43 | \-2\.41 | 0\.02 |
| bathrooms | 24547\.11 | 3332\.27 | 7\.37 | 0\.00 |
| waterfrontNo | \-120726\.62 | 15600\.83 | \-7\.74 | 0\.00 |
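Table 9\.1 is built from the fitted model object. One way to produce a coefficient table like this is with the **broom** package; the following is a sketch, not necessarily how the table above was actually generated.
```
# a sketch: tidy the model coefficients into a table
# (may differ from how Table 9.1 was actually produced)
library(broom)
knitr::kable(tidy(houses_lm), digits = 2)
```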
From the results, we can estimate that an increase of one square foot of living area is associated with a home price increase of $75, holding the other variables constant. Additionally, waterfront homes cost approximately $120,726 more than non\-waterfront homes, again controlling for the other variables in the model.
The **visreg** (<http://pbreheny.github.io/visreg>) package provides tools for visualizing these conditional relationships.
The `visreg` function takes (1\) the model and (2\) the variable of interest and plots the conditional relationship, controlling for the other variables. The option `gg = TRUE` is used to produce a ggplot2 graph.
```
# conditional plot of price vs. living area
library(ggplot2)
library(visreg)
visreg(houses_lm, "livingArea", gg = TRUE)
```
Figure 9\.3: Conditional plot of living area and price
The graph suggests that, after controlling for lot size, age, land value, number of bedrooms and bathrooms, and waterfront location, sales price increases with living area in a linear fashion.
> **How does `visreg` work?** The fitted model is used to predict values of the response variable, across the range of the chosen explanatory variable. The other variables are set to their median value (for numeric variables) or most frequent category (for categorical variables). The user can override these defaults and choose specific values for any variable in the model.
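For instance, the `cond` argument lets you condition on specific values rather than the defaults. The following is a sketch; the `"Yes"` level name is inferred from the regression table above.
```
# a sketch: condition on waterfront = "Yes" rather than the most frequent level
visreg(houses_lm, "livingArea",
       cond = list(waterfront = "Yes"),
       gg = TRUE)
```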
Continuing the example, the price difference between waterfront and non\-waterfront homes is plotted, controlling for the other six variables. Since a **ggplot2** graph is produced, other ggplot2 functions can be added to customize the graph.
```
# conditional plot of price vs. waterfront location
visreg(houses_lm, "waterfront", gg = TRUE) +
scale_y_continuous(label = scales::dollar) +
labs(title = "Relationship between price and location",
subtitle = paste0("controlling for lot size, age, ",
"land value, bedrooms and bathrooms"),
caption = "source: Saratoga Housing Data (2006)",
y = "Home Price",
x = "Waterfront")
```
Figure 9\.4: Conditional plot of location and price
There are far fewer homes on the water, and they tend to be more expensive (even controlling for size, age, and land value).
The **visreg** package provides a wide range of plotting capabilities. See *Visualization of regression models using visreg* ([Breheny and Burchett 2017](#ref-RN10)) for details.
9\.3 Logistic regression
------------------------
Logistic regression can be used to explore the relationship between a binary response variable and an explanatory variable while other variables are held constant. Binary response variables have two levels (yes/no, lived/died, pass/fail, malignant/benign). As with linear regression, we can use the [**visreg**](http://pbreheny.github.io/visreg/index.html) package to visualize these relationships.
The CPS85 dataset in the **mosaicData** package contains a random sample from the 1985 Current Population Survey, with data on the demographics and work experience of 534 individuals.
Let’s use this data to predict the log\-odds of being married, given one’s sex, age, race and job sector. We’ll allow the relationship between age and marital status to vary between men and women by including an interaction term (*sex\*age*).
```
# fit logistic model for predicting
# marital status: married/single
data(CPS85, package = "mosaicData")
cps85_glm <- glm(married ~ sex + age + sex*age + race + sector,
family="binomial",
data=CPS85)
```
Using the fitted model, let’s visualize the relationship between age and the probability of being married, holding the other variables constant. Again, the `visreg` function takes the model and the variable of interest and plots the conditional relationship, controlling for the other variables. The option `gg = TRUE` is used to produce a ggplot2 graph. The `scale = "response"` option creates a plot based on a probability (rather than log\-odds) scale.
```
# plot results
library(ggplot2)
library(visreg)
visreg(cps85_glm, "age",
gg = TRUE,
scale="response") +
labs(y = "Prob(Married)",
x = "Age",
title = "Relationship of age and marital status",
subtitle = "controlling for sex, race, and job sector",
caption = "source: Current Population Survey 1985")
```
```
## Conditions used in construction of plot
## sex: M
## race: W
## sector: prof
```
Figure 9\.5: Conditional plot of age and marital status
For professional, white males, the probability of being married is roughly 0\.5 at age 25 and decreases to 0\.1 at age 55\.
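The same kind of estimate can be pulled directly from the fitted model with `predict()`. A sketch, using the conditioning values shown in the output above:
```
# a sketch: predicted probability of being married for a single profile
# (profile values mirror the conditioning output above)
newcase <- data.frame(sex = "M", age = 25, race = "W", sector = "prof")
predict(cps85_glm, newdata = newcase, type = "response")
```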
We can create multiple conditional plots by adding a `by` option. For example, the following code will plot the probability of being married by age, separately for men and women, controlling for race and job sector.
```
# plot results
library(ggplot2)
library(visreg)
visreg(cps85_glm, "age",
by = "sex",
gg = TRUE,
scale="response") +
labs(y = "Prob(Married)",
x = "Age",
title = "Relationship of age and marital status",
subtitle = "controlling for race and job sector",
caption = "source: Current Population Survey 1985")
```
Figure 9\.6: Conditional plot of age and marital status
In these data, the probabilities of marriage for men and women differ significantly over the ages measured.
9\.4 Survival plots
-------------------
In many research settings, the response variable is the time to an event. This is frequently true in healthcare research, where we are interested in time to recovery, time to death, or time to relapse.
If the event has not occurred for an observation (either because the study ended or the patient dropped out) the observation is said to be *censored*.
The [NCCTG Lung Cancer](Datasets.html#Lung) dataset in the **survival** package provides data on the survival times of patients with advanced lung cancer following treatment. The study followed patients for up to 34 months.
The outcome for each patient is measured by two variables
* *time* \- survival time in days
* *status* \- 1 \= censored, 2 \= dead
Thus a patient with *time \= 305 \& status \= 2* lived 305 days following treatment. Another patient with *time \= 400 \& status \= 1* lived **at least** 400 days but was then lost to the study. A patient with *time \= 1022 \& status \= 1* survived to the end of the study (34 months).
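The **survival** package encodes these time/status pairs with the `Surv` function. A quick sketch of what that object looks like:
```
# a sketch: Surv() combines time and status into a single survival object;
# censored observations print with a trailing "+"
library(survival)
data(lung)
head(Surv(lung$time, lung$status))
```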
A survival plot (also called a Kaplan\-Meier curve) can be used to illustrate the probability that an individual survives up to and including time *t*.
```
# plot survival curve
library(survival)
library(survminer)
data(lung)
sfit <- survfit(Surv(time, status) ~ 1, data=lung)
ggsurvplot(sfit,
title="Kaplan-Meier curve for lung cancer survival")
```
Figure 9\.7: Basic survival curve
Roughly 50% of patients are still alive 300 days post treatment. Run `summary(sfit)` for more details.
It is frequently of great interest whether groups of patients have the same survival probabilities. In the next graph, the survival curves for men and women are compared.
```
# plot survival curve for men and women
sfit <- survfit(Surv(time, status) ~ sex, data=lung)
ggsurvplot(sfit,
conf.int=TRUE,
pval=TRUE,
legend.labs=c("Male", "Female"),
legend.title="Sex",
palette=c("cornflowerblue", "indianred3"),
title="Kaplan-Meier Curve for lung cancer survival",
xlab = "Time (days)")
```
Figure 9\.8: Comparison of survival curves
The `ggsurvplot` function has many options (see [?ggsurvplot](https://www.rdocumentation.org/packages/survminer/versions/0.4.2/topics/ggsurvplot)). In particular, `conf.int` provides confidence intervals, while `pval` provides a log\-rank test comparing the survival curves.
The p\-value (0\.0013\) provides strong evidence that men and women have different survival probabilities following treatment. In this case, women are more likely to survive across the time period studied.
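The log\-rank test reported by `pval = TRUE` can also be run directly. A sketch using the `survdiff` function from the **survival** package:
```
# a sketch: log-rank test comparing survival curves for men and women
survdiff(Surv(time, status) ~ sex, data = lung)
```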
9\.5 Mosaic plots
-----------------
Mosaic charts can display the relationship between categorical variables using rectangles whose areas represent the proportion of cases for any given combination of levels. The color of the tiles can also indicate the degree of relationship among the variables.
Although mosaic charts can be created with **ggplot2** using the [**ggmosaic**](https://cran.r-project.org/web/packages/ggmosaic/vignettes/ggmosaic.html) package, I recommend using the **vcd** package instead. Although it won’t create ggplot2 graphs, the package provides a more comprehensive approach to visualizing categorical data.
People are fascinated with the Titanic (or is it with Leo?). In the Titanic disaster, what role did sex and class play in survival? We can visualize the relationship between these three categorical variables using the code below.
The dataset (*titanic.csv*) describes the sex, passenger class, and survival status for each of the 2,201 passengers and crew. The `xtabs` function creates a cross\-tabulation of the data, and the `ftable` function prints the results in a nice compact format.
```
# input data
library(readr)
titanic <- read_csv("titanic.csv")
# create a table
tbl <- xtabs(~Survived + Class + Sex, titanic)
ftable(tbl)
```
```
## Sex Female Male
## Survived Class
## No 1st 4 118
## 2nd 13 154
## 3rd 106 422
## Crew 3 670
## Yes 1st 141 62
## 2nd 93 25
## 3rd 90 88
## Crew 20 192
```
The `mosaic` function in the `vcd` package plots the results.
```
# create a mosaic plot from the table
library(vcd)
mosaic(tbl, main = "Titanic data")
```
Figure 9\.9: Basic mosaic plot
The size of each tile is proportional to the percentage of cases in that combination of levels. Clearly, more passengers perished than survived. Those that perished were primarily 3rd class male passengers and male crew (the largest group).
If we assume that these three variables are independent, we can examine the residuals from the model and shade the tiles to match. The `shade = TRUE` option adds fill colors. Dark blue represents more cases than expected given independence. Dark red represents fewer cases than expected if independence holds.
The `labeling_args`, `set_labels`, and `main` options improve the plot labeling.
```
mosaic(tbl,
shade = TRUE,
labeling_args =
list(set_varnames = c(Sex = "Gender",
Survived = "Survived",
Class = "Passenger Class")),
set_labels =
list(Survived = c("No", "Yes"),
Class = c("1st", "2nd", "3rd", "Crew"),
Sex = c("F", "M")),
main = "Titanic data")
```
Figure 9\.10: Mosaic plot with shading
Compared with what we would expect if class, gender, and survival were independent, far more male crew members perished, and far more 1st, 2nd, and 3rd class females survived. Conversely, far fewer 1st class passengers (both male and female) died than would be expected by chance. Thus the assumption of independence is rejected. (Spoiler alert: Leo doesn’t make it.)
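This visual impression can be backed up with a formal loglinear test of mutual independence. A minimal sketch using the `loglm` function from the **MASS** package:
```
# a sketch: test mutual independence of Class, Sex, and Survived
library(MASS)
loglm(~ Class + Sex + Survived, data = tbl)
# large chi-square statistics (tiny p-values) indicate that
# the independence model fits the data poorly
```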
For complicated tables, labels can easily overlap. See [`?labeling_border`](https://www.rdocumentation.org/packages/vcd/versions/1.4-4/topics/labeling_border) for plotting options.
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/Other.html |
Chapter 10 Other Graphs
=======================
Graphs in this chapter can be very useful, but don’t fit in easily within the other chapters. Feel free to look through these sections and see if any of these graphs meet your needs.
10\.1 3\-D Scatterplot
----------------------
A scatterplot displays the relationship between **two** quantitative variables (Section [5\.2\.1](Bivariate.html#Scatterplot)). But what do you do when you want to observe the relation between **three** variables? One approach is the 3\-D scatterplot.
The **ggplot2** package and its extensions can’t create a 3\-D plot. However, you can create a 3\-D scatterplot with the `scatterplot3d` function in the **scatterplot3d** package.
Let’s say that we want to plot automobile mileage vs. engine displacement vs. car weight using the data in the mtcars data frame that comes installed with base R.
```
# basic 3-D scatterplot
library(scatterplot3d)
with(mtcars, {
scatterplot3d(x = disp,
y = wt,
z = mpg,
main="3-D Scatterplot Example 1")
})
```
Figure 10\.1: Basic 3\-D scatterplot
Now let’s modify the graph by replacing the points with filled blue circles, adding drop lines to the x\-y plane, and creating more meaningful labels.
```
library(scatterplot3d)
with(mtcars, {
scatterplot3d(x = disp,
y = wt,
z = mpg,
# filled blue circles
color="blue",
pch = 19,
# lines to the horizontal plane
type = "h",
main = "3-D Scatterplot Example 2",
xlab = "Displacement (cu. in.)",
ylab = "Weight (lb/1000)",
zlab = "Miles/(US) Gallon")
})
```
Figure 10\.2: 3\-D scatterplot with vertical lines
In the previous code, `pch = 19` tells the base R graphing function to plot points as filled circles (pch stands for plotting character, and 19 is the code for a filled circle). Similarly, `type = "h"` asks for vertical drop lines (like a histogram).
Next, let’s label the points. We can do this by saving the results of the `scatterplot3d` function to an object, using the `xyz.convert` function to convert coordinates from 3\-D (x, y, z) to 2\-D projections (x, y), and applying the `text` function to add labels to the graph.
```
library(scatterplot3d)
with(mtcars, {
s3d <- scatterplot3d(
x = disp,
y = wt,
z = mpg,
color = "blue",
pch = 19,
type = "h",
main = "3-D Scatterplot Example 3",
xlab = "Displacement (cu. in.)",
ylab = "Weight (lb/1000)",
zlab = "Miles/(US) Gallon")
# convert 3-D coords to 2D projection
s3d.coords <- s3d$xyz.convert(disp, wt, mpg)
# plot text with 50% shrink and place to right of points
text(s3d.coords$x,
s3d.coords$y,
labels = row.names(mtcars),
cex = .5,
pos = 4)
})
```
Figure 10\.3: 3\-D scatterplot with vertical lines and point labels
Almost there. As a final step, we’ll add information on the number of cylinders in each car. To do this, we’ll add a column to the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) data frame indicating the color for each point. For good measure, we will shorten the *y*\-axis, change the drop lines to dashed lines, and add a legend.
```
library(scatterplot3d)
# create column indicating point color
mtcars$pcolor[mtcars$cyl == 4] <- "red"
mtcars$pcolor[mtcars$cyl == 6] <- "blue"
mtcars$pcolor[mtcars$cyl == 8] <- "darkgreen"
with(mtcars, {
s3d <- scatterplot3d(
x = disp,
y = wt,
z = mpg,
color = pcolor,
pch = 19,
type = "h",
lty.hplot = 2,
scale.y = .75,
main = "3-D Scatterplot Example 4",
xlab = "Displacement (cu. in.)",
ylab = "Weight (lb/1000)",
zlab = "Miles/(US) Gallon")
s3d.coords <- s3d$xyz.convert(disp, wt, mpg)
text(s3d.coords$x,
s3d.coords$y,
labels = row.names(mtcars),
pos = 4,
cex = .5)
# add the legend
legend(# top left and indented
"topleft", inset=.05,
# suppress legend box, shrink text 50%
bty="n", cex=.5,
title="Number of Cylinders",
c("4", "6", "8"),
fill=c("red", "blue", "darkgreen"))
})
```
Figure 10\.4: 3\-D scatterplot with vertical lines and point labels and legend
We can easily see that the car with the highest mileage (Toyota Corolla) has low engine displacement, low weight, and 4 cylinders.
3\-D scatterplots can be difficult to interpret because they are static. The `scatter3d` function in the **car** package allows you to create an interactive 3\-D graph that can be manually rotated.
```
library(car)
with(mtcars,
scatter3d(disp, wt, mpg))
```
Figure 10\.5: Interactive 3\-D scatterplot
You can now use your mouse to rotate the axes and zoom in and out with the mouse scroll wheel. Note that this will only work if you actually run the code on your desktop. If you are trying to manipulate the graph in this book you’ll drive yourself crazy!
The graph can be highly customized. In the next graph,
* each axis is colored black
* points are colored red for automatic transmission and blue for manual transmission
* all 32 data points are labeled with their rowname
* the default best fit surface is suppressed
* all three axes are given longer labels
```
library(car)
with(mtcars,
scatter3d(disp, wt, mpg,
axis.col = c("black", "black", "black"),
groups = factor(am),
surface.col = c("red", "blue"),
col = c("red", "blue"),
text.col = "grey",
id = list(n=nrow(mtcars),
labels=rownames(mtcars)),
surface = FALSE,
xlab = "Displacement",
ylab = "Weight",
zlab = "Miles Per Gallon"))
```
The `id` option consists of a list of options that control the identification of points. If `n` is less than the total number of points, the *n* most extreme points are labelled. Here, all points are labeled and the row.names are used for the labels.
The `axis.col` option specifies the colors of the x, y, and z axes, respectively. The `surface.col` option specifies the colors of the points by group. The `col` option specifies the colors of the point labels by group. The `text.col` option specifies the color of the axis labels.
The `surface` option indicates whether a fitted surface should be plotted (default \= TRUE), and the `xlab`, `ylab`, and `zlab` options add labels to the axes.
Figure 10\.6: Interactive 3\-D scatterplot with labelled points
Labelled 3\-D scatterplots are most effective when the number of labelled points is small. Otherwise label overlap becomes a significant issue.
10\.2 Bubble charts
-------------------
A bubble chart is also useful when plotting the relationship between **three** quantitative variables. A bubble chart is basically just a scatterplot where the point size is proportional to the values of a third quantitative variable.
Using the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset, let’s plot car weight vs. mileage and use point size to represent horsepower.
```
# create a bubble plot
data(mtcars)
library(ggplot2)
ggplot(mtcars,
aes(x = wt, y = mpg, size = hp)) +
geom_point()
```
Figure 10\.7: Basic bubble plot
We can improve the default appearance by increasing the size of the bubbles, choosing a different point shape and color, and adding some transparency.
```
# create a bubble plot with modifications
ggplot(mtcars,
aes(x = wt, y = mpg, size = hp)) +
geom_point(alpha = .5,
fill="cornflowerblue",
color="black",
shape=21) +
scale_size_continuous(range = c(1, 14)) +
labs(title = "Auto mileage by weight and horsepower",
subtitle = "Motor Trend US Magazine (1973-74 models)",
x = "Weight (1000 lbs)",
y = "Miles/(US) gallon",
size = "Gross horsepower")
```
Figure 10\.8: Bubble plot with modifications
The `range` parameter in the `scale_size_continuous` function specifies the minimum and maximum size of the plotting symbol. The default is `range = c(1, 6)`.
The `shape` option in the `geom_point` function specifies a circle with a border color and a fill color.
Clearly, miles per gallon decreases with increased car weight and horsepower. However, there is one car with low weight, high horsepower, and high gas mileage. Going back to the data, it’s the Lotus Europa.
Bubble charts are controversial for the same reason that pie charts are controversial. People are better at judging length than volume. However, they are quite popular.
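One way to soften that criticism is to make the bubble *area*, rather than the radius, proportional to horsepower. A sketch using ggplot2’s `scale_size_area` function:
```
# a sketch: scale bubble area (rather than radius) to horsepower
ggplot(mtcars,
       aes(x = wt, y = mpg, size = hp)) +
  geom_point(alpha = .5,
             fill = "cornflowerblue",
             color = "black",
             shape = 21) +
  scale_size_area(max_size = 14) +
  labs(size = "Gross horsepower")
```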
10\.3 Biplots
-------------
3\-D scatterplots and bubble charts plot the relation between three quantitative variables. With more than three quantitative variables, a biplot ([Nishisato et al. 2021](#ref-RN12)) can be very useful. A biplot is a specialized graph that attempts to represent the relationship between observations, between variables, and between observations and variables, in a low (usually two) dimensional space.
It’s easiest to see how this works with an example. Let’s create a biplot for the [`mtcars`](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset, using the `fviz_pca` function from the **factoextra** package.
```
# create a biplot
# load data
data(mtcars)
# fit a principal components model
fit <- prcomp(x = mtcars,
center = TRUE,
scale = TRUE)
# plot the results
library(factoextra)
fviz_pca(fit,
repel = TRUE,
labelsize = 3) +
theme_bw() +
labs(title = "Biplot of mtcars data")
```
Figure 10\.9: Basic biplot
The `fviz_pca` function produces a ggplot2 graph.
*Dim1* and *Dim2* are the first two [principal components](https://towardsdatascience.com/a-one-stop-shop-for-principal-component-analysis-5582fb7e0a9c) \- linear combinations of the original *p* variables.
$$PC_{1} = \beta_{10} + \beta_{11}x_{1} + \beta_{12}x_{2} + \beta_{13}x_{3} + \dots + \beta_{1p}x_{p}$$
$$PC_{2} = \beta_{20} + \beta_{21}x_{1} + \beta_{22}x_{2} + \beta_{23}x_{3} + \dots + \beta_{2p}x_{p}$$
The weights of these linear combinations (the \(\beta_{ij}\)) are chosen to maximize the variance accounted for in the original variables. Additionally, the principal components (PCs) are constrained to be uncorrelated with each other.
In this graph, the first PC accounts for 60% of the variability in the original data. The second PC accounts for 24%. Together, they account for 84% of the variability in the original *p* \= 11 variables.
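These percentages come directly from the principal components fit. A sketch of how to inspect them for the `fit` object created above:
```
# a sketch: variance explained by each principal component
summary(fit)                     # standard deviation and proportion of variance per PC
factoextra::fviz_screeplot(fit)  # scree plot of the same information
```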
As you can see, both the observations (cars) and variables (car characteristics) are plotted in the same graph.
* Points represent observations. Smaller distances between points suggest similar values on the original set of variables. For example, the *Toyota Corolla* and *Honda Civic* are similar to each other, as are the *Chrysler Imperial* and *Lincoln Continental*. However, the *Toyota Corolla* is very different from the *Lincoln Continental*.
* The vectors (arrows) represent variables. The angle between two vectors reflects the correlation between the variables: smaller angles indicate stronger positive correlations. For example, *gear* and *am* are positively correlated, *gear* and *qsec* are uncorrelated (90 degree angle), and *am* and *wt* are negatively correlated (angle greater than 90 degrees).
* The observations that are farthest along the direction of a variable’s vector have the highest values on that variable. For example, the *Toyota Corolla* and *Honda Civic* have higher values on *mpg*. The *Toyota Corona* has a higher *qsec*. The *Duster 360* has more *cylinders*.
As you can see, biplots convey an amazing amount of information in a single graph. However, care must be taken in interpreting biplots. They are only accurate when the percentage of variance accounted for is high. Always check your conclusion with the original data. For example, if the graph suggests that two cars are similar, go back to the original data and do a spot\-check to see if that is so.
See the article by Forrest Young ([https://www.uv.es/visualstats/vista\-frames/help/lecturenotes/lecture13/biplot.html](https://www.uv.es/visualstats/vista-frames/help/lecturenotes/lecture13/biplot.html)) to learn more about interpreting biplots correctly.
A flow diagram represents a set of dynamic relationships. It usually captures the physical or metaphorical flow of people, materials, communications, or objects through a set of nodes in a network.
10\.4 Alluvial diagrams
-----------------------
Alluvial diagrams are useful for displaying the relation among two or more categorical variables. They use a flow analogy to represent changes in group composition across variables. This will be more understandable when you see an example.
Alluvial diagrams are created with the **ggalluvial** package, which generates ggplot2 graphs. As an example, let’s diagram the survival of Titanic passengers, using the [Titanic](Datasets.html#Titanic) dataset. We will look at the relationship between passenger class, sex, and survival.
To create an alluvial diagram, first count the frequency of each combination of the categorical variables.
```
# input data
library(readr)
titanic <- read_csv("titanic.csv")
# summarize data
library(dplyr)
titanic_table <- titanic %>%
group_by(Class, Sex, Survived) %>%
count()
# convert survived to a factor with labels
titanic_table$Survived <- factor(titanic_table$Survived,
levels = c("Yes", "No"))
# view the first 6 cases
head(titanic_table)
```
```
## # A tibble: 6 × 4
## # Groups: Class, Sex, Survived [6]
## Class Sex Survived n
## <chr> <chr> <fct> <int>
## 1 1st Female No 4
## 2 1st Female Yes 141
## 3 1st Male No 118
## 4 1st Male Yes 62
## 5 2nd Female No 13
## 6 2nd Female Yes 93
```
Next create an alluvial diagram in ggplot2 using the `ggplot`, `geom_alluvium` and `geom_stratum` functions. The categorical variables are mapped to *axes* and n to *y*. This will produce Figure [10\.10](Other.html#fig:alluvial2)
```
library(ggalluvial)
ggplot(titanic_table,
aes(axis1 = Class,
axis2 = Sex,
axis3 = Survived,
y = n)) +
geom_alluvium(aes(fill = Class)) +
geom_stratum() +
geom_text(stat = "stratum",
aes(label = after_stat(stratum)))
```
Figure 10\.10: Basic alluvial diagram
To interpret the graph, start with the variable on the left (*Class*) and follow the flow to the right. The height of each category level represents the proportion of observations in that level. For example, the crew made up roughly 40% of the passengers, and roughly 30% of passengers survived.
The height of the flow represents the proportion of observations contained in the two variable levels they connect. About 50% of first class passengers were female, and nearly all female first class passengers survived. The crew was overwhelmingly male, and roughly 75% of this group perished.
As a second example, let’s look at the relationship between the number of carburetors, cylinders, and gears, and the transmission type (manual or automatic) for the 32 cars in the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset. We’ll treat each variable as categorical.
First, we need to prepare the data.
```
library(dplyr)
data(mtcars)
mtcars_table <- mtcars %>%
mutate(am = factor(am, labels = c("Auto", "Man")),
cyl = factor(cyl),
gear = factor(gear),
carb = factor(carb)) %>%
group_by(cyl, gear, carb, am) %>%
count()
head(mtcars_table)
```
```
## # A tibble: 6 × 5
## # Groups: cyl, gear, carb, am [6]
## cyl gear carb am n
## <fct> <fct> <fct> <fct> <int>
## 1 4 3 1 Auto 1
## 2 4 4 1 Man 4
## 3 4 4 2 Auto 2
## 4 4 4 2 Man 2
## 5 4 5 2 Man 2
## 6 6 3 1 Auto 2
```
Next create the graph. Several options and functions are added to enhance the results. Specifically,
* the flow borders are set to black (`geom_alluvium`)
* the strata are given transparency (`geom_stratum`)
* the strata are labeled and made wider (`scale_x_discrete`)
* titles are added (`labs`)
* the theme is simplified (`theme_minimal`)
* and the legend is suppressed (`theme`)
```
ggplot(mtcars_table,
aes(axis1 = carb,
axis2 = cyl,
axis3 = gear,
axis4 = am,
y = n)) +
geom_alluvium(aes(fill = carb), color="black") +
geom_stratum(alpha=.8) +
geom_text(stat = "stratum",
aes(label = after_stat(stratum))) +
scale_x_discrete(limits = c("Carburetors", "Cylinders",
"Gears", "Transmission"),
expand = c(.1, .1)) +
# scale_fill_brewer(palette="Paired") +
labs(title = "Mtcars data",
subtitle = "stratified by carb, cyl, gear, and am",
y = "Frequency") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 10\.11: Basic alluvial diagram for the mtcars dataset
I think that these changes make the graph easier to follow. For example, all 8 carburetor cars have 8 cylinders, 5 gears, and a manual transmission. Most 4 carburetor cars have 8 cylinders, 3 gears, and an automatic transmission.
See the [ggalluvial website](https://github.com/corybrunson/ggalluvial) (<https://github.com/corybrunson/ggalluvial>) for additional details.
10\.5 Heatmaps
--------------
A heatmap displays a set of data using colored tiles for each variable value within each observation. There are many varieties of heatmaps. Although base R comes with a `heatmap` function, we’ll use the more powerful [**superheat**](https://rlbarter.github.io/superheat/) package (I love these names).
First, let’s create a heatmap for the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset that comes with base R. The mtcars dataset contains information on 32 cars measured on 11 variables.
```
# create a heatmap
data(mtcars)
library(superheat)
superheat(mtcars, scale = TRUE)
```
Figure 10\.12: Basic heatmap
The `scale = TRUE` option standardizes the columns to a mean of zero and a standard deviation of one. Looking at the graph, we can see that the Merc 230 has a quarter mile time (*qsec*) that is well above average (bright yellow). The Lotus Europa has a weight that is well below average (dark blue).
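Standardizing with `scale = TRUE` is roughly equivalent to scaling the columns yourself before plotting. A sketch using base R’s `scale` function:
```
# a sketch: manual column standardization (roughly equivalent to scale = TRUE)
superheat(as.data.frame(scale(mtcars)))
```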
We can use clustering to sort the rows and/or columns. In the next example, we’ll sort the rows so that cars that are similar appear near each other. We will also adjust the text and label sizes.
```
# sorted heat map
superheat(mtcars,
scale = TRUE,
left.label.text.size=3,
bottom.label.text.size=3,
bottom.label.size = .05,
row.dendrogram = TRUE )
```
Figure 10\.13: Sorted heatmap
Here we can see that the Toyota Corolla and Fiat 128 have similar characteristics. The Lincoln Continental and Cadillac Fleetwood also have similar characteristics.
The `superheat` function requires that the data be in a particular format. Specifically:
* the data must be all numeric
* the row names are used to label the left axis. If the desired labels are in a column variable, the variable must be converted to row names (more on this below)
* missing values are allowed
Let’s use a heatmap to display changes in life expectancies over time for Asian countries. The data come from the [`gapminder`](Datasets.html#Gapminder) dataset (Appendix [A.8](Datasets.html#Gapminder)).
Since the data is in [long format](DataPrep.html#Reshaping) (Section [2\.2\.7](DataPrep.html#Reshaping)), we first have to convert it to wide format. Then we need to ensure that it is a data frame and convert the variable *country* into row names. Finally, we’ll sort the data by 2007 life expectancy. While we are at it, let’s change the color scheme.
```
# create heatmap for gapminder data (Asia)
library(tidyr)
library(dplyr)
# load data
data(gapminder, package="gapminder")
# subset Asian countries
asia <- gapminder %>%
filter(continent == "Asia") %>%
select(year, country, lifeExp)
# convert from long to wide format
plotdata <- pivot_wider(asia, names_from = year,
values_from = lifeExp)
# save country as row names
plotdata <- as.data.frame(plotdata)
row.names(plotdata) <- plotdata$country
plotdata$country <- NULL
# row order
sort.order <- order(plotdata$"2007")
# color scheme
library(RColorBrewer)
colors <- rev(brewer.pal(5, "Blues"))
# create the heat map
superheat(plotdata,
scale = FALSE,
left.label.text.size=3,
bottom.label.text.size=3,
bottom.label.size = .05,
heat.pal = colors,
order.rows = sort.order,
title = "Life Expectancy in Asia")
```
Figure 10\.14: Heatmap for time series
Japan, Hong Kong, and Israel have the highest life expectancies. South Korea was doing well in the 80s but has lost some ground. Life expectancy in Cambodia took a sharp hit in 1977\.
To see what you can do with heat maps, see the extensive `superheat` [vignette](https://rlbarter.github.io/superheat/) (<https://rlbarter.github.io/superheat/>).
10\.6 Radar charts
------------------
A radar chart (also called a spider or star chart) displays one or more groups or observations on three or more quantitative variables.
In the example below, we’ll compare dogs, pigs, and cows in terms of body size, brain size, and sleep characteristics (total sleep time, length of sleep cycle, and amount of REM sleep). The data come from the `msleep` dataset that ships with ggplot2\.
Radar charts can be created with the `ggradar` function in the **ggradar** package.
Next, we have to put the data in a specific format:
* The first variable should be called *group* and contain the identifier for each observation
* The numeric variables have to be rescaled so that their values range from 0 to 1
```
# create a radar chart
# prepare data
data(msleep, package = "ggplot2")
library(ggplot2)
library(ggradar)
library(scales)
library(dplyr)
plotdata <- msleep %>%
filter(name %in% c("Cow", "Dog", "Pig")) %>%
select(name, sleep_total, sleep_rem,
sleep_cycle, brainwt, bodywt) %>%
rename(group = name) %>%
mutate_at(vars(-group),
funs(rescale))
plotdata
```
```
## # A tibble: 3 × 6
## group sleep_total sleep_rem sleep_cycle brainwt bodywt
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Cow 0 0 1 1 1
## 2 Dog 1 1 0 0 0
## 3 Pig 0.836 0.773 0.5 0.312 0.123
```
```
# generate radar chart
ggradar(plotdata,
grid.label.size = 4,
axis.label.size = 4,
group.point.size = 5,
group.line.width = 1.5,
legend.text.size= 10) +
labs(title = "Mammals, size, and sleep")
```
Figure 10\.15: Basic radar chart
In the previous chart, the `mutate_at` function rescales all variables except *group*. The various size options control the sizes of the percent labels, variable names, points, lines, and legend text, respectively.
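The `rescale` function from the **scales** package maps each variable’s minimum to 0 and its maximum to 1. A quick illustration with made\-up numbers:
```
# a sketch: rescale() computes (x - min) / (max - min)
library(scales)
rescale(c(2, 5, 10))   # returns 0.000 0.375 1.000
```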
We can see from the chart that, relatively speaking, cows have large brain and body weights, long sleep cycles, short total sleep time, and little time in REM sleep. Dogs, in comparison, have small body and brain weights, short sleep cycles, and a large total sleep time and time in REM sleep. (The obvious conclusion is that I want to be a dog, but with a bigger brain.)
10\.7 Scatterplot matrix
------------------------
A scatterplot matrix is a collection of [scatterplots](Bivariate.html#Scatterplot) (Section [5\.2\.1](Bivariate.html#Scatterplot)) organized as a grid. It is similar to a [correlation plot](Models.html#Corrplot) (Section [9\.1](Models.html#Corrplot)), but instead of displaying correlations, it displays the underlying data.
You can create a scatterplot matrix using the [`ggpairs`](https://ggobi.github.io/ggally/#ggallyggpairs) function in the [**GGally**](https://ggobi.github.io/ggally/index.html) package.
We can illustrate its use by examining the relationships between mammal size and sleep characteristics using the msleep dataset. Brain weight and body weight are highly skewed (think mouse and elephant), so we’ll transform them to log brain weight and log body weight before creating the graph.
```
library(GGally)
# prepare data
data(msleep, package="ggplot2")
library(dplyr)
df <- msleep %>%
mutate(log_brainwt = log(brainwt),
log_bodywt = log(bodywt)) %>%
select(log_brainwt, log_bodywt, sleep_total, sleep_rem)
# create a scatterplot matrix
ggpairs(df)
```
Figure 10\.16: Scatterplot matrix
By default,
* the principal diagonal contains the [kernel density](Univariate.html#Kernel) charts (Section [4\.2\.2](Univariate.html#Kernel)) for each variable.
* The cells below the principal diagonal contain the scatterplots represented by the intersection of the row and column variables. The variables across the top are the *x*\-axes and the variables down the right side are the *y*\-axes.
* The cells above the principal diagonal contain the correlation coefficients.
For example, as brain weight increases, total sleep time and time in REM sleep decrease.
The graph can be modified by creating custom functions.
```
# custom function for density plot
my_density <- function(data, mapping, ...){
ggplot(data = data, mapping = mapping) +
geom_density(alpha = 0.5,
fill = "cornflowerblue", ...)
}
# custom function for scatterplot
my_scatter <- function(data, mapping, ...){
ggplot(data = data, mapping = mapping) +
geom_point(alpha = 0.5,
color = "cornflowerblue") +
geom_smooth(method=lm,
se=FALSE, ...)
}
# create scatterplot matrix
ggpairs(df,
lower=list(continuous = my_scatter),
diag = list(continuous = my_density)) +
labs(title = "Mammal size and sleep characteristics") +
theme_bw()
```
Figure 10\.17: Customized scatterplot matrix
Being able to write your own functions provides a great deal of flexibility. Additionally, since the resulting plot is a ggplot2 graph, additional functions can be added to alter the theme, title, labels, etc. See [`?ggpairs`](https://ggobi.github.io/ggally/#ggallyggpairs) for more details.
10\.8 Waterfall charts
----------------------
A waterfall chart illustrates the cumulative effect of a sequence of positive and negative values.
For example, we can plot the cumulative effect of revenue and expenses for a fictional company. First, let’s create a dataset.
```
# create company income statement
category <- c("Sales", "Services", "Fixed Costs",
"Variable Costs", "Taxes")
amount <- c(101000, 52000, -23000, -15000, -10000)
income <- data.frame(category, amount)
```
Now we can visualize this with a waterfall chart, using the [`waterfall`](https://www.rdocumentation.org/packages/waterfalls/versions/0.1.2/topics/waterfall) function in the **waterfalls** package.
```
# create waterfall chart
library(ggplot2)
library(waterfalls)
waterfall(income)
```
Figure 10\.18: Basic waterfall chart
We can also add a total (net) column. Since the result is a ggplot2 graph, we can use additional functions to customize the results.
```
# create waterfall chart with total column
waterfall(income,
calc_total=TRUE,
total_axis_text = "Net",
total_rect_text_color="black",
total_rect_color="goldenrod1") +
scale_y_continuous(label=scales::dollar) +
labs(title = "West Coast Profit and Loss",
subtitle = "Year 2017",
y="",
x="") +
theme_minimal()
```
Figure 10\.19: Waterfall chart with total column
Waterfall charts are particularly useful when you want to show change from a starting point to an end point and when there are positive and negative values.
10\.9 Word clouds
-----------------
A word cloud (also called a tag cloud) is basically an infographic that indicates the frequency of words in a collection of text (e.g., tweets, a text document, a set of text documents). There is a very nice script produced by [STHDA](http://www.sthda.com/english/) (<http://www.sthda.com/english/>) that will generate a word cloud directly from a text file.
To demonstrate, we’ll use [President Kennedy’s Address](Datasets.html#JFKspeech) (Appendix [A.17](Datasets.html#JFKspeech)) during the Cuban Missile crisis.
To use the script, there are several packages you need to install first. They were not mentioned earlier because they are only needed for this section.
```
# install packages for text mining
install.packages(c("tm", "SnowballC",
"wordcloud", "RColorBrewer",
"RCurl", "XML"))
```
Once the packages are installed, you can run the script on your text file.
```
# create a word cloud
script <- "http://www.sthda.com/upload/rquery_wordcloud.r"
source(script)
res<-rquery.wordcloud("JFKspeech.txt",
type ="file",
lang = "english",
textStemming=FALSE,
min.freq=3,
max.words=200)
```
Figure 10\.20: Word cloud
First, the script:
* converts each word to lowercase
* removes numbers, punctuation, and whitespace
* removes stopwords (common words such as “a”, “and”, and “the”)
* if `textStemming = TRUE` (the default is FALSE), words are stemmed (reducing words such as cats and catty to cat)
* counts the number of times each word appears
* drops words that appear less than 3 times (*min.freq*)
The script then plots up to 200 words (*max.words*) with word size proportional to the number of times the word appears.
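For readers who want to see what the script is doing, here is a rough, hypothetical sketch of the same steps using the **tm** and **wordcloud** packages directly (the processing choices are an approximation, not the script’s exact code):
```
# a rough sketch of the steps performed by rquery.wordcloud
library(tm)
library(wordcloud)
library(RColorBrewer)

docs <- Corpus(VectorSource(readLines("JFKspeech.txt")))
docs <- tm_map(docs, content_transformer(tolower))        # lowercase
docs <- tm_map(docs, removeNumbers)                       # drop numbers
docs <- tm_map(docs, removePunctuation)                   # drop punctuation
docs <- tm_map(docs, stripWhitespace)                     # collapse whitespace
docs <- tm_map(docs, removeWords, stopwords("english"))   # drop stopwords

tdm  <- TermDocumentMatrix(docs)                          # word-by-document counts
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

wordcloud(names(freq), freq,
          min.freq = 3,                     # drop words appearing < 3 times
          max.words = 200,                  # plot at most 200 words
          colors = brewer.pal(8, "Dark2"))
```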
As you can see, the most common words in the speech are *soviet*, *cuba*, *world*, *weapons*, etc. The terms *missile* and *ballistic* are used rarely.
The `rquery.wordcloud` function supports several languages, including Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish! See [http://www.sthda.com/english/wiki/word\-cloud\-generator\-in\-r\-one\-killer\-function\-to\-do\-everything\-you\-need](http://www.sthda.com/english/wiki/word-cloud-generator-in-r-one-killer-function-to-do-everything-you-need) for details.
10\.1 3\-D Scatterplot
----------------------
A scatterplot displays the relationship between **two** quantitative variables (Section [5\.2\.1](Bivariate.html#Scatterplot)). But what do you do when you want to observe the relation between **three** variables? One approach is the 3\-D scatterplot.
The **ggplot2** package and its extensions can’t create a 3\-D plot. However, you can create a 3\-D scatterplot with the `scatterplot3d` function in the **scatterplot3d** package.
Let’s say that we want to plot automobile mileage vs. engine displacement vs. car weight using the data in the mtcars data frame the comes installed with base R.
```
# basic 3-D scatterplot
library(scatterplot3d)
with(mtcars, {
scatterplot3d(x = disp,
y = wt,
z = mpg,
main="3-D Scatterplot Example 1")
})
```
Figure 10\.1: Basic 3\-D scatterplot
Now lets, modify the graph by replacing the points with filled blue circles, add drop lines to the x\-y plane, and create more meaningful labels.
```
library(scatterplot3d)
with(mtcars, {
scatterplot3d(x = disp,
y = wt,
z = mpg,
# filled blue circles
color="blue",
pch = 19,
# lines to the horizontal plane
type = "h",
main = "3-D Scatterplot Example 2",
xlab = "Displacement (cu. in.)",
ylab = "Weight (lb/1000)",
zlab = "Miles/(US) Gallon")
})
```
Figure 10\.2: 3\-D scatterplot with vertical lines
In the previous code, `pch = 19` tells the base R graphing function to plot points as filled circles (pch stands for plotting character, and 19 is the code for a filled circle). Similarly, `type = "h"` asks for vertical drop lines (like a histogram).
Next, let’s label the points. We can do this by saving the results of the `scatterplot3d` function to an object, using the `xyz.convert` function to convert coordinates from 3\-D (x, y, z) to 2\-D projections (x, y), and applying the `text` function to add labels to the graph.
```
library(scatterplot3d)
with(mtcars, {
s3d <- scatterplot3d(
x = disp,
y = wt,
z = mpg,
color = "blue",
pch = 19,
type = "h",
main = "3-D Scatterplot Example 3",
xlab = "Displacement (cu. in.)",
ylab = "Weight (lb/1000)",
zlab = "Miles/(US) Gallon")
# convert 3-D coords to 2D projection
s3d.coords <- s3d$xyz.convert(disp, wt, mpg)
# plot text with 50% shrink and place to right of points
text(s3d.coords$x,
s3d.coords$y,
labels = row.names(mtcars),
cex = .5,
pos = 4)
})
```
Figure 10\.3: 3\-D scatterplot with vertical lines and point labels
Almost there. As a final step, we’ll add information on the number of cylinders in each car. To do this, we’ll add a column to the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) data frame indicating the color for each point. For good measure, we will shorten the *y*\-axis, change the drop lines to dashed lines, and add a legend.
```
library(scatterplot3d)
# create column indicating point color
mtcars$pcolor[mtcars$cyl == 4] <- "red"
mtcars$pcolor[mtcars$cyl == 6] <- "blue"
mtcars$pcolor[mtcars$cyl == 8] <- "darkgreen"
with(mtcars, {
s3d <- scatterplot3d(
x = disp,
y = wt,
z = mpg,
color = pcolor,
pch = 19,
type = "h",
lty.hplot = 2,
scale.y = .75,
main = "3-D Scatterplot Example 4",
xlab = "Displacement (cu. in.)",
ylab = "Weight (lb/1000)",
zlab = "Miles/(US) Gallon")
s3d.coords <- s3d$xyz.convert(disp, wt, mpg)
text(s3d.coords$x,
s3d.coords$y,
labels = row.names(mtcars),
pos = 4,
cex = .5)
# add the legend
legend(# top left and indented
"topleft", inset=.05,
# suppress legend box, shrink text 50%
bty="n", cex=.5,
title="Number of Cylinders",
c("4", "6", "8"),
fill=c("red", "blue", "darkgreen"))
})
```
Figure 10\.4: 3\-D scatterplot with vertical lines and point labels and legend
We can easily see that the car with the highest mileage (Toyota Corolla) has low engine displacement, low weight, and 4 cylinders.
3\-D scatterplots can be difficult to interpret because they are static. The `scatter3d` function in the **car** package allows you to create an interactive 3\-D graph that can be manually rotated.
```
library(car)
with(mtcars,
scatter3d(disp, wt, mpg))
```
Figure 10\.5: Interactive 3\-D scatterplot
You can now use your mouse to rotate the axes and zoom in and out with the mouse scroll wheel. Note that this will only work if you actually run the code on your desktop. If you are trying to manipulate the graph in this book you’ll drive yourself crazy!
The graph can be highly customized. In the next graph,
* each axis is colored black
* points are colored red for automatic transmission and blue for manual transmission
* all 32 data points are labeled with their rowname
* the default best fit surface is suppressed
* all three axes are given longer labels
```
library(car)
with(mtcars,
scatter3d(disp, wt, mpg,
axis.col = c("black", "black", "black"),
groups = factor(am),
surface.col = c("red", "blue"),
col = c("red", "blue"),
text.col = "grey",
id = list(n=nrow(mtcars),
labels=rownames(mtcars)),
surface = FALSE,
xlab = "Displacement",
ylab = "Weight",
zlab = "Miles Per Gallon"))
```
The `id` option consists of a list of options that control the identification of points. If `n` is less than the total number of points, the *n* most extreme points are labelled. Here, all points are labeled and the row.names are used for the labels.
The `axis.col` option specifies the colors of the x, y, and z axes, respectively. The `surface.col` option specifies the colors of the points by group. The `col` option specifies the colors of the point labels by group. The `text.col` option specifies the color of the axis labels.
The `surface` option indicates whether a best fit surface should be plotted (the default is TRUE), and the `xlab`, `ylab`, and `zlab` options add labels to the axes.
Figure 10\.6: Interactive 3\-D scatterplot with labelled points
Labelled 3\-D scatterplots are most effective when the number of labelled points is small. Otherwise label overlap becomes a significant issue.
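One way to reduce overlap is to label only the most extreme points. The sketch below reuses the call from above but sets `n = 5` in the `id` list, so only the five most extreme cars are labelled (everything else is unchanged from the earlier example).
```
# label only the 5 most extreme points
library(car)
with(mtcars,
     scatter3d(disp, wt, mpg,
               groups = factor(am),
               surface = FALSE,
               id = list(n = 5, labels = rownames(mtcars)),
               xlab = "Displacement",
               ylab = "Weight",
               zlab = "Miles Per Gallon"))
```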
10\.2 Bubble charts
-------------------
A bubble chart is also useful when plotting the relationship between **three** quantitative variables. A bubble chart is basically just a scatterplot where the point size is proportional to the values of a third quantitative variable.
Using the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset, let’s plot car weight vs. mileage and use point size to represent horsepower.
```
# create a bubble plot
data(mtcars)
library(ggplot2)
ggplot(mtcars,
aes(x = wt, y = mpg, size = hp)) +
geom_point()
```
Figure 10\.7: Basic bubble plot
We can improve the default appearance by increasing the size of the bubbles, choosing a different point shape and color, and adding some transparency.
```
# create a bubble plot with modifications
ggplot(mtcars,
aes(x = wt, y = mpg, size = hp)) +
geom_point(alpha = .5,
fill="cornflowerblue",
color="black",
shape=21) +
scale_size_continuous(range = c(1, 14)) +
labs(title = "Auto mileage by weight and horsepower",
subtitle = "Motor Trend US Magazine (1973-74 models)",
x = "Weight (1000 lbs)",
y = "Miles/(US) gallon",
size = "Gross horsepower")
```
Figure 10\.8: Bubble plot with modifications
The `range` parameter in the `scale_size_continuous` function specifies the minimum and maximum size of the plotting symbol. The default is `range = c(1, 6)`.
The `shape = 21` option in the `geom_point` function specifies a circle that has both a border color and a fill color.
Clearly, miles per gallon decreases with increased car weight and horsepower. However, there is one car with low weight, high horsepower, and high gas mileage. Going back to the data, it’s the Lotus Europa.
Bubble charts are controversial for the same reason that pie charts are controversial. People are better at judging length than volume. However, they are quite popular.
10\.3 Biplots
-------------
3\-D scatterplots and bubble charts plot the relation between three quantitative variables. With more than three quantitative variables, a biplot ([Nishisato et al. 2021](#ref-RN12)) can be very useful. A biplot is a specialized graph that attempts to represent the relationship between observations, between variables, and between observations and variables, in a low (usually two) dimensional space.
It’s easiest to see how this works with an example. Let’s create a biplot for the [`mtcars`](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset, using the `fviz_pca` function from the **factoextra** package.
```
# create a biplot
# load data
data(mtcars)
# fit a principal components model
fit <- prcomp(x = mtcars,
center = TRUE,
scale = TRUE)
# plot the results
library(factoextra)
fviz_pca(fit,
repel = TRUE,
labelsize = 3) +
theme_bw() +
labs(title = "Biplot of mtcars data")
```
Figure 10\.9: Basic biplot
The `fviz_pca` function produces a ggplot2 graph.
*Dim1* and *Dim2* are the first two [principal components](https://towardsdatascience.com/a-one-stop-shop-for-principal-component-analysis-5582fb7e0a9c) \- linear combinations of the original *p* variables.
\[PC_{1} = \beta_{10} + \beta_{11}x_{1} + \beta_{12}x_{2} + \beta_{13}x_{3} + \dots + \beta_{1p}x_{p}\]
\[PC_{2} = \beta_{20} + \beta_{21}x_{1} + \beta_{22}x_{2} + \beta_{23}x_{3} + \dots + \beta_{2p}x_{p}\]
The weights of these linear combinations (the \(\beta_{ij}\)s) are chosen to maximize the variance accounted for in the original variables. Additionally, the principal components (PCs) are constrained to be uncorrelated with each other.
In this graph, the first PC accounts for 60% of the variability in the original data. The second PC accounts for 24%. Together, they account for 84% of the variability in the original *p* \= 11 variables.
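You can verify these percentages directly from the fitted model. A minimal check, assuming the `fit` object created above (the second row of the importance matrix gives the proportion of variance for each component):
```
# proportion of variance accounted for by the first two components
summary(fit)$importance[, 1:2]
```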
As you can see, both the observations (cars) and variables (car characteristics) are plotted in the same graph.
* Points represent observations. Smaller distances between points suggest similar values on the original set of variables. For example, the *Toyota Corolla* and *Honda Civic* are similar to each other, as are the *Chrysler Imperial* and *Lincoln Continental*. However, the *Toyota Corolla* is very different from the *Lincoln Continental*.
* The vectors (arrows) represent variables. The angle between two vectors reflects the correlation between the variables. Smaller angles indicate stronger correlations. For example, *gear* and *am* are positively correlated, *gear* and *qsec* are uncorrelated (90 degree angle), and *am* and *wt* are negatively correlated (angle greater than 90 degrees).
* The observations that are farthest along the direction of a variable’s vector have the highest values on that variable. For example, the *Toyota Corolla* and *Honda Civic* have higher values on *mpg*. The *Toyota Corona* has a higher *qsec*. The *Duster 360* has more *cylinders*.
As you can see, biplots convey an amazing amount of information in a single graph. However, care must be taken in interpreting biplots. They are only accurate when the percentage of variance accounted for is high. Always check your conclusion with the original data. For example, if the graph suggests that two cars are similar, go back to the original data and do a spot\-check to see if that is so.
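For example, a quick spot\-check of two cars that the biplot places close together might look like the sketch below; it simply prints the raw rows from `mtcars` so you can compare their values directly.
```
# spot-check: are these two cars really similar on the original variables?
mtcars[c("Toyota Corolla", "Honda Civic"), ]
```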
See the article by Forrest Young ([https://www.uv.es/visualstats/vista\-frames/help/lecturenotes/lecture13/biplot.html](https://www.uv.es/visualstats/vista-frames/help/lecturenotes/lecture13/biplot.html)) to learn more about interpreting biplots correctly.
A flow diagram represents a set of dynamic relationships. It usually captures the physical or metaphorical flow of people, materials, communications, or objects through a set of nodes in a network.
10\.4 Alluvial diagrams
-----------------------
Alluvial diagrams are useful for displaying the relation among two or more categorical variables. They use a flow analogy to represent changes in group composition across variables. This will be more understandable when you see an example.
Alluvial diagrams are created with the **ggalluvial** package, which generates ggplot2 graphs. As an example, let’s diagram the survival of Titanic passengers, using the [Titanic](Datasets.html#Titanic) dataset. We will look at the relationship between passenger class, sex, and survival.
To create an alluvial diagram, first count the frequency of each combination of the categorical variables.
```
# input data
library(readr)
titanic <- read_csv("titanic.csv")
# summarize data
library(dplyr)
titanic_table <- titanic %>%
group_by(Class, Sex, Survived) %>%
count()
# convert survived to a factor with labels
titanic_table$Survived <- factor(titanic_table$Survived,
levels = c("Yes", "No"))
# view the first 6 cases
head(titanic_table)
```
```
## # A tibble: 6 × 4
## # Groups: Class, Sex, Survived [6]
## Class Sex Survived n
## <chr> <chr> <fct> <int>
## 1 1st Female No 4
## 2 1st Female Yes 141
## 3 1st Male No 118
## 4 1st Male Yes 62
## 5 2nd Female No 13
## 6 2nd Female Yes 93
```
Next, create an alluvial diagram in ggplot2 using the `ggplot`, `geom_alluvium`, and `geom_stratum` functions. The categorical variables are mapped to *axes* and *n* to *y*. This will produce Figure [10\.10](Other.html#fig:alluvial2).
```
library(ggalluvial)
ggplot(titanic_table,
aes(axis1 = Class,
axis2 = Sex,
axis3 = Survived,
y = n)) +
geom_alluvium(aes(fill = Class)) +
geom_stratum() +
geom_text(stat = "stratum",
aes(label = after_stat(stratum)))
```
Figure 10\.10: Basic alluvial diagram
To interpret the graph, start with the variable on the left (*Class*) and follow the flow to the right. The height of each category level represents the proportion of observations in that level. For example, the crew made up roughly 40% of the passengers, and roughly 30% of passengers survived.
The height of the flow represents the proportion of observations contained in the two variable levels they connect. About 50% of first class passengers were females and all female first class passengers survived. The crew was overwhelmingly male and roughly 75% of this group perished.
As a second example, let’s look at the relationship between the number of carburetors, cylinders, gears, and the transmission type (manual or automatic) for the 32 cars in the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset. We’ll treat each variable as categorical.
First, we need to prepare the data.
```
library(dplyr)
data(mtcars)
mtcars_table <- mtcars %>%
mutate(am = factor(am, labels = c("Auto", "Man")),
cyl = factor(cyl),
gear = factor(gear),
carb = factor(carb)) %>%
group_by(cyl, gear, carb, am) %>%
count()
head(mtcars_table)
```
```
## # A tibble: 6 × 5
## # Groups: cyl, gear, carb, am [6]
## cyl gear carb am n
## <fct> <fct> <fct> <fct> <int>
## 1 4 3 1 Auto 1
## 2 4 4 1 Man 4
## 3 4 4 2 Auto 2
## 4 4 4 2 Man 2
## 5 4 5 2 Man 2
## 6 6 3 1 Auto 2
```
Next create the graph. Several options and functions are added to enhance the results. Specifically,
* the flow borders are set to black (`geom_alluvium`)
* the strata are given transparency (`geom_stratum`)
* the strata are labeled and made wider (`scale_x_discrete`)
* titles are added (`labs`)
* the theme is simplified (`theme_minimal`)
* and the legend is suppressed (`theme`)
```
ggplot(mtcars_table,
aes(axis1 = carb,
axis2 = cyl,
axis3 = gear,
axis4 = am,
y = n)) +
geom_alluvium(aes(fill = carb), color="black") +
geom_stratum(alpha=.8) +
geom_text(stat = "stratum",
aes(label = after_stat(stratum))) +
scale_x_discrete(limits = c("Carburetors", "Cylinders",
"Gears", "Transmission"),
expand = c(.1, .1)) +
# scale_fill_brewer(palette="Paired") +
labs(title = "Mtcars data",
subtitle = "stratified by carb, cyl, gear, and am",
y = "Frequency") +
theme_minimal() +
theme(legend.position = "none")
```
Figure 10\.11: Basic alluvial diagram for the mtcars dataset
I think that these changes make the graph easier to follow. For example, all 8 carburetor cars have 8 cylinders, 5 gears, and a manual transmission. Most 4 carburetor cars have 8 cylinders, 3 gears, and an automatic transmission.
See the [ggalluvial website](https://github.com/corybrunson/ggalluvial) (<https://github.com/corybrunson/ggalluvial>) for additional details.
10\.5 Heatmaps
--------------
A heatmap displays a set of data using colored tiles for each variable value within each observation. There are many varieties of heatmaps. Although base R comes with a `heatmap` function, we’ll use the more powerful [**superheat**](https://rlbarter.github.io/superheat/) package (I love these names).
First, let’s create a heatmap for the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset that comes with base R. The mtcars dataset contains information on 32 cars measured on 11 variables.
```
# create a heatmap
data(mtcars)
library(superheat)
superheat(mtcars, scale = TRUE)
```
Figure 10\.12: Basic heatmap
The `scale = TRUE` option standardizes the columns to a mean of zero and standard deviation of one. Looking at the graph, we can see that the Merc 230 has a quarter mile time (*qsec*) that is well above average (bright yellow). The Lotus Europa has a weight that is well below average (dark blue).
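If you want to see the standardized values that the colors are based on, base R’s `scale` function performs the same kind of column\-wise standardization; a quick sketch:
```
# columns standardized to mean 0 and sd 1 (the same idea as scale = TRUE)
head(round(scale(mtcars), 2), 3)
```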
We can use clustering to sort the rows and/or columns. In the next example, we’ll sort the rows so that cars that are similar appear near each other. We will also adjust the text and label sizes.
```
# sorted heat map
superheat(mtcars,
scale = TRUE,
left.label.text.size=3,
bottom.label.text.size=3,
bottom.label.size = .05,
row.dendrogram = TRUE )
```
Figure 10\.13: Sorted heatmap
Here we can see that the Toyota Corolla and Fiat 128 have similar characteristics. The Lincoln Continental and Cadillac Fleetwood also have similar characteristics.
The `superheat` function requires that the data be in a particular format. Specifically
* the data must be all numeric
* the row names are used to label the left axis. If the desired labels are in a column variable, the variable must be converted to row names (more on this below)
* missing values are allowed
Let’s use a heatmap to display changes in life expectancies over time for Asian countries. The data come from the [`gapminder`](Datasets.html#Gapminder) dataset (Appendix [A.8](Datasets.html#Gapminder)).
Since the data is in [long format](DataPrep.html#Reshaping) (Section [2\.2\.7](DataPrep.html#Reshaping)), we first have to convert to wide format. Then we need to ensure that it is a data frame and convert the variable *country* into row names. Finally, we’ll sort the data by 2007 life expectancy. While we are at it, let’s change the color scheme.
```
# create heatmap for gapminder data (Asia)
library(tidyr)
library(dplyr)
# load data
data(gapminder, package="gapminder")
# subset Asian countries
asia <- gapminder %>%
filter(continent == "Asia") %>%
select(year, country, lifeExp)
# convert from long to wide format
plotdata <- pivot_wider(asia, names_from = year,
values_from = lifeExp)
# save country as row names
plotdata <- as.data.frame(plotdata)
row.names(plotdata) <- plotdata$country
plotdata$country <- NULL
# row order
sort.order <- order(plotdata$"2007")
# color scheme
library(RColorBrewer)
colors <- rev(brewer.pal(5, "Blues"))
# create the heat map
superheat(plotdata,
scale = FALSE,
left.label.text.size=3,
bottom.label.text.size=3,
bottom.label.size = .05,
heat.pal = colors,
order.rows = sort.order,
title = "Life Expectancy in Asia")
```
Figure 10\.14: Heatmap for time series
Japan, Hong Kong, and Israel have the highest life expectancies. South Korea was doing well in the 80s but has lost some ground. Life expectancy in Cambodia took a sharp hit in 1977\.
To see what you can do with heat maps, see the extensive `superheat` [vignette](https://rlbarter.github.io/superheat/) (<https://rlbarter.github.io/superheat/>).
10\.6 Radar charts
------------------
A radar chart (also called a spider or star chart) displays one or more groups or observations on three or more quantitative variables.
In the example below, we’ll compare dogs, pigs, and cows in terms of body size, brain size, and sleep characteristics (total sleep time, length of sleep cycle, and amount of REM sleep). The data come from the `msleep` dataset that ships with ggplot2\.
Radar charts can be created with the `ggradar` function in the **ggradar** package.
Next, we have to put the data in a specific format:
* The first variable should be called *group* and contain the identifier for each observation
* The numeric variables have to be rescaled so that their values range from 0 to 1
```
# create a radar chart
# prepare data
data(msleep, package = "ggplot2")
library(ggplot2)
library(ggradar)
library(scales)
library(dplyr)
plotdata <- msleep %>%
filter(name %in% c("Cow", "Dog", "Pig")) %>%
select(name, sleep_total, sleep_rem,
sleep_cycle, brainwt, bodywt) %>%
rename(group = name) %>%
mutate_at(vars(-group), rescale)
plotdata
```
```
## # A tibble: 3 × 6
## group sleep_total sleep_rem sleep_cycle brainwt bodywt
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Cow 0 0 1 1 1
## 2 Dog 1 1 0 0 0
## 3 Pig 0.836 0.773 0.5 0.312 0.123
```
```
# generate radar chart
ggradar(plotdata,
grid.label.size = 4,
axis.label.size = 4,
group.point.size = 5,
group.line.width = 1.5,
legend.text.size= 10) +
labs(title = "Mammals, size, and sleep")
```
Figure 10\.15: Basic radar chart
In the previous example, the `mutate_at` function rescales all variables except *group*. In the `ggradar` call, the `grid.label.size`, `axis.label.size`, `group.point.size`, `group.line.width`, and `legend.text.size` options control the size of the percent labels, the variable names, the points, the lines, and the legend text, respectively.
We can see from the chart that, relatively speaking, cows have large brain and body weights, long sleep cycles, short total sleep time, and little time in REM sleep. Dogs, in comparison, have small body and brain weights, short sleep cycles, and a large amount of total sleep time and time in REM sleep. (The obvious conclusion is that I want to be a dog \- but with a bigger brain.)
10\.7 Scatterplot matrix
------------------------
A scatterplot matrix is a collection of [scatterplots](Bivariate.html#Scatterplot) (Section [5\.2\.1](Bivariate.html#Scatterplot)) organized as a grid. It is similar to a [correlation plot](Models.html#Corrplot) (Section [9\.1](Models.html#Corrplot)), but instead of displaying correlations, it displays the underlying data.
You can create a scatterplot matrix using the [`ggpairs`](https://ggobi.github.io/ggally/#ggallyggpairs) function in the [**GGally**](https://ggobi.github.io/ggally/index.html) package.
We can illustrate its use by examining the relationships between mammal size and sleep characteristics using the msleep dataset. Brain weight and body weight are highly skewed (think mouse and elephant), so we’ll transform them to log brain weight and log body weight before creating the graph.
```
library(GGally)
# prepare data
data(msleep, package="ggplot2")
library(dplyr)
df <- msleep %>%
mutate(log_brainwt = log(brainwt),
log_bodywt = log(bodywt)) %>%
select(log_brainwt, log_bodywt, sleep_total, sleep_rem)
# create a scatterplot matrix
ggpairs(df)
```
Figure 10\.16: Scatterplot matrix
By default,
* the principal diagonal contains the [kernel density](Univariate.html#Kernel) charts (Section [4\.2\.2](Univariate.html#Kernel)) for each variable.
* The cells below the principal diagonal contain the scatterplots represented by the intersection of the row and column variables. The variables across the top are the *x*\-axes and the variables down the right side are the *y*\-axes.
* The cells above the principal diagonal contain the correlation coefficients.
For example, as brain weight increases, total sleep time and time in REM sleep decrease.
The graph can be modified by creating custom functions.
```
# custom function for density plot
my_density <- function(data, mapping, ...){
ggplot(data = data, mapping = mapping) +
geom_density(alpha = 0.5,
fill = "cornflowerblue", ...)
}
# custom function for scatterplot
my_scatter <- function(data, mapping, ...){
ggplot(data = data, mapping = mapping) +
geom_point(alpha = 0.5,
color = "cornflowerblue") +
geom_smooth(method=lm,
se=FALSE, ...)
}
# create scatterplot matrix
ggpairs(df,
lower=list(continuous = my_scatter),
diag = list(continuous = my_density)) +
labs(title = "Mammal size and sleep characteristics") +
theme_bw()
```
Figure 10\.17: Customized scatterplot matrix
Being able to write your own functions provides a great deal of flexibility. Additionally, since the resulting plot is a ggplot2 graph, additional functions can be added to alter the theme, title, labels, etc. See [`?ggpairs`](https://ggobi.github.io/ggally/#ggallyggpairs) for more details.
10\.8 Waterfall charts
----------------------
A waterfall chart illustrates the cumulative effect of a sequence of positive and negative values.
For example, we can plot the cumulative effect of revenue and expenses for a fictional company. First, let’s create a dataset.
```
# create company income statement
category <- c("Sales", "Services", "Fixed Costs",
"Variable Costs", "Taxes")
amount <- c(101000, 52000, -23000, -15000, -10000)
income <- data.frame(category, amount)
```
Now we can visualize this with a waterfall chart, using the [`waterfall`](https://www.rdocumentation.org/packages/waterfalls/versions/0.1.2/topics/waterfall) function in the **waterfalls** package.
```
# create waterfall chart
library(ggplot2)
library(waterfalls)
waterfall(income)
```
Figure 10\.18: Basic waterfall chart
We can also add a total (net) column. Since the result is a ggplot2 graph, we can use additional functions to customize the results.
```
# create waterfall chart with total column
waterfall(income,
calc_total=TRUE,
total_axis_text = "Net",
total_rect_text_color="black",
total_rect_color="goldenrod1") +
scale_y_continuous(label=scales::dollar) +
labs(title = "West Coast Profit and Loss",
subtitle = "Year 2017",
y="",
x="") +
theme_minimal()
```
Figure 10\.19: Waterfall chart with total column
Waterfall charts are particularly useful when you want to show change from a starting point to an end point and when there are positive and negative values.
10\.9 Word clouds
-----------------
A word cloud (also called a tag cloud) is basically an infographic that indicates the frequency of words in a collection of text (e.g., tweets, a text document, a set of text documents). There is a very nice script produced by [STHDA](http://www.sthda.com/english/) (<http://www.sthda.com/english/>) that will generate a word cloud directly from a text file.
To demonstrate, we’ll use [President Kennedy’s Address](Datasets.html#JFKspeech) (Appendix [A.17](Datasets.html#JFKspeech)) during the Cuban Missile crisis.
To use the script, there are several packages you need to install first. They were not mentioned earlier because they are only needed for this section.
```
# install packages for text mining
install.packages(c("tm", "SnowballC",
"wordcloud", "RColorBrewer",
"RCurl", "XML"))
```
Once the packages are installed, you can run the script on your text file.
```
# create a word cloud
script <- "http://www.sthda.com/upload/rquery_wordcloud.r"
source(script)
res<-rquery.wordcloud("JFKspeech.txt",
type ="file",
lang = "english",
textStemming=FALSE,
min.freq=3,
max.words=200)
```
Figure 10\.20: Word cloud
First, the script
* converts each word to lowercase
* removes numbers, punctuation, and whitespace
* removes stopwords (common words such as “a”, “and”, and “the”)
* if `textStemming = TRUE` (the default is FALSE), stems the words (reducing words such as cats and catty to cat)
* counts the number of times each word appears
* drops words that appear fewer than 3 times (*min.freq*)
The script then plots up to 200 words (*max.words*) with word size proportional to the number of times the word appears.
As you can see, the most common words in the speech are *soviet*, *cuba*, *world*, *weapons*, etc. The terms *missile* and *ballistic* are used rarely.
The `rquery.wordcloud` function supports several languages, including Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, and Swedish! See [http://www.sthda.com/english/wiki/word\-cloud\-generator\-in\-r\-one\-killer\-function\-to\-do\-everything\-you\-need](http://www.sthda.com/english/wiki/word-cloud-generator-in-r-one-killer-function-to-do-everything-you-need) for details.
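If you want to see the effect of stemming, the same call can simply be rerun with `textStemming = TRUE`; a small sketch using only the arguments shown above:
```
# rerun the word cloud with stemming turned on
res_stemmed <- rquery.wordcloud("JFKspeech.txt",
                                type = "file",
                                lang = "english",
                                textStemming = TRUE,
                                min.freq = 3,
                                max.words = 200)
```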
Chapter 11 Customizing Graphs
=============================
Graph defaults are fine for quick data exploration, but when you want to publish your results to a blog, paper, article or poster, you’ll probably want to customize the results. Customization can improve the clarity and attractiveness of a graph.
This chapter describes how to customize a graph’s axes, gridlines, colors, fonts, labels, and legend. It also describes how to add annotations (text and lines). The last section describes how to combine two or more graphs into one composite image.
11\.1 Axes
----------
The *x*\-axis and *y*\-axis represent numeric, categorical, or date values. You can modify the default scales and labels with the functions below.
### 11\.1\.1 Quantitative axes
A quantitative axis is modified using the `scale_x_continuous` or `scale_y_continuous` function.
Options include
* `breaks` \- a numeric vector of positions
* `limits` \- a numeric vector with the min and max for the scale
```
# customize numerical x and y axes
library(ggplot2)
ggplot(mpg, aes(x=displ, y=hwy)) +
geom_point() +
scale_x_continuous(breaks = seq(1, 7, 1),
limits=c(1, 7)) +
scale_y_continuous(breaks = seq(10, 45, 5),
limits=c(10, 45))
```
Figure 11\.1: Customized quantitative axes
The `seq(from, to, by)` function generates a vector of numbers starting with *from*, ending with *to*, and incremented by *by*. For example
```
seq(1, 8, 2)
```
is equivalent to
```
c(1, 3, 5, 7)
```
#### 11\.1\.1\.1 Numeric formats
The `scales` package provides a number of functions for formatting numeric labels. Some of the most useful are
* `dollar`
* `comma`
* `percent`
Let’s demonstrate these functions with some synthetic data.
```
# create some data
set.seed(1234)
df <- data.frame(xaxis = rnorm(50, 100000, 50000),
yaxis = runif(50, 0, 1),
pointsize = rnorm(50, 1000, 1000))
library(ggplot2)
# plot the axes and legend with formats
ggplot(df, aes(x = xaxis,
y = yaxis,
size=pointsize)) +
geom_point(color = "cornflowerblue",
alpha = .6) +
scale_x_continuous(label = scales::comma) +
scale_y_continuous(label = scales::percent) +
scale_size(range = c(1,10), # point size range
label = scales::dollar)
```
Figure 11\.2: Formatted axes
To format currency values as euros, you can use
`label = scales::dollar_format(prefix = "", suffix = "\u20ac")`.
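Applied to the previous plot, this would look something like the sketch below (it assumes the `df` data frame created above).
```
# format the x axis as euros
ggplot(df, aes(x = xaxis, y = yaxis)) +
  geom_point(color = "cornflowerblue",
             alpha = .6) +
  scale_x_continuous(
    label = scales::dollar_format(prefix = "", suffix = "\u20ac"))
```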
### 11\.1\.2 Categorical axes
A categorical axis is modified using the `scale_x_discrete` or `scale_y_discrete` function.
Options include
* `limits` \- a character vector (the levels of the quantitative variable in the desired order)
* `labels` \- a character vector of labels (optional labels for these levels)
```
library(ggplot2)
# customize categorical x axis
ggplot(mpg, aes(x = class)) +
geom_bar(fill = "steelblue") +
scale_x_discrete(limits = c("pickup", "suv", "minivan",
"midsize", "compact", "subcompact",
"2seater"),
labels = c("Pickup\nTruck",
"Sport Utility\nVehicle",
"Minivan", "Mid-size", "Compact",
"Subcompact", "2-Seater"))
```
Figure 11\.3: Customized categorical axis
### 11\.1\.3 Date axes
A date axis is modified using the `scale_x_date` or `scale_y_date` function.
Options include
* `date_breaks` \- a string giving the distance between breaks like “2 weeks” or “10 years”
* `date_labels` \- A string giving the formatting specification for the labels
The table below gives the formatting specifications for date values.
| Symbol | Meaning | Example |
| --- | --- | --- |
| %d | day as a number (0\-31\) | 01\-31 |
| %a | abbreviated weekday | Mon |
| %A | unabbreviated weekday | Monday |
| %m | month (00\-12\) | 00\-12 |
| %b | abbreviated month | Jan |
| %B | unabbreviated month | January |
| %y | 2\-digit year | 07 |
| %Y | 4\-digit year | 2007 |
```
library(ggplot2)
# customize date scale on x axis
ggplot(economics, aes(x = date, y = unemploy)) +
geom_line(color="darkgreen") +
scale_x_date(date_breaks = "5 years",
date_labels = "%b-%y")
```
Figure 11\.4: Customized date axis
11\.2 Colors
------------
The default colors in ggplot2 graphs are functional, but often not as visually appealing as they can be. Happily this is easy to change.
Specific colors can be
* specified for points, lines, bars, areas, and text, or
* mapped to the levels of a variable in the dataset.
### 11\.2\.1 Specifying colors manually
To specify a color for points, lines, or text, use the `color = "colorname"` option in the appropriate geom. To specify a color for bars and areas, use the `fill = "colorname"` option.
Examples:
* `geom_point(color = "blue")`
* `geom_bar(fill = "steelblue")`
Colors can be specified by name or hex code ([https://r\-charts.com/colors/](https://r-charts.com/colors/)).
To assign colors to the levels of a variable, use the `scale_color_manual` and `scale_fill_manual` functions. The former is used to specify the colors for points and lines, while the latter is used for bars and areas.
Here is an example, using the `diamonds` dataset that ships with `ggplot2`. The dataset contains the prices and attributes of 54,000 round cut diamonds.
```
# specify fill color manually
library(ggplot2)
ggplot(diamonds, aes(x = cut, fill = clarity)) +
geom_bar() +
scale_fill_manual(values = c("darkred", "steelblue",
"darkgreen", "gold",
"brown", "purple",
"grey", "khaki4"))
```
Figure 11\.5: Manual color selection
If you are aesthetically challenged like me, an alternative is to use a predefined palette.
### 11\.2\.2 Color palettes
There are *many* predefined color palettes available in R.
#### 11\.2\.2\.1 RColorBrewer
The most popular alternative palettes are probably the [ColorBrewer](http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3) palettes.
Figure 11\.6: RColorBrewer palettes
You can specify these palettes with the `scale_color_brewer` and `scale_fill_brewer` functions.
```
# use an ColorBrewer fill palette
ggplot(diamonds, aes(x = cut, fill = clarity)) +
geom_bar() +
scale_fill_brewer(palette = "Dark2")
```
Figure 11\.7: Using RColorBrewer
Adding `direction = -1` to these functions reverses the order of the colors in a palette.
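For example, a small variation on the plot above with the palette order reversed:
```
# reverse the order of the Dark2 palette
ggplot(diamonds, aes(x = cut, fill = clarity)) +
  geom_bar() +
  scale_fill_brewer(palette = "Dark2", direction = -1)
```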
#### 11\.2\.2\.2 Viridis
The [viridis](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) palette is another popular choice.
For continuous scales use
* `scale_fill_viridis_c`
* `scale_color_viridis_c`
For discrete (categorical scales) use
* `scale_fill_viridis_d`
* `scale_color_viridis_d`
```
# Use a viridis fill palette
ggplot(diamonds, aes(x = cut, fill = clarity)) +
geom_bar() +
scale_fill_viridis_d()
```
Figure 11\.8: Using the viridis palette
#### 11\.2\.2\.3 Other palettes
Other palettes to explore include
| Package | URL |
| --- | --- |
| **dutchmasters** | <https://github.com/EdwinTh/dutchmasters> |
| **ggpomological** | <https://github.com/gadenbuie/ggpomological> |
| **LaCroixColoR** | <https://github.com/johannesbjork/LaCroixColoR> |
| **nord** | <https://github.com/jkaupp/nord> |
| **ochRe** | <https://github.com/ropenscilabs/ochRe> |
| **palettetown** | <https://github.com/timcdlucas/palettetown> |
| **pals** | <https://github.com/kwstat/pals> |
| **rcartocolor** | <https://github.com/Nowosad/rcartocolor> |
| **wesanderson** | <https://github.com/karthik/wesanderson> |
If you want to explore **all** the palette options (or nearly all), take a look at the **paletteer** (<https://github.com/EmilHvitfeldt/paletteer>) package.
To learn more about color specifications, see the *Cookbook for R* page on ggplot2 colors ([http://www.cookbook\-r.com/Graphs/Colors\_(ggplot2\)/](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/)). For advice on selecting colors, see Section [14\.3](Advice.html#ColorChoice).
11\.3 Points \& Lines
---------------------
### 11\.3\.1 Points
For `ggplot2` graphs, the default point is a filled circle. To specify a different shape, use the `shape = #` option in the `geom_point` function. To map shapes to the levels of a categorical variable use the `shape = variablename` option in the `aes` function.
Examples:
* `geom_point(shape = 1)`
* `geom_point(aes(shape = sex))`
Available shapes are given in the table below.
Figure 11\.9: Point shapes
Shapes 21 through 26 provide for both a fill color and a border color.
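A quick sketch illustrating this, using shape 21 with a separate fill and border color:
```
# shape 21: fill controls the interior, color controls the border
library(ggplot2)
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(shape = 21, size = 3,
             fill = "cornflowerblue", color = "black")
```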
### 11\.3\.2 Lines
The default line type is a solid line. To change the linetype, use the `linetype = #` option in the `geom_line` function. To map linetypes to the levels of a categorical variable use the `linetype = variablename` option in the `aes` function.
Examples:
* `geom_line(linetype = 1)`
* `geom_line(aes(linetype = sex))`
Available linetypes are given in the table below.
Figure 11\.10: Linetypes
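As a small sketch of mapping linetype to a variable, the `economics_long` dataset that ships with ggplot2 can be used (this assumes its `variable` and `value01` columns, where `value01` is each series rescaled to 0\-1):
```
# map linetype to the levels of a categorical variable
library(ggplot2)
ggplot(economics_long,
       aes(x = date, y = value01, linetype = variable)) +
  geom_line()
```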
11\.4 Fonts
-----------
R does not have great support for fonts, but with a bit of work, you can change the fonts that appear in your graphs. First you need to install and set\-up the `extrafont` package.
```
# one time install
install.packages("extrafont")
library(extrafont)
font_import()
# see what fonts are now available
fonts()
```
Apply the new font(s) using the `text` option in the `theme` function.
```
# specify new font
library(extrafont)
ggplot(mpg, aes(x = displ, y=hwy)) +
geom_point() +
labs(title = "Diplacement by Highway Mileage",
subtitle = "MPG dataset") +
theme(text = element_text(size = 16, family = "Comic Sans MS"))
```
Figure 11\.11: Alternative fonts
To learn more about customizing fonts, see Andrew Heiss’s blog on **Working with R, Cairo graphics, custom fonts, and ggplot** ([https://www.andrewheiss.com/blog/2017/09/27/working\-with\-r\-cairo\-graphics\-custom\-fonts\-and\-ggplot/\#windows](https://www.andrewheiss.com/blog/2017/09/27/working-with-r-cairo-graphics-custom-fonts-and-ggplot/#windows)).
11\.5 Legends
-------------
In `ggplot2`, legends are automatically created when variables are mapped to color, fill, linetype, shape, size, or alpha.
You have a great deal of control over the look and feel of these legends. Modifications are usually made through the `theme` function and/or the `labs` function. Here are some of the most sought after changes.
### 11\.5\.1 Legend location
The legend can appear anywhere in the graph. By default, it’s placed on the right. You can change the default with
`theme(legend.position = position)`
where
| Position | Location |
| --- | --- |
| “top” | above the plot area |
| “right” | right of the plot area |
| “bottom” | below the plot area |
| “left” | left of the plot area |
| c(*x*, *y*) | within the plot area. The *x* and *y* values must range between 0 and 1\. c(0,0\) represents (left, bottom) and c(1,1\) represents (right, top). |
| “none” | suppress the legend |
For example, to place the legend at the top, use the following code.
```
# place legend on top
ggplot(mpg,
aes(x = displ, y=hwy, color = class)) +
geom_point(size = 4) +
labs(title = "Diplacement by Highway Mileage") +
theme_minimal() +
theme(legend.position = "top")
```
Figure 11\.12: Moving the legend to the top
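To use the `c(x, y)` form from the table above and place the legend inside the plotting region, something like this sketch works (the exact coordinates are just a guess and usually need tweaking):
```
# place the legend inside the plot area
ggplot(mpg,
       aes(x = displ, y=hwy, color = class)) +
  geom_point(size = 4) +
  labs(title = "Displacement by Highway Mileage") +
  theme_minimal() +
  theme(legend.position = c(0.85, 0.70))
```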
### 11\.5\.2 Legend title
You can change the legend title through the `labs` function. Use `color`, `fill`, `size`, `shape`, `linetype`, and `alpha` to give new titles to the corresponding legends.
The alignment of the legend title is controlled through the `legend.title.align` option in the `theme` function. (0\=left, 0\.5\=center, 1\=right)
```
# change the default legend title
ggplot(mpg,
aes(x = displ, y=hwy, color = class)) +
geom_point(size = 4) +
labs(title = "Diplacement by Highway Mileage",
color = "Automobile\nClass") +
theme_minimal() +
theme(legend.title.align=0.5)
```
Figure 11\.13: Changing the legend title
11\.6 Labels
------------
Labels are a key ingredient in rendering a graph understandable. They are added with the `labs` function. Available options are given below.
| option | Use |
| --- | --- |
| title | main title |
| subtitle | subtitle |
| caption | caption (bottom right by default) |
| x | horizontal axis |
| y | vertical axis |
| color | color legend title |
| fill | fill legend title |
| size | size legend title |
| linetype | linetype legend title |
| shape | shape legend title |
| alpha | transparency legend title |
For example
```
# add plot labels
ggplot(mpg,
aes(x = displ, y=hwy,
color = class,
shape = factor(year))) +
geom_point(size = 3,
alpha = .5) +
labs(title = "Mileage by engine displacement",
subtitle = "Data from 1999 and 2008",
caption = "Source: EPA (http://fueleconomy.gov)",
x = "Engine displacement (litres)",
y = "Highway miles per gallon",
color = "Car Class",
shape = "Year") +
theme_minimal()
```
Figure 11\.14: Graph with labels
This is not a great graph \- it is too busy, making the identification of patterns difficult. It would be better to facet by the year variable, the class variable, or both (Section [6\.2](Multivariate.html#Faceting)). Trend lines would also be helpful (Section [5\.2\.1\.1](Bivariate.html#BestFit)).
11\.7 Annotations
-----------------
Annotations are additional information added to a graph to highlight important points.
### 11\.7\.1 Adding text
There are two primary reasons to add text to a graph.
One is to identify the numeric qualities of a geom. For example, we may want to identify points with labels in a scatterplot, or label the heights of bars in a bar chart.
Another reason is to provide additional information. We may want to add notes about the data, point out outliers, etc.
#### 11\.7\.1\.1 Labeling values
Consider the following scatterplot, based on the car data in the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset.
```
# basic scatterplot
data(mtcars)
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
```
Figure 11\.15: Simple scatterplot
Let’s label each point with the name of the car it represents.
```
# scatterplot with labels
data(mtcars)
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
geom_text(label = row.names(mtcars))
```
Figure 11\.16: Scatterplot with labels
The overlapping labels make this chart difficult to read. The `ggrepel` package can help us here. It nudges text to avoid overlaps.
```
# scatterplot with non-overlapping labels
data(mtcars)
library(ggrepel)
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
geom_text_repel(label = row.names(mtcars),
size=3)
```
Figure 11\.17: Scatterplot with non\-overlapping labels
Much better.
Adding labels to bar charts is covered in the aptly named *labeling bars* section (Section [4\.1\.1\.3](Univariate.html#LabelingBars)).
#### 11\.7\.1\.2 Adding additional information
We can place text anywhere on a graph using the `annotate` function. The format is
```
annotate("text",
x, y,
label = "Some text",
color = "colorname",
size=textsize)
```
where *x* and *y* are the coordinates on which to place the text. The `color` and `size` parameters are optional.
By default, the text will be centered. Use `hjust` and `vjust` to change the alignment.
* `hjust` 0 \= left justified, 0\.5 \= centered, and 1 \= right justified.
* `vjust` 0 \= above, 0\.5 \= centered, and 1 \= below.
Continuing the previous example.
```
# scatterplot with explanatory text
data(mtcars)
library(ggrepel)
txt <- paste("The relationship between car weight",
"and mileage appears to be roughly linear",
sep = "\n")
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point(color = "red") +
geom_text_repel(label = row.names(mtcars),
size=3) +
ggplot2::annotate("text",
6, 30,
label=txt,
color = "red",
hjust = 1) +
theme_bw()
```
Figure 11\.18: Scatterplot with arranged labels
See this Stack Overflow post ([https://stackoverflow.com/questions/7263849/what\-do\-hjust\-and\-vjust\-do\-when\-making\-a\-plot\-using\-ggplot](https://stackoverflow.com/questions/7263849/what-do-hjust-and-vjust-do-when-making-a-plot-using-ggplot)) for more details.
### 11\.7\.2 Adding lines
Horizontal and vertical lines can be added using:
* `geom_hline(yintercept = a)`
* `geom_vline(xintercept = b)`
where *a* is a number on the *y*\-axis and *b* is a number on the *x*\-axis respectively. Other options include `linetype` and `color`.
In the following example, we plot city vs. highway miles and indicate the mean highway miles with a horizontal line and label.
```
# add annotation line and text label
min_cty <- min(mpg$cty)
mean_hwy <- mean(mpg$hwy)
ggplot(mpg,
aes(x = cty, y=hwy, color=drv)) +
geom_point(size = 3) +
geom_hline(yintercept = mean_hwy,
color = "darkred",
linetype = "dashed") +
ggplot2::annotate("text",
min_cty,
mean_hwy + 1,
label = "Mean",
color = "darkred") +
labs(title = "Mileage by drive type",
x = "City miles per gallon",
y = "Highway miles per gallon",
color = "Drive")
```
Figure 11\.19: Graph with line annotation
We could add a vertical line for the mean city miles per gallon as well. In any case, always label your annotation lines in some way. Otherwise the reader will not know what they mean.
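Here is a sketch of that addition (the `mean_cty` variable is new; everything else reuses the code above):
```
# add a vertical reference line for mean city mpg as well
mean_cty <- mean(mpg$cty)
ggplot(mpg,
       aes(x = cty, y=hwy, color=drv)) +
  geom_point(size = 3) +
  geom_hline(yintercept = mean_hwy,
             color = "darkred",
             linetype = "dashed") +
  geom_vline(xintercept = mean_cty,
             color = "darkred",
             linetype = "dashed") +
  ggplot2::annotate("text",
                    mean_cty + 1,
                    max(mpg$hwy),
                    label = "Mean city",
                    color = "darkred") +
  labs(title = "Mileage by drive type",
       x = "City miles per gallon",
       y = "Highway miles per gallon",
       color = "Drive")
```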
### 11\.7\.3 Highlighting a single group
Sometimes you want to highlight a single group in your graph. The [`gghighlight`](https://www.rdocumentation.org/packages/gghighlight/versions/0.0.1/topics/gghighlight) function in the `gghighlight` package is designed for this.
Here is an example with a scatterplot. Midsize cars are highlighted.
```
# highlight a set of points
library(ggplot2)
library(gghighlight)
ggplot(mpg, aes(x = cty, y = hwy)) +
geom_point(color = "red",
size=2) +
gghighlight(class == "midsize")
```
Figure 11\.20: Highlighting a group
Below is an example with a bar chart. Again, midsize cars are highlighted.
```
# highlight a single bar
library(gghighlight)
ggplot(mpg, aes(x = class)) +
geom_bar(fill = "red") +
gghighlight(class == "midsize")
```
Figure 11\.21: Highlighting a group
Highlighting is helpful for drawing the reader’s attention to a particular group of observations and their standing with respect to the other observations in the data.
11\.8 Themes
------------
`ggplot2` themes control the appearance of all non\-data related components of a plot. You can change the look and feel of a graph by altering the elements of its theme.
### 11\.8\.1 Altering theme elements
The `theme` function is used to modify individual components of a theme.
Consider the following graph. It shows the number of male and female faculty by rank and discipline at a particular university in 2008\-2009\. The data come from the `Salaries` dataset in the `carData` package.
```
# create graph
data(Salaries, package = "carData")
p <- ggplot(Salaries, aes(x = rank, fill = sex)) +
geom_bar() +
facet_wrap(~discipline) +
labs(title = "Academic Rank by Gender and Discipline",
x = "Rank",
y = "Frequency",
fill = "Gender")
p
```
Figure 11\.22: Graph with default theme
Let’s make some changes to the theme.
* Change label text from black to navy blue
* Change the panel background color from grey to white
* Add solid grey lines for major y\-axis grid lines
* Add dashed grey lines for minor y\-axis grid lines
* Eliminate x\-axis grid lines
* Change the strip background color to white with a grey border
Using the `?theme` help in ggplot2 gives us
```
p +
theme(text = element_text(color = "navy"),
panel.background = element_rect(fill = "white"),
panel.grid.major.y = element_line(color = "grey"),
panel.grid.minor.y = element_line(color = "grey",
linetype = "dashed"),
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
strip.background = element_rect(fill = "white", color="grey"))
```
Figure 11\.23: Graph with modified theme
Wow, this looks pretty awful, but you get the idea.
#### 11\.8\.1\.1 ggThemeAssist
If you would like to create your own theme using a GUI, take a look at the [`ggThemeAssist`](https://github.com/calligross/ggthemeassist) package. After you install the package, a new menu item will appear under Addins in RStudio.
Highlight the code that creates your graph, then choose the `ggThemeAssist` option from the **Addins** drop\-down menu. You can change many of the features of your theme using point\-and\-click. When you’re done, the `theme` code will be appended to your graph code.
### 11\.8\.2 Pre\-packaged themes
I’m not a very good artist (just look at the last example), so I often look for pre\-packaged themes that can be applied to my graphs. There are many available.
Some come with `ggplot2`. These include *theme\_classic*, *theme\_dark*, *theme\_gray*, *theme\_grey*, *theme\_light*, *theme\_linedraw*, *theme\_minimal*, and *theme\_void*. We’ve used *theme\_minimal* often in this book. Others are available through add\-on packages.
#### 11\.8\.2\.1 ggthemes
The `ggthemes` package comes with the 20 themes listed below.
| Theme | Description |
| --- | --- |
| theme\_base | Theme Base |
| theme\_calc | Theme Calc |
| theme\_economist | ggplot color theme based on the Economist |
| theme\_economist\_white | ggplot color theme based on the Economist |
| theme\_excel | ggplot color theme based on old Excel plots |
| theme\_few | Theme based on Few’s “Practical Rules for Using Color in Charts” |
| theme\_fivethirtyeight | Theme inspired by fivethirtyeight.com plots |
| theme\_foundation | Foundation Theme |
| theme\_gdocs | Theme with Google Docs Chart defaults |
| theme\_hc | Highcharts JS theme |
| theme\_igray | Inverse gray theme |
| theme\_map | Clean theme for maps |
| theme\_pander | A ggplot theme originated from the pander package |
| theme\_par | Theme which takes its values from the current ‘base’ graphics parameter values in ‘par’. |
| theme\_solarized | ggplot color themes based on the Solarized palette |
| theme\_solarized\_2 | ggplot color themes based on the Solarized palette |
| theme\_solid | Theme with nothing other than a background color |
| theme\_stata | Themes based on Stata graph schemes |
| theme\_tufte | Tufte Maximal Data, Minimal Ink Theme |
| theme\_wsj | Wall Street Journal theme |
To demonstrate their use, we’ll first create and save a graph.
```
# create basic plot
library(ggplot2)
p <- ggplot(mpg,
aes(x = displ, y=hwy,
color = class)) +
geom_point(size = 3,
alpha = .5) +
labs(title = "Mileage by engine displacement",
subtitle = "Data from 1999 and 2008",
caption = "Source: EPA (http://fueleconomy.gov)",
x = "Engine displacement (litres)",
y = "Highway miles per gallon",
color = "Car Class")
# display graph
p
```
Figure 11\.24: Default theme
Now let’s apply some themes.
```
# add economist theme
library(ggthemes)
p + theme_economist()
```
Figure 11\.25: Economist theme
```
# add fivethirtyeight theme
p + theme_fivethirtyeight()
```
Figure 11\.26: Five Thirty Eight theme
```
# add wsj theme
p + theme_wsj(base_size=8)
```
Figure 11\.27: Wall Street Journal theme
By default, the font size for the wsj theme is usually too large. Changing the `base_size` option can help.
Each theme also comes with scales for colors and fills. In the next example, both the `few` theme and colors are used.
```
# add few theme
p + theme_few() + scale_color_few()
```
Figure 11\.28: Few theme and colors
Try out different themes and scales to find one that you like.
#### 11\.8\.2\.2 hrbrthemes
The [`hrbrthemes`](https://github.com/hrbrmstr/hrbrthemes) package is focused on typography\-centric themes. The results are charts that tend to have a clean look.
Continuing the example plot from above
```
# add ipsum theme
library(hrbrthemes)
p + theme_ipsum()
```
Figure 11\.29: Ipsum theme
See the hrbrthemes homepage (<https://github.com/hrbrmstr/hrbrthemes>) for additional examples.
#### 11\.8\.2\.3 ggthemr
The [`ggthemr`](https://github.com/cttobin/ggthemr) package offers a wide range of themes (17 as of this printing).
The package is not available on CRAN and must be installed from GitHub.
```
# one time install
install.packages("remotes")
remotes::install_github('cttobin/ggthemr')
```
The functions work a bit differently. Use the `ggthemr("themename")` function to set future graphs to a given theme. Use `ggthemr_reset()` to return future graphs to the ggplot2 default theme.
Current themes include *flat*, *flat dark*, *camoflauge*, *chalk*, *copper*, *dust*, *earth*, *fresh*, *grape*, *grass*, *greyscale*, *light*, *lilac*, *pale*, *sea*, *sky*, and *solarized*.
```
# set graphs to the flat dark theme
library(ggthemr)
ggthemr("flat dark")
p
```
Figure 11\.30: Flat dark theme
```
ggthemr_reset()
```
I would not actually use this theme for this particular graph. It is difficult to distinguish colors. Which green represents compact cars and which represents subcompact cars?
Select a theme that best conveys the graph’s information to your audience.
11\.9 Combining graphs
----------------------
At times, you may want to combine several graphs together into a single image. Doing so can help you describe several relationships at once. The **patchwork** package can be used to combine ggplot2 graphs into a mosaic and save the results as a ggplot2 graph.
First save each graph as a ggplot2 object. Then combine them using `|` to combine graphs horizontally and `/` to combine graphs vertically. You can use parentheses to group graphs.
Here is an example using the Salaries dataset from the **carData** package. The combined plot will display the relationship between sex, salary, experience, and rank.
```
data(Salaries, package = "carData")
library(ggplot2)
library(patchwork)
# boxplot of salary by sex
p1 <- ggplot(Salaries, aes(x = sex, y = salary, fill=sex)) +
geom_boxplot()
# scatterplot of salary by experience and sex
p2 <- ggplot(Salaries,
aes(x = yrs.since.phd, y = salary, color=sex)) +
geom_point()
# barchart of rank and sex
p3 <- ggplot(Salaries, aes(x = rank, fill = sex)) +
geom_bar()
# combine the graphs and tweak the theme and colors
(p1 | p2)/p3 +
plot_annotation(title = "Salaries for college professors") &
theme_minimal() &
scale_fill_viridis_d() &
scale_color_viridis_d()
```
Figure 11\.31: Combining graphs using the patchwork package
The `plot_annotation` function allows you to add a title and subtitle to the entire graph. Note that the `&` operator applies a function to *all* graphs in a plot. If we had used `+ theme_minimal()`, only the bar chart (the last graph) would have been affected.
The patchwork package allows for exact placement and sizing of graphs, and even supports insets (placing one graph within another). See [https://patchwork.data\-imaginist.com](https://patchwork.data-imaginist.com) for details.
11\.1 Axes
----------
The *x*\-axis and *y*\-axis represent numeric, categorical, or date values. You can modify the default scales and labels with the functions below.
### 11\.1\.1 Quantitative axes
A quantitative axis is modified using the `scale_x_continuous` or `scale_y_continuous` function.
Options include
* `breaks` \- a numeric vector of positions
* `limits` \- a numeric vector with the min and max for the scale
```
# customize numerical x and y axes
library(ggplot2)
ggplot(mpg, aes(x=displ, y=hwy)) +
geom_point() +
scale_x_continuous(breaks = seq(1, 7, 1),
limits=c(1, 7)) +
scale_y_continuous(breaks = seq(10, 45, 5),
limits=c(10, 45))
```
Figure 11\.1: Customized quantitative axes
The `seq(from, to, by)` function generates a vector of numbers starting with *from*, ending with *to*, and incremented by *by*. For example
```
seq(1, 8, 2)
```
is equivalent to
```
c(1, 3, 5, 7)
```
#### 11\.1\.1\.1 Numeric formats
The `scales` package provides a number of functions for formatting numeric labels. Some of the most useful are
* `dollar`
* `comma`
* `percent`
Let’s demonstrate these functions with some synthetic data.
```
# create some data
set.seed(1234)
df <- data.frame(xaxis = rnorm(50, 100000, 50000),
yaxis = runif(50, 0, 1),
pointsize = rnorm(50, 1000, 1000))
library(ggplot2)
# plot the axes and legend with formats
ggplot(df, aes(x = xaxis,
y = yaxis,
size=pointsize)) +
geom_point(color = "cornflowerblue",
alpha = .6) +
scale_x_continuous(label = scales::comma) +
scale_y_continuous(label = scales::percent) +
scale_size(range = c(1,10), # point size range
label = scales::dollar)
```
Figure 11\.2: Formatted axes
To format currency values as euros, you can use
`label = scales::dollar_format(prefix = "", suffix = "\u20ac")`.
### 11\.1\.2 Categorical axes
A categorical axis is modified using the `scale_x_discrete` or `scale_y_discrete` function.
Options include
* `limits` \- a character vector (the levels of the quantitative variable in the desired order)
* `labels` \- a character vector of labels (optional labels for these levels)
```
library(ggplot2)
# customize categorical x axis
ggplot(mpg, aes(x = class)) +
geom_bar(fill = "steelblue") +
scale_x_discrete(limits = c("pickup", "suv", "minivan",
"midsize", "compact", "subcompact",
"2seater"),
labels = c("Pickup\nTruck",
"Sport Utility\nVehicle",
"Minivan", "Mid-size", "Compact",
"Subcompact", "2-Seater"))
```
Figure 11\.3: Customized categorical axis
### 11\.1\.3 Date axes
A date axis is modified using the `scale_x_date` or `scale_y_date` function.
Options include
* `date_breaks` \- a string giving the distance between breaks like “2 weeks” or “10 years”
* `date_labels` \- A string giving the formatting specification for the labels
The table below gives the formatting specifications for date values.
| Symbol | Meaning | Example |
| --- | --- | --- |
| %d | day as a number (0\-31\) | 01\-31 |
| %a | abbreviated weekday | Mon |
| %A | unabbreviated weekday | Monday |
| %m | month (00\-12\) | 00\-12 |
| %b | abbreviated month | Jan |
| %B | unabbreviated month | January |
| %y | 2\-digit year | 07 |
| %Y | 4\-digit year | 2007 |
```
library(ggplot2)
# customize date scale on x axis
ggplot(economics, aes(x = date, y = unemploy)) +
geom_line(color="darkgreen") +
scale_x_date(date_breaks = "5 years",
date_labels = "%b-%y")
```
Figure 11\.4: Customized date axis
11\.2 Colors
------------
The default colors in ggplot2 graphs are functional, but often not as visually appealing as they can be. Happily, this is easy to change.
Specific colors can be
* specified for points, lines, bars, areas, and text, or
* mapped to the levels of a variable in the dataset.
### 11\.2\.1 Specifying colors manually
To specify a color for points, lines, or text, use the `color = "colorname"` option in the appropriate geom. To specify a color for bars and areas, use the `fill = "colorname"` option.
Examples:
* `geom_point(color = "blue")`
* `geom_bar(fill = "steelblue")`
Colors can be specified by name or hex code ([https://r\-charts.com/colors/](https://r-charts.com/colors/)).
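For example, a minimal sketch using a hex code with the `mpg` data that ships with `ggplot2` (the particular hex value is arbitrary):
```
# set a point color directly with a hex code (sketch)
library(ggplot2)
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(color = "#3366CC", size = 2)
```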
To assign colors to the levels of a variable, use the `scale_color_manual` and `scale_fill_manual` functions. The former is used to specify the colors for points and lines, while the latter is used for bars and areas.
Here is an example, using the `diamonds` dataset that ships with `ggplot2`. The dataset contains the prices and attributes of 54,000 round cut diamonds.
```
# specify fill color manually
library(ggplot2)
ggplot(diamonds, aes(x = cut, fill = clarity)) +
geom_bar() +
scale_fill_manual(values = c("darkred", "steelblue",
"darkgreen", "gold",
"brown", "purple",
"grey", "khaki4"))
```
Figure 11\.5: Manual color selection
If you are aesthetically challenged like me, an alternative is to use a predefined palette.
### 11\.2\.2 Color palettes
There are *many* predefined color palettes available in R.
#### 11\.2\.2\.1 RColorBrewer
The most popular alternative palettes are probably the [ColorBrewer](http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3) palettes.
Figure 11\.6: RColorBrewer palettes
You can specify these palettes with the `scale_color_brewer` and `scale_fill_brewer` functions.
```
# use a ColorBrewer fill palette
ggplot(diamonds, aes(x = cut, fill = clarity)) +
geom_bar() +
scale_fill_brewer(palette = "Dark2")
```
Figure 11\.7: Using RColorBrewer
Adding `direction = -1` to these functions reverses the order of the colors in a palette.
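For instance, a sketch reversing the Dark2 palette used above:
```
# reverse the order of the ColorBrewer palette (sketch)
library(ggplot2)
ggplot(diamonds, aes(x = cut, fill = clarity)) +
  geom_bar() +
  scale_fill_brewer(palette = "Dark2", direction = -1)
```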
#### 11\.2\.2\.2 Viridis
The [viridis](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) palette is another popular choice.
For continuous scales use
* `scale_fill_viridis_c`
* `scale_color_viridis_c`
For discrete (categorical) scales use
* `scale_fill_viridis_d`
* `scale_color_viridis_d`
```
# Use a viridis fill palette
ggplot(diamonds, aes(x = cut, fill = clarity)) +
geom_bar() +
scale_fill_viridis_d()
```
Figure 11\.8: Using the viridis palette
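The example above uses the discrete version. Here is a sketch of the continuous version, mapping color to a numeric variable in the `diamonds` data (the choice of variables is arbitrary):
```
# use the continuous viridis palette for a numeric variable (sketch)
library(ggplot2)
ggplot(diamonds, aes(x = carat, y = price, color = depth)) +
  geom_point(alpha = .3) +
  scale_color_viridis_c()
```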
#### 11\.2\.2\.3 Other palettes
Other palettes to explore include
| Package | URL |
| --- | --- |
| **dutchmasters** | <https://github.com/EdwinTh/dutchmasters> |
| **ggpomological** | <https://github.com/gadenbuie/ggpomological> |
| **LaCroixColoR** | <https://github.com/johannesbjork/LaCroixColoR> |
| **nord** | <https://github.com/jkaupp/nord> |
| **ochRe** | <https://github.com/ropenscilabs/ochRe> |
| **palettetown** | <https://github.com/timcdlucas/palettetown> |
| **pals** | <https://github.com/kwstat/pals> |
| **rcartocolor** | <https://github.com/Nowosad/rcartocolor> |
| **wesanderson** | <https://github.com/karthik/wesanderson> |
If you want to explore **all** the palette options (or nearly all), take a look at the **paletteer** (<https://github.com/EmilHvitfeldt/paletteer>) package.
To learn more about color specifications, see the *Cookbook for R* page on ggplot2 colors ([http://www.cookbook\-r.com/Graphs/Colors\_(ggplot2\)/](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/)). For advice on selecting colors, see Section [14\.3](Advice.html#ColorChoice).
11\.3 Points \& Lines
---------------------
### 11\.3\.1 Points
For `ggplot2` graphs, the default point is a filled circle. To specify a different shape, use the `shape = #` option in the `geom_point` function. To map shapes to the levels of a categorical variable use the `shape = variablename` option in the `aes` function.
Examples:
* `geom_point(shape = 1)`
* `geom_point(aes(shape = sex))`
Available shapes are given in the figure below.
Figure 11\.9: Point shapes
Shapes 21 through 25 provide for both a fill color and a border color.
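For example, a sketch using shape 21 with separate fill and border colors:
```
# shape 21 takes both a fill (interior) and a color (border) (sketch)
library(ggplot2)
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(shape = 21, size = 3,
             fill = "cornflowerblue",  # interior color
             color = "black")          # border color
```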
### 11\.3\.2 Lines
The default line type is a solid line. To change the linetype, use the `linetype = #` option in the `geom_line` function. To map linetypes to the levels of a categorical variable use the `linetype = variablename` option in the `aes` function.
Examples:
* `geom_line(linetype = 1)`
* `geom_line(aes(linetype = sex))`
Available linetypes are given in the figure below.
Figure 11\.10: Linetypes
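As a sketch of mapping linetype to a categorical variable, using the `economics_long` data that ships with `ggplot2`:
```
# map linetype to the levels of a categorical variable (sketch)
library(ggplot2)
ggplot(economics_long,
       aes(x = date, y = value01, linetype = variable)) +
  geom_line()
```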
11\.4 Fonts
-----------
R does not have great support for fonts, but with a bit of work, you can change the fonts that appear in your graphs. First, you need to install and set up the `extrafont` package.
```
# one time install
install.packages("extrafont")
library(extrafont)
font_import()
# see what fonts are now available
fonts()
```
Apply the new font(s) using the `text` option in the `theme` function.
```
# specify new font
library(extrafont)
ggplot(mpg, aes(x = displ, y=hwy)) +
geom_point() +
labs(title = "Displacement by Highway Mileage",
subtitle = "MPG dataset") +
theme(text = element_text(size = 16, family = "Comic Sans MS"))
```
Figure 11\.11: Alternative fonts
To learn more about customizing fonts, see Andrew Heiss’s blog on **Working with R, Cairo graphics, custom fonts, and ggplot** ([https://www.andrewheiss.com/blog/2017/09/27/working\-with\-r\-cairo\-graphics\-custom\-fonts\-and\-ggplot/\#windows](https://www.andrewheiss.com/blog/2017/09/27/working-with-r-cairo-graphics-custom-fonts-and-ggplot/#windows)).
11\.5 Legends
-------------
In `ggplot2`, legends are automatically created when variables are mapped to color, fill, linetype, shape, size, or alpha.
You have a great deal of control over the look and feel of these legends. Modifications are usually made through the `theme` function and/or the `labs` function. Here are some of the most sought after changes.
### 11\.5\.1 Legend location
The legend can appear anywhere in the graph. By default, it’s placed on the right. You can change the default with
`theme(legend.position = position)`
where
| Position | Location |
| --- | --- |
| “top” | above the plot area |
| “right” | right of the plot area |
| “bottom” | below the plot area |
| “left” | left of the plot area |
| c(*x*, *y*) | within the plot area. The *x* and *y* values must range between 0 and 1\. c(0,0\) represents (left, bottom) and c(1,1\) represents (right, top). |
| “none” | suppress the legend |
For example, to place the legend at the top, use the following code.
```
# place legend on top
ggplot(mpg,
aes(x = displ, y=hwy, color = class)) +
geom_point(size = 4) +
labs(title = "Displacement by Highway Mileage") +
theme_minimal() +
theme(legend.position = "top")
```
Figure 11\.12: Moving the legend to the top
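Similarly, a sketch placing the legend inside the plot area using the `c(x, y)` form from the table above (the exact coordinates are arbitrary):
```
# place the legend inside the plot area (sketch)
library(ggplot2)
ggplot(mpg,
       aes(x = displ, y = hwy, color = class)) +
  geom_point(size = 4) +
  labs(title = "Displacement by Highway Mileage") +
  theme_minimal() +
  theme(legend.position = c(0.85, 0.75))
```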
### 11\.5\.2 Legend title
You can change the legend title through the `labs` function. Use `color`, `fill`, `size`, `shape`, `linetype`, and `alpha` to give new titles to the corresponding legends.
The alignment of the legend title is controlled through the `legend.title.align` option in the `theme` function (0 \= left, 0\.5 \= center, 1 \= right).
```
# change the default legend title
ggplot(mpg,
aes(x = displ, y=hwy, color = class)) +
geom_point(size = 4) +
labs(title = "Displacement by Highway Mileage",
color = "Automobile\nClass") +
theme_minimal() +
theme(legend.title.align=0.5)
```
Figure 11\.13: Changing the legend title
11\.6 Labels
------------
Labels are a key ingredient in rendering a graph understandable. They're added with the `labs` function. Available options are given below.
| option | Use |
| --- | --- |
| title | main title |
| subtitle | subtitle |
| caption | caption (bottom right by default) |
| x | horizontal axis |
| y | vertical axis |
| color | color legend title |
| fill | fill legend title |
| size | size legend title |
| linetype | linetype legend title |
| shape | shape legend title |
| alpha | transparency legend title |
For example
```
# add plot labels
ggplot(mpg,
aes(x = displ, y=hwy,
color = class,
shape = factor(year))) +
geom_point(size = 3,
alpha = .5) +
labs(title = "Mileage by engine displacement",
subtitle = "Data from 1999 and 2008",
caption = "Source: EPA (http://fueleconomy.gov)",
x = "Engine displacement (litres)",
y = "Highway miles per gallon",
color = "Car Class",
shape = "Year") +
theme_minimal()
```
Figure 11\.14: Graph with labels
This is not a great graph \- it is too busy, making the identification of patterns difficult. It would be better to facet the year variable, the class variable, or both (Section [6\.2](Multivariate.html#Faceting)). Trend lines would also be helpful (Section [5\.2\.1\.1](Bivariate.html#BestFit)).
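As a sketch of those suggestions, here is the same plot faceted by year with a single linear trend line per panel (the smoothing choice is an assumption, not the book's prescription):
```
# facet by year and add one linear trend line per panel (sketch)
library(ggplot2)
ggplot(mpg,
       aes(x = displ, y = hwy, color = class)) +
  geom_point(size = 3, alpha = .5) +
  geom_smooth(aes(group = 1), method = "lm",
              se = FALSE, color = "black") +
  facet_wrap(~year) +
  labs(title = "Mileage by engine displacement",
       subtitle = "Data from 1999 and 2008",
       x = "Engine displacement (litres)",
       y = "Highway miles per gallon",
       color = "Car Class") +
  theme_minimal()
```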
11\.7 Annotations
-----------------
Annotations are additional information added to a graph to highlight important points.
### 11\.7\.1 Adding text
There are two primary reasons to add text to a graph.
One is to identify the numeric qualities of a geom. For example, we may want to identify points with labels in a scatterplot, or label the heights of bars in a bar chart.
Another reason is to provide additional information. We may want to add notes about the data, point out outliers, etc.
#### 11\.7\.1\.1 Labeling values
Consider the following scatterplot, based on the car data in the [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.5.0/topics/mtcars) dataset.
```
# basic scatterplot
data(mtcars)
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point()
```
Figure 11\.15: Simple scatterplot
Let’s label each point with the name of the car it represents.
```
# scatterplot with labels
data(mtcars)
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
geom_text(label = row.names(mtcars))
```
Figure 11\.16: Scatterplot with labels
The overlapping labels make this chart difficult to read. The `ggrepel` package can help us here. It nudges text to avoid overlaps.
```
# scatterplot with non-overlapping labels
data(mtcars)
library(ggrepel)
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point() +
geom_text_repel(label = row.names(mtcars),
size=3)
```
Figure 11\.17: Scatterplot with non\-overlapping labels
Much better.
Adding labels to bar charts is covered in the aptly named *labeling bars* section (Section [4\.1\.1\.3](Univariate.html#LabelingBars)).
#### 11\.7\.1\.2 Adding additional information
We can place text anywhere on a graph using the `annotate` function. The format is
```
annotate("text",
x, y,
label = "Some text",
color = "colorname",
size=textsize)
```
where *x* and *y* are the coordinates on which to place the text. The `color` and `size` parameters are optional.
By default, the text will be centered. Use `hjust` and `vjust` to change the alignment.
* `hjust` 0 \= left justified, 0\.5 \= centered, and 1 \= right justified.
* `vjust` 0 \= above, 0\.5 \= centered, and 1 \= below.
Continuing the previous example.
```
# scatterplot with explanatory text
data(mtcars)
library(ggrepel)
txt <- paste("The relationship between car weight",
"and mileage appears to be roughly linear",
sep = "\n")
ggplot(mtcars, aes(x = wt, y = mpg)) +
geom_point(color = "red") +
geom_text_repel(label = row.names(mtcars),
size=3) +
ggplot2::annotate("text",
6, 30,
label=txt,
color = "red",
hjust = 1) +
theme_bw()
```
Figure 11\.18: Scatterplot with arranged labels
See this Stack Overflow post ([https://stackoverflow.com/questions/7263849/what\-do\-hjust\-and\-vjust\-do\-when\-making\-a\-plot\-using\-ggplot](https://stackoverflow.com/questions/7263849/what-do-hjust-and-vjust-do-when-making-a-plot-using-ggplot)) for more details.
### 11\.7\.2 Adding lines
Horizontal and vertical lines can be added using:
* `geom_hline(yintercept = a)`
* `geom_vline(xintercept = b)`
where *a* is a value on the *y*\-axis and *b* is a value on the *x*\-axis, respectively. Other options include `linetype` and `color`.
In the following example, we plot city vs. highway miles and indicate the mean highway miles with a horizontal line and label.
```
# add annotation line and text label
min_cty <- min(mpg$cty)
mean_hwy <- mean(mpg$hwy)
ggplot(mpg,
aes(x = cty, y=hwy, color=drv)) +
geom_point(size = 3) +
geom_hline(yintercept = mean_hwy,
color = "darkred",
linetype = "dashed") +
ggplot2::annotate("text",
min_cty,
mean_hwy + 1,
label = "Mean",
color = "darkred") +
labs(title = "Mileage by drive type",
x = "City miles per gallon",
y = "Highway miles per gallon",
color = "Drive")
```
Figure 11\.19: Graph with line annotation
We could add a vertical line for the mean city miles per gallon as well. In any case, always label your annotation lines in some way. Otherwise the reader will not know what they mean.
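Here is a sketch of that vertical line, continuing the example above (the label position and colors are arbitrary):
```
# add a labeled vertical line for mean city mileage (sketch)
library(ggplot2)
mean_cty <- mean(mpg$cty)
ggplot(mpg,
       aes(x = cty, y = hwy, color = drv)) +
  geom_point(size = 3) +
  geom_vline(xintercept = mean_cty,
             color = "steelblue",
             linetype = "dashed") +
  ggplot2::annotate("text",
                    mean_cty + 1.5, max(mpg$hwy),
                    label = "Mean city",
                    color = "steelblue") +
  labs(title = "Mileage by drive type",
       x = "City miles per gallon",
       y = "Highway miles per gallon",
       color = "Drive")
```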
### 11\.7\.3 Highlighting a single group
Sometimes you want to highlight a single group in your graph. The [`gghighlight`](https://www.rdocumentation.org/packages/gghighlight/versions/0.0.1/topics/gghighlight) function in the `gghighlight` package is designed for this.
Here is an example with a scatterplot. Midsize cars are highlighted.
```
# highlight a set of points
library(ggplot2)
library(gghighlight)
ggplot(mpg, aes(x = cty, y = hwy)) +
geom_point(color = "red",
size=2) +
gghighlight(class == "midsize")
```
Figure 11\.20: Highlighting a group
Below is an example with a bar chart. Again, midsize cars are highlighted.
```
# highlight a single bar
library(gghighlight)
ggplot(mpg, aes(x = class)) +
geom_bar(fill = "red") +
gghighlight(class == "midsize")
```
Figure 11\.21: Highlighting a group
Highlighting is helpful for drawing the reader’s attention to a particular group of observations and their standing with respect to the other observations in the data.
11\.8 Themes
------------
`ggplot2` themes control the appearance of all non\-data related components of a plot. You can change the look and feel of a graph by altering the elements of its theme.
### 11\.8\.1 Altering theme elements
The `theme` function is used to modify individual components of a theme.
Consider the following graph. It shows the number of male and female faculty by rank and discipline at a particular university in 2008\-2009\. The data come from the `Salaries` dataset in the `carData` package.
```
# create graph
data(Salaries, package = "carData")
p <- ggplot(Salaries, aes(x = rank, fill = sex)) +
geom_bar() +
facet_wrap(~discipline) +
labs(title = "Academic Rank by Gender and Discipline",
x = "Rank",
y = "Frequency",
fill = "Gender")
p
```
Figure 11\.22: Graph with default theme
Let’s make some changes to the theme.
* Change label text from black to navy blue
* Change the panel background color from grey to white
* Add solid grey lines for major y\-axis grid lines
* Add dashed grey lines for minor y\-axis grid lines
* Eliminate x\-axis grid lines
* Change the strip background color to white with a grey border
Consulting the `?theme` help in ggplot2 gives us the following.
```
p +
theme(text = element_text(color = "navy"),
panel.background = element_rect(fill = "white"),
panel.grid.major.y = element_line(color = "grey"),
panel.grid.minor.y = element_line(color = "grey",
linetype = "dashed"),
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
strip.background = element_rect(fill = "white", color="grey"))
```
Figure 11\.23: Graph with modified theme
Wow, this looks pretty awful, but you get the idea.
#### 11\.8\.1\.1 ggThemeAssist
If you would like to create your own theme using a GUI, take a look at the [`ggThemeAssist`](https://github.com/calligross/ggthemeassist) package. After you install the package, a new menu item will appear under **Addins** in RStudio.
Highlight the code that creates your graph, then choose the `ggThemeAssist` option from the **Addins** drop\-down menu. You can change many of the features of your theme using point\-and\-click. When you’re done, the `theme` code will be appended to your graph code.
### 11\.8\.2 Pre\-packaged themes
I’m not a very good artist (just look at the last example), so I often look for pre\-packaged themes that can be applied to my graphs. There are many available.
Some come with `ggplot2`. These include *theme\_classic*, *theme\_dark*, *theme\_gray*, *theme\_grey*, *theme\_light*, *theme\_linedraw*, *theme\_minimal*, and *theme\_void*. We’ve used *theme\_minimal* often in this book. Others are available through add\-on packages.
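For example, a quick sketch applying one of the built\-in themes:
```
# apply a built-in ggplot2 theme (sketch)
library(ggplot2)
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point() +
  theme_classic()
```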
#### 11\.8\.2\.1 ggthemes
The `ggthemes` package comes with the 20 themes listed below.
| Theme | Description |
| --- | --- |
| theme\_base | Theme Base |
| theme\_calc | Theme Calc |
| theme\_economist | ggplot color theme based on the Economist |
| theme\_economist\_white | ggplot color theme based on the Economist |
| theme\_excel | ggplot color theme based on old Excel plots |
| theme\_few | Theme based on Few’s “Practical Rules for Using Color in Charts” |
| theme\_fivethirtyeight | Theme inspired by fivethirtyeight.com plots |
| theme\_foundation | Foundation Theme |
| theme\_gdocs | Theme with Google Docs Chart defaults |
| theme\_hc | Highcharts JS theme |
| theme\_igray | Inverse gray theme |
| theme\_map | Clean theme for maps |
| theme\_pander | A ggplot theme originated from the pander package |
| theme\_par | Theme which takes its values from the current ‘base’ graphics parameter values in ‘par’. |
| theme\_solarized | ggplot color themes based on the Solarized palette |
| theme\_solarized\_2 | ggplot color themes based on the Solarized palette |
| theme\_solid | Theme with nothing other than a background color |
| theme\_stata | Themes based on Stata graph schemes |
| theme\_tufte | Tufte Maximal Data, Minimal Ink Theme |
| theme\_wsj | Wall Street Journal theme |
To demonstrate their use, we’ll first create and save a graph.
```
# create basic plot
library(ggplot2)
p <- ggplot(mpg,
aes(x = displ, y=hwy,
color = class)) +
geom_point(size = 3,
alpha = .5) +
labs(title = "Mileage by engine displacement",
subtitle = "Data from 1999 and 2008",
caption = "Source: EPA (http://fueleconomy.gov)",
x = "Engine displacement (litres)",
y = "Highway miles per gallon",
color = "Car Class")
# display graph
p
```
Figure 11\.24: Default theme
Now let’s apply some themes.
```
# add economist theme
library(ggthemes)
p + theme_economist()
```
Figure 11\.25: Economist theme
```
# add fivethirtyeight theme
p + theme_fivethirtyeight()
```
Figure 11\.26: Five Thirty Eight theme
```
# add wsj theme
p + theme_wsj(base_size=8)
```
Figure 11\.27: Wall Street Journal theme
The default font size for the wsj theme is usually too large. Changing the `base_size` option can help.
Each theme also comes with scales for colors and fills. In the next example, both the `few` theme and colors are used.
```
# add few theme
p + theme_few() + scale_color_few()
```
Figure 11\.28: Few theme and colors
Try out different themes and scales to find one that you like.
#### 11\.8\.2\.2 hrbrthemes
The [`hrbrthemes`](https://github.com/hrbrmstr/hrbrthemes) package is focused on typography\-centric themes. The results are charts that tend to have a clean look.
Continuing the example plot from above
```
# add ipsum theme
library(hrbrthemes)
p + theme_ipsum()
```
Figure 11\.29: Ipsum theme
See the hrbrthemes homepage (<https://github.com/hrbrmstr/hrbrthemes>) for additional examples.
#### 11\.8\.2\.3 ggthemr
The [`ggthemr`](https://github.com/cttobin/ggthemr) package offers a wide range of themes (17 as of this printing).
The package is not available on CRAN and must be installed from GitHub.
```
# one time install
install.packages("remotes")
remotes::install_github('cttobin/ggthemr')
```
The functions work a bit differently. Use the `ggthemr("themename")` function to set future graphs to a given theme. Use `ggthemr_reset()` to return future graphs to the ggplot2 default theme.
Current themes include *flat*, *flat dark*, *camouflage*, *chalk*, *copper*, *dust*, *earth*, *fresh*, *grape*, *grass*, *greyscale*, *light*, *lilac*, *pale*, *sea*, *sky*, and *solarized*.
```
# set graphs to the flat dark theme
library(ggthemr)
ggthemr("flat dark")
p
```
Figure 11\.30: Flat dark theme
```
ggthemr_reset()
```
I would not actually use this theme for this particular graph. It is difficult to distinguish colors. Which green represents compact cars and which represents subcompact cars?
Select a theme that best conveys the graph’s information to your audience.
11\.9 Combining graphs
----------------------
At times, you may want to combine several graphs together into a single image. Doing so can help you describe several relationships at once. The **patchwork** package can be used to combine ggplot2 graphs into a mosaic and save the results as a ggplot2 graph.
First save each graph as a ggplot2 object. Then combine them using `|` to combine graphs horizontally and `/` to combine graphs vertically. You can use parentheses to group graphs.
Here is an example using the Salaries dataset from the **carData** package. The combined plot will display the relationship between sex, salary, experience, and rank.
```
data(Salaries, package = "carData")
library(ggplot2)
library(patchwork)
# boxplot of salary by sex
p1 <- ggplot(Salaries, aes(x = sex, y = salary, fill=sex)) +
geom_boxplot()
# scatterplot of salary by experience and sex
p2 <- ggplot(Salaries,
aes(x = yrs.since.phd, y = salary, color=sex)) +
geom_point()
# barchart of rank and sex
p3 <- ggplot(Salaries, aes(x = rank, fill = sex)) +
geom_bar()
# combine the graphs and tweak the theme and colors
(p1 | p2)/p3 +
plot_annotation(title = "Salaries for college professors") &
theme_minimal() &
scale_fill_viridis_d() &
scale_color_viridis_d()
```
Figure 11\.31: Combining graphs using the patchwork package
The `plot_annotation` function allows you to add a title and subtitle to the entire graph. Note that the `&` operator applies a function to *all* graphs in a plot. If we had used `+ theme_minimal()`, only the bar chart (the last graph) would have been affected.
The patchwork package allows for exact placement and sizing of graphs, and even supports insets (placing one graph within another). See [https://patchwork.data\-imaginist.com](https://patchwork.data-imaginist.com) for details.
Chapter 12 Saving Graphs
========================
Graphs can be saved via the RStudio interface or through code.
12\.1 Via menus
---------------
To save a graph using the RStudio menus, go to the **Plots** tab and choose Export.
Figure 12\.1: RStudio image export menu
12\.2 Via code
--------------
Any ggplot2 graph can be saved as an object. Then you can use the [`ggsave`](https://www.rdocumentation.org/packages/ggplot2/versions/1.0.0/topics/ggsave) function to save the graph to disk.
```
# save a graph
library(ggplot2)
p <- ggplot(mtcars,
aes(x = wt , y = mpg)) +
geom_point()
ggsave(p, filename = "mygraph.png")
```
The graph will be saved in the format defined by the file extension (*png* in the example above). Common formats are *pdf*, *jpeg*, *tiff*, *png*, *svg*, and *wmf* (Windows only).
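The `ggsave` function also accepts `width`, `height`, `units`, and `dpi` arguments that control the size and resolution of the saved image. A sketch, assuming the plot object `p` created above (the file name and dimensions are arbitrary):
```
# save the graph as an 8 x 6 inch, 300 dpi png (sketch)
ggsave(p, filename = "mygraph_hires.png",
       width = 8, height = 6, units = "in", dpi = 300)
```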
12\.3 File formats
------------------
Graphs can be saved in several formats. The most popular choices are given below.
| Format | Extension |
| --- | --- |
| Portable Document Format | pdf |
| JPEG | jpeg |
| Tagged Image File Format | tiff |
| Portable Network Graphics | png |
| Scalable Vector Graphics | svg |
| Windows Metafile | wmf |
The *pdf*, *svg*, and *wmf* formats are vector\-based \- they resize without fuzziness or pixelation. The other formats are raster\-based (bitmaps) \- they will pixelate when resized. This is especially noticeable when small images are enlarged.
If you are creating graphs for webpages, the *png* format is recommended.
The *jpeg* and *tiff* formats are usually reserved for photographs.
The *wmf* format is usually recommended for graphs that will appear in Microsoft Word or PowerPoint documents. MS Office does not support *pdf* or *svg* files, and the *wmf* format will rescale well. However, note that *wmf* files will lose any transparency settings that have been set.
If you want to continue editing the graph after saving it, use the *pdf* or *svg* format.
12\.4 External editing
----------------------
Sometimes it’s difficult to get a graph just right programmatically. Most magazines and newspapers (print and electronic) fine\-tune graphs after they have been created. They change the fonts, move labels around, add callouts, change colors, add additional images or logos, and the like.
If you save the graph in *svg* or *pdf* format, you can use a vector graphics editing program to modify it using point and click tools. Two popular vector graphics editors are **Illustrator** and **Inkscape**.
**Inkscape** (<https://inkscape.org>) is an open source application that can be freely downloaded for Mac OS X, Windows, and Linux. Open the graph file in *Inkscape*, edit it to suit your needs, and save it in the desired format.
Figure 12\.2: Inkscape
Chapter 13 Interactive Graphs
=============================
Interactive graphs allow for greater exploration and reader engagement. With the exception of maps (Section [7](Maps.html#Maps)) and 3\-D scatterplots (Section [10\.1](Other.html#Scatter3D)), this book has focused on static graphs \- images that can be placed in papers, posters, slides, and journal articles. Through connections with JavaScript libraries, such as **htmlwidgets for R** (<https://www.htmlwidgets.org>), R can generate interactive graphs that can be explored in RStudio’s viewer window or placed on external web pages.
This chapter will explore several approaches, including **plotly**, **ggiraph**, **rbokeh**, **rCharts**, and **highcharter**. The focus is on simple, straightforward ways to add interactivity to graphs. Be sure to run the code so that you can experience the interactivity.
> The **Shiny** framework offers a comprehensive approach to interactivity in R (<https://www.rstudio.com/products/shiny/>). However, **Shiny** has a higher learning curve and requires access to a **Shiny** server, so it is not considered here. Interested readers are referred to this excellent text ([Sievert 2020](#ref-RN13)).
13\.1 plotly
------------
**Plotly** (<https://plot.ly/>) is both a commercial service and an open\-source product for creating high\-end interactive visualizations. The [**plotly**](https://cran.r-project.org/package=plotly) package allows you to create plotly interactive graphs from within R. In addition, any ggplot2 graph can be turned into a plotly graph.
Using the mpg data that comes with the **ggplot2** package, we’ll create an interactive graph displaying highway mileage vs. engine displacement by car class.
Mousing over a point displays information about that point. Clicking on a legend entry removes that class from the plot; clicking on it again returns it. Popup tools on the upper right of the plot allow you to zoom in and out of the image, pan, select, reset axes, and download the image as a *png* file.
```
# create plotly graph.
library(ggplot2)
library(plotly)
p <- ggplot(mpg, aes(x=displ,
y=hwy,
color=class)) +
geom_point(size=3) +
labs(x = "Engine displacement",
y = "Highway Mileage",
color = "Car Class") +
theme_bw()
ggplotly(p)
```
Figure 13\.1: Plotly graph
By default, the mouse\-over provides a pop\-up tooltip with the values used to create the plot (*displ*, *hwy*, and *class* here). However, you can customize the tooltip. This involves adding label aesthetics (*label1*, *label2*, and so on) to the `aes` function and referencing them in the `tooltip` argument of the `ggplotly` function.
```
# create plotly graph.
library(ggplot2)
library(plotly)
p <- ggplot(mpg, aes(x=displ,
y=hwy,
color=class,
label1 = manufacturer,
label2 = model,
label3 = year)) +
geom_point(size=3) +
labs(x = "Engine displacement",
y = "Highway Mileage",
color = "Car Class") +
theme_bw()
ggplotly(p, tooltip = c("label1", "label2", "label3"))
```
Figure 13\.2: Plotly graph with custom tooltip
The tooltip now displays the car manufacturer, model, and year (see Figure [13\.2](Interactive.html#fig:plotly2)).
You can fully customize the tooltip by creating your own label and including it as a variable in the data frame. Then map it to the `text` aesthetic and pass it to the `tooltip` argument of the `ggplotly` function.
```
# create plotly graph.
library(ggplot2)
library(plotly)
library(dplyr)
mpg <- mpg %>%
mutate(mylabel = paste("This is a", manufacturer, model, "\n",
"released in", year, "."))
p <- ggplot(mpg, aes(x=displ,
y=hwy,
color=class,
text = mylabel)) +
geom_point(size=3) +
labs(x = "Engine displacement",
y = "Highway Mileage",
color = "Car Class") +
theme_bw()
ggplotly(p, tooltip = c("mylabel"))
```
Figure 13\.3: Plotly graph with fully customized tooltip
There are several sources of good information on plotly. See the *plotly R pages* (<https://plot.ly/r/>) and the book *Interactive web\-based data visualization with R, Plotly, and Shiny* ([Sievert 2020](#ref-RN13)). An online version of the book is available at [https://plotly\-book.cpsievert.me/](https://plotly-book.cpsievert.me/).
13\.2 ggiraph
-------------
It is easy to create interactive ggplot2 graphs using the `ggiraph` package. There are three steps:
1. add `_interactive` to the geom names (for example, change *geom\_point* to *geom\_point\_interactive*)
2. add `tooltip`, `data_id`, or both to the `aes` function
3. render the plot with the `girafe` function (note the spelling)
The next example uses the mpg dataset from the **ggplot2** package to create an interactive scatter plot of engine displacement vs. highway mileage.
```
library(ggplot2)
library(ggiraph)
p <- ggplot(mpg, aes(x=displ,
y=hwy,
color=class,
tooltip = manufacturer)) +
geom_point_interactive()
girafe(ggobj = p)
```
Figure 13\.4: Basic interactive ggiraph graph
When you mouse over a point, the manufacturer’s name pops up. The ggiraph package only allows one tooltip, but you can customize it by creating a column in the data containing the desired information. By default, the tooltip is white text on a black background. You can change this with the `options` argument to the `girafe` function.
```
library(ggplot2)
library(ggiraph)
library(patchwork)
library(dplyr)
mpg <- mpg %>%
mutate(tooltip = paste(manufacturer, model, class))
p <- ggplot(mpg, aes(x=displ,
y=hwy,
color=class,
tooltip = tooltip)) +
geom_point_interactive()
girafe(ggobj = p, options = list(opts_tooltip(use_fill = TRUE)))
```
Figure 13\.5: Interactive ggiraph graph with customized tooltip
Section [11\.9](Customizing.html#Patchwork) described how to combine two or more ggplot2 graphs into one overall plot using the **patchwork** package. One of the great strengths of the **ggiraph** package is that it allows you to *link* graphs. When graphs are linked, selecting observations on one graph highlights the same observations on the other linked graphs.
The next example uses the fuel efficiency data from the mtcars dataset. Three plots are created: (1\) a scatterplot of weight vs. mpg, (2\) a scatterplot of rear axle ratio vs. 1/4 mile time, and (3\) a bar chart of the number of cylinders.
Unlike the previous graphs, these three graphs are linked by a unique id (the car names in this case). This is accomplished by adding `data_id = rownames(mtcars)` to the aesthetic. The three plots are then arranged into one graph using **patchwork**, and `code = print(p3)` is passed to the `girafe` function.
In Figure [13\.6](Interactive.html#fig:ggiraph3), clicking on a location in any one of the three plots highlights the car in the other plots. Run the code and try it out!
```
# load packages for linked interactive plots
library(ggplot2)
library(ggiraph)
library(patchwork)
p1 <- ggplot(mtcars, aes(x=wt,
y=mpg,
tooltip = rownames(mtcars),
data_id = rownames(mtcars))) +
geom_point_interactive(size=3, alpha =.6)
p2 <- ggplot(mtcars, aes(x=drat,
y=qsec,
tooltip = rownames(mtcars),
data_id = rownames(mtcars))) +
geom_point_interactive(size = 3, alpha = .6)
p3 <- ggplot(mtcars, aes(x=cyl,
data_id = rownames(mtcars))) +
geom_bar_interactive()
p3 <- (p1 | p2)/p3
girafe(code = print (p3))
```
Figure 13\.6: Linked interactive graphs
Here’s one more example. The gapminder dataset is used to create two bar charts for Asian countries: (1\) life expectancy in 1982 and (2\) life expectancy in 2007\. The plots are linked by `data_id = country`. As you mouse over a bar in one chart, the corresponding bar in the other is highlighted. If you move from the bottom to the top in the left\-hand chart, it becomes clear how life expectancy has changed. Note the jump when you hit Vietnam and Iraq.
```
library(ggplot2)
library(ggiraph)
library(patchwork)
library(dplyr)
data(gapminder, package="gapminder")
# subset Asian countries
asia <- gapminder %>%
filter(continent == "Asia") %>%
select(year, country, lifeExp)
p1 <- ggplot(asia[asia$year == 1982,],
aes(y = reorder(country, lifeExp),
x=lifeExp,
tooltip = lifeExp,
data_id = country)) +
geom_bar_interactive(stat="identity",
fill="steelblue") +
labs(y="", x="1982") +
theme_minimal()
p2 <- ggplot(asia[asia$year == 2007,],
aes(y = reorder(country, lifeExp),
x=lifeExp,
tooltip = lifeExp,
data_id = country)) +
geom_bar_interactive(stat="identity",
fill="steelblue") +
labs(y="", x="2007") +
theme_minimal()
p3 <- (p1 | p2) +
plot_annotation(title = "Life Expectancy in Asia")
girafe(code = print (p3))
```
Figure 13\.7: Linked bar charts
Graphs created with ggiraph are highly customizable. The ggiraph\-book website ([https://www.ardata.fr/ggiraph\-book/](https://www.ardata.fr/ggiraph-book/)) is a great resource for getting started.
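For example, you can control how hovered points are highlighted and how the tooltip is drawn. A minimal sketch, assuming the plot `p` from Figure 13\.5 (the CSS styling values are illustrative):
```
# customize hover highlighting and tooltip appearance
library(ggiraph)
girafe(ggobj = p,
       options = list(
         opts_hover(css = "fill:red;stroke:black;"),  # style for hovered points
         opts_tooltip(opacity = 0.8)                  # semi-transparent tooltip
       ))
```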
13\.3 Other approaches
----------------------
While **Plotly** is the most popular approach for turning static ggplot2 graphs into interactive plots, many other approaches exist. Describing each in detail is beyond the scope of this book. Examples of other approaches are included here in order to give you a taste of what each is like. You can then follow the references to learn more about the ones that interest you.
### 13\.3\.1 rbokeh
[**rbokeh**](http://hafen.github.io/rbokeh) is an interface to the **Bokeh** (<https://bokeh.pydata.org/en/latest/>) graphics library.
We’ll create another graph using the mtcars dataset, showing engine displacement vs. miles per gallon by number of engine cylinders. Mouse over, and try the various controls to the right of the image.
```
# create rbokeh graph
# prepare data
data(mtcars)
mtcars$name <- row.names(mtcars)
mtcars$cyl <- factor(mtcars$cyl)
# graph it
library(rbokeh)
figure() %>%
ly_points(disp, mpg, data=mtcars,
color = cyl, glyph = cyl,
hover = list(name, mpg, wt))
```
Figure 13\.8: Bokeh graph
You can create some remarkable graphs with Bokeh. See the homepage (<http://hafen.github.io/rbokeh/>) for examples.
### 13\.3\.2 rCharts
[**rCharts**](https://ramnathv.github.io/rCharts/) can create a wide range of interactive graphics. In the example below, a bar chart of hair vs. eye color is created. Try mousing over the bars. You can interactively choose between grouped vs. stacked plots and include or exclude cases by eye color by clicking on the legends at the top of the image.
```
# create interactive bar chart
library(rCharts)
hair_eye_male = subset(as.data.frame(HairEyeColor),
Sex == "Male")
n1 <- nPlot(Freq ~ Hair,
group = 'Eye',
data = hair_eye_male,
type = 'multiBarChart'
)
n1$set(width = 600)
n1$show('iframesrc', cdn=TRUE)
```
To learn more, visit the project homepage (<https://github.com/ramnathv/rCharts>).
### 13\.3\.3 highcharter
The [**highcharter**](http://jkunst.com/highcharter/) package provides access to the *Highcharts* (<https://www.highcharts.com/>) JavaScript graphics library. The library is free for non\-commercial use.
Let’s use **highcharter** to create an interactive line chart displaying life expectancy over time for several Asian countries. The data come from the [Gapminder](Datasets.html#Gapminder) dataset. Again, mouse over the lines and try clicking on the legend names.
```
# create interactive line chart
library(highcharter)
# prepare data
data(gapminder, package = "gapminder")
library(dplyr)
asia <- gapminder %>%
filter(continent == "Asia") %>%
select(year, country, lifeExp)
# convert from long to wide format
library(tidyr)
plotdata <- spread(asia, country, lifeExp)
# generate graph
h <- highchart() %>%
hc_xAxis(categories = plotdata$year) %>%
hc_add_series(name = "Afghanistan",
data = plotdata$Afghanistan) %>%
hc_add_series(name = "Bahrain",
data = plotdata$Bahrain) %>%
hc_add_series(name = "Cambodia",
data = plotdata$Cambodia) %>%
hc_add_series(name = "China",
data = plotdata$China) %>%
hc_add_series(name = "India",
data = plotdata$India) %>%
hc_add_series(name = "Iran",
data = plotdata$Iran)
h
```
Figure 13\.9: HighCharts graph
In Figure [13\.9](Interactive.html#fig:highcharts1) I’ve clicked on the Afghanistan point in 1962\. The line is highlighted, the other lines are dimmed, and a pop\-up box shows the values at that point.
Like all of the interactive graphs in this chapter, there are options that allow the graph to be customized.
```
# customize interactive line chart
h <- h %>%
hc_title(text = "Life Expectancy by Country",
margin = 20,
align = "left",
style = list(color = "steelblue")) %>%
hc_subtitle(text = "1952 to 2007",
align = "left",
style = list(color = "#2b908f",
fontWeight = "bold")) %>%
hc_credits(enabled = TRUE, # add credits
text = "Gapminder Data",
href = "http://gapminder.com") %>%
hc_legend(align = "left",
verticalAlign = "top",
layout = "vertical",
x = 0,
y = 100) %>%
hc_tooltip(crosshairs = TRUE,
backgroundColor = "#FCFFC5",
shared = TRUE,
borderWidth = 4) %>%
hc_exporting(enabled = TRUE)
h
```
Figure 13\.10: HighCharts graph with customization
In Figure [13\.10](Interactive.html#fig:highcharts2) I’ve moused over 1982\. All points in that year are highlighted and a pop\-up menu shows the country values for that year.
There is a wealth of interactive plots available through the marriage of R and JavaScript. Choose the approach that works best for you.
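Whichever approach you choose, the resulting graphs are htmlwidgets, so they can be saved as standalone web pages and shared without R. A minimal sketch, assuming the plotly graph `p` from Section 13\.1 (the file name is illustrative):
```
# save an interactive graph as a standalone HTML page
library(htmlwidgets)
widget <- plotly::ggplotly(p)
saveWidget(widget, file = "mygraph.html", selfcontained = TRUE)
```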
Chapter 14 Advice / Best Practices
==================================
This section contains some thoughts on what makes a good data visualization. Most come from books and posts that others have written, but I’ll take responsibility for putting them here.
14\.1 Labeling
--------------
Everything on your graph should be clearly labeled. Typically this will include:
* *title* \- a clear short title letting the reader know what they’re looking at
+ *Relationship between experience and wages by gender*
* *subtitle* \- an optional second (smaller font) title giving additional information
+ *Years 2016\-2018*
* *caption* \- source attribution for the data
+ *source: US Department of Labor \- www.bls.gov/bls/blswage.htm*
* *axis labels* \- clear labels for the *x* and *y* axes
+ short but descriptive
+ include units of measurement
- *Engine displacement (cu. in.)*
- *Survival time (days)*
- *Patient age (years)*
* *legend* \- short informative title and labels
+ *Male* and *Female* \- not 0 and 1 !!
* *lines* and *bars* \- label any trend lines, annotation lines, and error bars
Basically, the reader should be able to understand your graph without having to wade through paragraphs of text. When in doubt, show your data visualization to someone who has not read your article or poster and ask them if anything is unclear.
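Most of these elements can be supplied through the `labs` function in ggplot2\. A minimal sketch using the Fuel economy data (the wording of the labels is illustrative only):
```
# label every element of the graph
library(ggplot2)
ggplot(mpg, aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  labs(title = "Highway mileage falls as engine displacement rises",
       subtitle = "38 popular car models, 1999 and 2008",
       caption = "source: EPA fuel economy data, via the ggplot2 mpg dataset",
       x = "Engine displacement (litres)",
       y = "Highway mileage (mpg)",
       color = "Car class")
```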
14\.2 Signal to noise ratio
---------------------------
In data science, the goal of data visualization is to communicate information. Anything that doesn’t support this goal should be reduced or eliminated.
> **Chart Junk** \- visual elements of charts that aren’t necessary to comprehend the information represented by the chart or that distract from this information. (Wikipedia (<https://en.wikipedia.org/wiki/Chartjunk>))
Consider the following graph. The goal is to compare the calories in bacon to the other four foods. The data come from <http://blog.cheapism.com>. I increased the serving size for bacon from 1 slice to 3 slices (let’s be real, it’s BACON!).
> Disclaimer: I got the idea for this graph from one I saw on the internet years ago, but I can’t remember where. If you know, let me know so that I can give proper credit.
Figure 14\.1: Graph with chart junk
If the goal is to compare the calories in bacon to other breakfast foods, much of this visualization is unnecessary and distracts from the task.
Think of all the things that are superfluous:
* the speckled blue background border
* the blueberries photo image
* the 3\-D effect on the bars
* the legend (it doesn’t add anything, the bars are already labeled)
* the colors of bars (they don’t signify anything)
Here is an alternative.
Figure 14\.2: Graph with chart junk removed
The chart junk has been removed. In addition
* the *x*\-axis label isn’t needed \- these are obviously foods
* the *y*\-axis is given a better label
* the title has been simplified (the word *different* is redundant)
* the bacon bar is the only colored bar \- it makes comparisons easier
* the grid lines have been made lighter (gray rather than black) so they don’t distract
* calorie values have been added to each bar so that the reader doesn’t have to keep referring to the *y*\-axis.
I may have gone a bit far leaving out the *x*\-axis label. It’s a fine line, knowing when to stop simplifying.
In general, you want to reduce chart junk to a minimum. In other words, **more signal, less noise**.
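A sketch of several of these ideas (direct value labels, a single highlighted bar, no legend, lighter grid lines) applied to a simple bar chart of car classes from the Fuel economy data, rather than the calorie data shown in the figures:
```
# a de-cluttered bar chart: direct labels, one highlighted bar, no legend
library(ggplot2)
library(dplyr)
class_counts <- mpg %>%
  count(class) %>%
  mutate(highlight = class == "suv")    # the bar to emphasize
ggplot(class_counts, aes(x = class, y = n, fill = highlight)) +
  geom_col() +
  geom_text(aes(label = n), vjust = -0.3, size = 3) +   # value labels on bars
  scale_fill_manual(values = c("grey70", "steelblue"), guide = "none") +
  labs(title = "SUVs are the most common class in the mpg data",
       x = "", y = "Number of models") +
  theme_minimal() +
  theme(panel.grid.major.x = element_blank())   # drop distracting vertical grid lines
```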
14\.3 Color choice
------------------
Color choice is about more than aesthetics. Choose colors that help convey the information contained in the plot.
The article *How to Choose Colors for Data Visualizations* by Mike Yi ([https://chartio.com/learn/charts/how\-to\-choose\-colors\-data\-visualization](https://chartio.com/learn/charts/how-to-choose-colors-data-visualization)) is a great place to start.
Basically, think about selecting among sequential, diverging, and qualitative color schemes:
* sequential \- for plotting a quantitative variable that goes from low to high
* diverging \- for contrasting the extremes (low, medium, and high) of a quantitative variable
* qualitative \- for distinguishing among the levels of a categorical variable
The article above can help you to choose among these schemes. Additionally, the `RColorBrewer` package (Section [11\.2\.2\.1](Customizing.html#RColorBrewer)) provides palettes categorized in this way. The *YlOrRd* to *Blues* palettes are sequential, *Set3* to *Accent* are qualitative, and *Spectral* to *BrBg* are diverging.
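As a minimal sketch, these palettes can be applied directly in ggplot2; the Fuel economy and Mammal sleep datasets are used purely as examples:
```
library(ggplot2)

# qualitative palette for a categorical variable
ggplot(mpg, aes(x = displ, y = hwy, color = class)) +
  geom_point() +
  scale_color_brewer(palette = "Set2")

# sequential palette for a quantitative variable
ggplot(msleep, aes(x = bodywt, y = sleep_total, color = awake)) +
  geom_point() +
  scale_x_log10() +
  scale_color_distiller(palette = "Blues", direction = 1)
```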
Other things to keep in mind:
* Make sure that text is legible \- avoid dark text on dark backgrounds, light text on light backgrounds, and colors that clash in a discordant fashion (i.e. they hurt to look at!)
* Avoid combinations of red and green \- it can be difficult for a colorblind audience to distinguish these colors
Other helpful resources are Stephen Few’s *Practical Rules for Using Color in Charts* (<http://www.perceptualedge.com/articles/visual_business_intelligence/rules_for_using_color.pdf>) and Maureen Stone’s *Expert Color Choices for Presenting Data* ([https://courses.washington.edu/info424/2007/documents/Stone\-Color%20Choices.pdf](https://courses.washington.edu/info424/2007/documents/Stone-Color%20Choices.pdf)).
14\.4 *y*\-Axis scaling
-----------------------
OK, this is a big one. You can make an effect seem massive or insignificant depending on how you scale a numeric *y*\-axis.
Consider the following example comparing the 9\-month salaries of male and female assistant professors. The data come from the [Academic Salaries](Datasets.html#Salaries) dataset.
```
# load data
data(Salaries, package="carData")
# get means, standard deviations, and
# 95% confidence intervals for
# assistant professor salary by sex
library(dplyr)
df <- Salaries %>%
filter(rank == "AsstProf") %>%
group_by(sex) %>%
summarize(n = n(),
mean = mean(salary),
sd = sd(salary),
se = sd / sqrt(n),
ci = qt(0.975, df = n - 1) * se)
df
```
```
## # A tibble: 2 × 6
## sex n mean sd se ci
## <fct> <int> <dbl> <dbl> <dbl> <dbl>
## 1 Female 11 78050. 9372. 2826. 6296.
## 2 Male 56 81311. 7901. 1056. 2116.
```
```
# create and save the plot
library(ggplot2)
p <- ggplot(df,
aes(x = sex, y = mean, group=1)) +
geom_point(size = 4) +
geom_line() +
labs(title = "Mean salary differences by gender",
subtitle = "9-mo academic salary in 2007-2008",
caption = paste("source: Fox J. and Weisberg, S. (2011)",
"An R Companion to Applied Regression,",
"Second Edition Sage"),
x = "Gender",
y = "Salary") +
scale_y_continuous(labels = scales::dollar)
```
First, let’s plot this with a *y*\-axis going from 77,000 to 82,000\.
```
# plot in a narrow range of y
p + scale_y_continuous(limits = c(77000, 82000), labels = scales::dollar)
```
Figure 14\.3: Plot with limited range of Y
There appears to be a very large gender difference.
Next, let’s plot the same data with the *y*\-axis going from 0 to 125,000\.
```
# plot in a wide range of y
p + scale_y_continuous(limits = c(0, 125000), labels = scales::dollar)
```
Figure 14\.4: Plot with wide range of Y
There doesn’t appear to be any gender difference!
The goal of ethical data visualization is to represent findings with as little distortion as possible. This means choosing an appropriate range for the *y*\-axis. Bar charts should almost always start at y \= 0\. For other charts, the appropriate limits really depend on subject\-matter knowledge of the expected range of values.
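One related detail: in ggplot2, `scale_y_continuous(limits = ...)` drops observations that fall outside the limits before any statistics are computed, while `coord_cartesian(ylim = ...)` simply zooms the display without discarding data. When narrowing an axis for presentation, zooming is usually the safer choice. A sketch using the plot `p` from above:
```
# zoom the y-axis without removing any data
p + coord_cartesian(ylim = c(77000, 82000))
```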
We can also improve the graph by adding in an indicator of the uncertainty (see the section on [Mean/SE plots](Bivariate.html#MeanSEM)).
```
# plot with confidence limits
p + geom_errorbar(aes(ymin = mean - ci,
ymax = mean + ci),
width = .1) +
ggplot2::annotate("text",
label = "I-bars are 95% \nconfidence intervals",
x=2,
y=73500,
fontface = "italic",
size = 3)
```
Figure 14\.5: Plot with error bars
The difference doesn’t appear to exceed chance variation.
14\.5 Attribution
-----------------
Unless it’s your data, each graphic should come with an attribution \- a note directing the reader to the source of the data. This will usually appear in the caption for the graph.
14\.6 Going further
-------------------
If you would like to learn more about `ggplot2` there are several good sources, including
* the `ggplot2` homepage ([https://ggplot2\.tidyverse.org](https://ggplot2.tidyverse.org))
* *ggplot2: Elegant graphics for data analysis (2nd ed.)* ([Wickham 2016](#ref-RN16)). A draft of the third edition is available at [https://ggplot2\-book.org](https://ggplot2-book.org).
* chapter 3 in *R for data science* ([Wickham and Grolemund 2017](#ref-RN9)). An online version is available at [https://r4ds.had.co.nz/data\-visualisation.html](https://r4ds.had.co.nz/data-visualisation.html).
* the `ggplot2` cheatsheet (<https://posit.co/resources/cheatsheets/>)
If you would like to learn more about data visualization in general, here are some useful resources:
* Scott Berinato’s Harvard Business Review article *Visualizations that really work* ([https://hbr.org/2016/06/visualizations\-that\-really\-work](https://hbr.org/2016/06/visualizations-that-really-work))
* *Wall Street Journal’s guide to information graphics: The dos and don’ts of presenting data, facts and figures* ([Wong 2010](#ref-RN19))
* *A practical guide to graphics reporting : Information graphics for print, web \& broadcast* ([George\-Palilonis 2017](#ref-RN20))
* *Beautiful data: The stories behind elegant data solutions* ([Hammerbacher and Jeff 2009](#ref-RN18))
* *The truthful art: Data, charts, and maps for communication* ([Cairo 2016](#ref-RN21))
* the *Information is beautiful* website (<https://informationisbeautiful.net>)
The best graphs are rarely created on the first attempt. Experiment until you have a visualization that clarifies the data and helps communicate a meaningful story. And have fun!
A Datasets
==========
The appendix describes the datasets used in this book.
A.1 Academic salaries
---------------------
The [Salaries for Professors](https://www.rdocumentation.org/packages/carData/versions/3.0-1/topics/Salaries) dataset comes from the `carData` package. It describes the 9 month academic salaries of 397 college professors at a single institution in 2008\-2009\. The data were collected as part of the administration’s monitoring of gender differences in salary.
The dataset can be accessed using
```
data(Salaries, package="carData")
```
It is also provided in other formats, so that you can practice [importing data](DataPrep.html#importing).
| Format | File |
| --- | --- |
| Comma delimited text | <Salaries.csv> |
| Tab delimited text | <Salaries.txt> |
| Excel spreadsheet | <Salaries.xlsx> |
| SAS file | <Salaries.sas7bdat> |
| Stata file | <Salaries.dta> |
| SPSS file | <Salaries.sav> |
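For example, the spreadsheet and statistical\-package versions can be imported with the **readr**, **readxl**, and **haven** packages. A minimal sketch, assuming the files have been downloaded to your working directory:
```
library(readr)
library(readxl)
library(haven)
Salaries_csv   <- read_csv("Salaries.csv")      # comma delimited text
Salaries_xlsx  <- read_excel("Salaries.xlsx")   # Excel spreadsheet
Salaries_spss  <- read_sav("Salaries.sav")      # SPSS file
Salaries_stata <- read_dta("Salaries.dta")      # Stata file
```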
A.2 Starwars
------------
The [starwars](https://dplyr.tidyverse.org/reference/starwars.html) dataset comes from the **dplyr** package. It describes 13 characteristics of 87 characters from the Starwars universe. The data are extracted from the [Star Wars API](http://swapi.co).
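Since the data ship with the **dplyr** package, they can be loaded in the same way as the other package datasets (a sketch):
```
data(starwars, package="dplyr")
```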
A.3 Mammal sleep
----------------
The [msleep](http://ggplot2.tidyverse.org/reference/msleep.html) dataset comes from the **ggplot2** package. It is an updated and expanded version of a dataset by Savage and West, describing the sleeping characteristics of 83 mammals.
The dataset can be accessed using
```
data(msleep, package="ggplot2")
```
A.4 Medical insurance costs
---------------------------
The [insurance](https://github.com/dataspelunking/MLwR) dataset is described in the book **Machine Learning with R** by Brett Lantz. A cleaned version of the dataset is also available on [Kaggle](https://www.kaggle.com/datasets/mirichoi0218/insurance). The dataset describes medical information and costs billed by health insurance companies in 2013, as compiled by the United States Census Bureau. Variables include age, sex, body mass index, number of children covered by health insurance, smoker status, US region, and individual medical costs billed by health insurance for 1338 individuals.
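This dataset is not distributed in an R package, so it must be downloaded first. A sketch, assuming the Kaggle file has been saved as *insurance.csv* in your working directory:
```
library(readr)
insurance <- read_csv("insurance.csv")
```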
A.5 Marriage records
--------------------
The [Marriage](https://rdrr.io/cran/mosaicData/man/Marriage.html) dataset comes from the **mosaicData** package. It contains the marriage records of 98 individuals collected from a probate court in Mobile County, Alabama.
The dataset can be accessed using
```
data(Marriage, package="mosaicData")
```
A.6 Fuel economy data
---------------------
The [mpg](https://ggplot2.tidyverse.org/reference/mpg.html) dataset from the **ggplot2** package, contains fuel economy data for 38 popular models of car, for the years 1999 and 2008\.
The dataset can be accessed using
```
data(mpg, package="ggplot2")
```
A.7 Literacy Rates
------------------
This dataset provides the literacy rates (percent of the population that can both read and write) for each US State in 2023\. The data were obtained from the World Population Review ([https://worldpopulationreview.com/state\-rankings/us\-literacy\-rates\-by\-state](https://worldpopulationreview.com/state-rankings/us-literacy-rates-by-state)).
The dataset can be accessed using
```
library(readr)
litRates <- read_csv("USLitRates.csv")
```
A.8 Gapminder data
------------------
The [gapminder](https://www.rdocumentation.org/packages/gapminder/versions/0.3.0/topics/gapminder) dataset from the **gapminder** package, contains longitudinal data (1952\-2007\) on life expectancy, GDP per capita, and population for 142 countries.
The dataset can be accessed using
```
data(gapminder, package="gapminder")
```
A.9 Current Population Survey (1985\)
-------------------------------------
The [CPS85](https://www.rdocumentation.org/packages/mosaicData/versions/0.16.0/topics/CPS85) dataset from the **mosaicData** package, contains 1985 data on wages and other characteristics of workers.
The dataset can be accessed using
```
data(CPS85, package="mosaicData")
```
A.10 Houston crime data
-----------------------
The [crime](https://www.rdocumentation.org/packages/ggmap/versions/2.6.1/topics/crime) dataset from the **ggmap** package, contains the time, date, and location of six types of crimes in Houston, Texas between January 2010 and August 2010\.
The dataset can be accessed using
```
data(crime, package="ggmap")
```
A.11 Hispanic and Latino Population
-----------------------------------
The Hispanic and Latino Population data is a raw tab delimited text file containing the percentage of Hispanic and Latinos by US state from the 2010 Census. The actual dataset was obtained from Wikipedia (<https://en.wikipedia.org/wiki/List_of_U.S._states_by_Hispanic_and_Latino_population>).
The data can be accessed using
```
library(readr)
text <- read_csv("hisplat.csv")
```
A.12 US economic timeseries
---------------------------
The [economics](https://ggplot2.tidyverse.org/reference/economics.html) dataset from the **ggplot2** package, contains the monthly economic data gathered from Jan 1967 to Jan 2015\.
The dataset can be accessed using
```
data(economics, package="ggplot2")
```
A.13 US population by age and year
----------------------------------
The [uspopage](https://www.rdocumentation.org/packages/gcookbook/versions/1.0/topics/uspopage) dataset describes the age distribution of the US population from 1900 to 2002\.
The dataset can be accessed using
```
data(uspopage, package="gcookbook")
```
A.14 Saratoga housing data
--------------------------
The [Saratoga housing](https://www.rdocumentation.org/packages/mosaicData/versions/0.17.0/topics/SaratogaHouses) dataset contains information on 1,728 houses in Saratoga Country, NY, USA in 2006\. Variables include price (in thousands of US dollars) and 15 property characteristics (lotsize, living area, age, number of bathrooms, etc.)
The dataset can be accessed using
```
data(SaratogaHouses, package="mosaicData")
```
A.15 NCCTG lung cancer data
---------------------------
The [lung](https://stat.ethz.ch/R-manual/R-devel/library/survival/html/lung.html) dataset describes the survival time of 228 patients with advanced lung cancer from the North Central Cancer Treatment Group.
The dataset can be accessed using
```
data(lung, package="survival")
```
A.16 Titanic data
-----------------
The [Titanic](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/Titanic.html) dataset provides information on the fate of Titanic passengers, based on class, sex, and age. The dataset comes in table form with base R. It is provided [here](titanic.csv) as a data frame.
The dataset can be accessed using
```
library(readr)
titanic <- read_csv("titanic.csv")
```
A.17 JFK Cuban Missile speech
----------------------------
The John F. Kennedy Address is a [raw text file](JFKspeech.txt) containing the president’s October 22, 1962 speech on the Cuban Missile Crisis. The text was obtained from the [JFK Presidential Library and Museum](https://www.jfklibrary.org/JFK/Historic-Speeches.aspx).
The text can be accessed using
```
library(readr)
text <- read_csv("JFKspeech.txt")
```
A.18 UK Energy forecast data
----------------------------
The UK energy forecast dataset contains forecasts for energy production and consumption in 2050\. The data are in an [RData file](Energy.RData) that contains two data frames.
* The `node` data frame contains the names of the nodes (production and consumption types).
* The `links` data frame contains the *source* (originating node), *target* (target node), and *value* (flow amount between the nodes).
The data come from Mike Bostock’s [Sankey Diagrams page](https://bost.ocks.org/mike/sankey/) and the `networkD3` [homepage](https://christophergandrud.github.io/networkD3/) and can be accessed with the statement
```
load("Energy.RData")
```
A.1 Academic salaries
---------------------
The [Salaries for Professors](https://www.rdocumentation.org/packages/carData/versions/3.0-1/topics/Salaries) dataset comes from the `carData` package. It describes the 9 month academic salaries of 397 college professors at a single institution in 2008\-2009\. The data were collected as part of the administration’s monitoring of gender differences in salary.
The dataset can be accessed using
```
data(Salaries, package="carData")
```
It is also provided in other formats, so that you can practice [importing data](DataPrep.html#importing).
| Format | File |
| --- | --- |
| Comma delimited text | <Salaries.csv> |
| Tab delimited text | <Salaries.txt> |
| Excel spreadsheet | <Salaries.xlsx> |
| SAS file | <Salaries.sas7bdat> |
| Stata file | <Salaries.dta> |
| SPSS file | <Salaries.sav> |
A.2 Starwars
------------
The [starwars](https://dplyr.tidyverse.org/reference/starwars.html) dataset comes from the **dplyr** package. It describes 13 characteristics of 87 characters from the Starwars universe. The data are extracted from the [Star Wars API](http://swapi.co).
A.3 Mammal sleep
----------------
The [msleep](http://ggplot2.tidyverse.org/reference/msleep.html) dataset comes from the **ggplot2** package. It is an updated and expanded version of a dataset by Save and West, describing the sleeping characteristics of 83 mammals.
The dataset can be accessed using
```
data(msleep, package="ggplot2")
```
A.4 Medical insurance costs
---------------------------
The [insurance](https://github.com/dataspelunking/MLwR) dataset is described in the book **Machine Learning with R** by Brett Lantz. A cleaned version of the dataset is also available on [Kaggle](https://www.kaggle.com/datasets/mirichoi0218/insurance). The dataset describes medical information and costs billed by health insurance companies in 2013, as compiled by the United States Census Bureau. Variables include age, sex, body mass index, number of children covered by health insurance, smoker status, US region, and individual medical costs billed by health insurance for 1338 individuals.
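No packaged copy is referenced here; a minimal sketch, assuming you have downloaded the Kaggle CSV and saved it locally under the hypothetical name `insurance.csv`:
```
library(readr)
insurance <- read_csv("insurance.csv") # hypothetical local copy of the Kaggle file
```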
A.5 Marriage records
--------------------
The [Marriage](https://rdrr.io/cran/mosaicData/man/Marriage.html) dataset comes from the **mosaicData** package. It contains the marriage records of 98 individuals collected from a probate court in Mobile County, Alabama.
The dataset can be accessed using
```
data(Marriage, package="mosaicData")
```
A.6 Fuel economy data
---------------------
The [mpg](https://ggplot2.tidyverse.org/reference/mpg.html) dataset, from the **ggplot2** package, contains fuel economy data for 38 popular car models for the years 1999 and 2008.
The dataset can be accessed using
```
data(mpg, package="ggplot2")
```
A.7 Literacy Rates
------------------
This dataset provides the literacy rates (percent of the population that can both read and write) for each US state in 2023. The data were obtained from the World Population Review (<https://worldpopulationreview.com/state-rankings/us-literacy-rates-by-state>).
The dataset can be accessed using
```
library(readr)
litRates <- read_csv("USLitRates.csv")
```
A.8 Gapminder data
------------------
The [gapminder](https://www.rdocumentation.org/packages/gapminder/versions/0.3.0/topics/gapminder) dataset, from the **gapminder** package, contains longitudinal data (1952-2007) on life expectancy, GDP per capita, and population for 142 countries.
The dataset can be accessed using
```
data(gapminder, package="gapminder")
```
A.9 Current Population Survey (1985\)
-------------------------------------
The [CPS85](https://www.rdocumentation.org/packages/mosaicData/versions/0.16.0/topics/CPS85) dataset, from the **mosaicData** package, contains 1985 data on wages and other characteristics of workers.
The dataset can be accessed using
```
data(CPS85, package="mosaicData")
```
A.10 Houston crime data
-----------------------
The [crime](https://www.rdocumentation.org/packages/ggmap/versions/2.6.1/topics/crime) dataset, from the **ggmap** package, contains the time, date, and location of six types of crimes in Houston, Texas between January 2010 and August 2010.
The dataset can be accessed using
```
data(crime, package="ggmap")
```
A.11 Hispanic and Latino Population
-----------------------------------
The Hispanic and Latino Population data is a raw delimited text file (`hisplat.csv`) containing the percentage of Hispanic and Latino residents in each US state from the 2010 Census. The dataset was obtained from Wikipedia (<https://en.wikipedia.org/wiki/List_of_U.S._states_by_Hispanic_and_Latino_population>).
The data can be accessed using
```
library(readr)
hisplat <- read_csv("hisplat.csv") # use read_tsv() instead if the file turns out to be tab delimited
```
A.12 US economic timeseries
---------------------------
The [economics](https://ggplot2.tidyverse.org/reference/economics.html) dataset, from the **ggplot2** package, contains monthly US economic data gathered from January 1967 to January 2015.
The dataset can be accessed using
```
data(economics, package="ggplot2")
```
A.13 US population by age and year
----------------------------------
The [uspopage](https://www.rdocumentation.org/packages/gcookbook/versions/1.0/topics/uspopage) dataset describes the age distribution of the US population from 1900 to 2002\.
The dataset can be accessed using
```
data(uspopage, package="gcookbook")
```
A.14 Saratoga housing data
--------------------------
The [Saratoga housing](https://www.rdocumentation.org/packages/mosaicData/versions/0.17.0/topics/SaratogaHouses) dataset contains information on 1,728 houses in Saratoga County, NY, USA in 2006. Variables include price (in thousands of US dollars) and 15 property characteristics (lot size, living area, age, number of bathrooms, etc.).
The dataset can be accessed using
```
data(SaratogaHouses, package="mosaicData")
```
A.15 NCCTG lung cancer data
---------------------------
The [lung](https://stat.ethz.ch/R-manual/R-devel/library/survival/html/lung.html) dataset describes the survival time of 228 patients with advanced lung cancer from the North Central Cancer Treatment Group.
The dataset can be accessed using
```
data(lung, package="survival")
```
A.16 Titanic data
-----------------
The [Titanic](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/Titanic.html) dataset provides information on the fate of Titanic passengers, based on class, sex, and age. The dataset comes in table form with base R. It is provided [here](titanic.csv) as a data frame.
The dataset can be accessed using
```
library(readr)
titanic <- read_csv("titanic.csv")
```
A.17 JFK Cuban Missile speech
----------------------------
The John F. Kennedy Address is a [raw text file](JFKspeech.txt) containing the president’s October 22, 1962 speech on the Cuban Missile Crisis. The text was obtained from the [JFK Presidential Library and Museum](https://www.jfklibrary.org/JFK/Historic-Speeches.aspx).
The text can be accessed using
```
library(readr)
text <- read_lines("JFKspeech.txt") # read the speech as a character vector, one element per line
```
A.18 UK Energy forecast data
----------------------------
The UK energy forecast dataset contains forecasts of energy production and consumption in 2050. The data are in an [RData file](Energy.RData) that contains two data frames.
* The `node` data frame contains the names of the nodes (production and consumption types).
* The `links` data frame contains the *source* (originating node), *target* (target node), and *value* (flow amount between the nodes).
The data come from Mike Bostock’s [Sankey Diagrams page](https://bost.ocks.org/mike/sankey/) and the `networkD3` [homepage](https://christophergandrud.github.io/networkD3/) and can be accessed with the statement
```
load("Energy.RData")
```
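Once loaded, the two data frames can be passed to a Sankey diagram. A hedged sketch with the **networkD3** package follows; the *source*, *target*, and *value* columns are the ones described above, while `NodeID = "name"` is an assumption about the column name in the `node` data frame.
```
library(networkD3)

load("Energy.RData")
sankeyNetwork(Links = links, Nodes = node,
              Source = "source", Target = "target", Value = "value",
              NodeID = "name")   # "name" is assumed to be the node-label column
```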
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/about-the-author.html |
B About the Author
==================
| Data Visualization |
rkabacoff.github.io | https://rkabacoff.github.io/datavis/QAC.html |
C About the QAC
===============
The QAC coordinates support for quantitative analysis across the curriculum, and provides educational and research support for both students and faculty.
| Data Visualization |
smithjd.github.io | https://smithjd.github.io/sql-pet/index.html |
Chapter 1 Introduction
======================
> This chapter introduces:
>
>
> * The motivation for this book and the strategies we have adopted
> * Our approach to exploring issues “behind the enterprise firewall” using Docker to demonstrate access to a service like PostgreSQL from R
> * Our team and how this project came about
1\.1 Using R to query a DBMS in your organization
-------------------------------------------------
Many R users (or *useRs*) live a dual life: they take part in the vibrant open-source R community where R is created, improved, discussed, and taught, and then they go to work in a secured, complex, closed organizational environment where they may be on their own. Here is [a request on the RStudio Community site](https://community.rstudio.com/t/moving-from-rjdbc-to-odbc/22419) for help that has been lightly edited to emphasize the generality that we see:
> I’m trying to migrate some inherited scripts that \[…] to connect to a \[…] database to \[…] instead. I’ve reviewed the <https://db.rstudio.com> docs and tried a number of configurations but haven’t been able to connect. *I’m in uncharted territory within my org, so haven’t been able to get much help internally.*
This book will help you create a hybrid environment on your machine that can mimic some of the uncharted territory in your organization. It goes far beyond the basic connection issues and covers issues that you face when you are finding your way around or writing queries to your organization’s databases, not just when maintaining inherited scripts.
* **Technology hurdles**. The interfaces (passwords, packages, etc.) and gaps between R and a back end database are hidden from public view as a matter of security, so pinpointing exactly where a problem is can be difficult. A **simulated** environment such as we offer here can be an important learning resource.
* **Scale issues**. We see at least two types of scale issues. Handling large volumes of data, where performance must be a consideration, requires a basic understanding of what’s happening in “the back end” (which is necessarily hidden from view); mastering techniques for drawing samples or small batches of data is therefore essential. In addition to their size, your organization’s databases will often have structural characteristics that are complex and obscure. Data documentation is often incomplete and emphasizes operational characteristics rather than analytic opportunities. A careful useR often needs to confirm the documentation on the fly and de-normalize data carefully.
* **Use cases**. R users frequently need to make sense of an organization’s complex data structures and coding schemes to address incompletely formed questions, so informal exploratory data analysis has to be intuitive and fast. The technology details should not get in the way. Sharing and discussing exploratory and diagnostic retrieval techniques is best done in public, but is constrained by organizational requirements.
We have found that PostgreSQL in a Docker container solves many of the foregoing problems.
1\.2 Docker as a tool for UseRs
-------------------------------
Noam Ross’s “[Docker for the UseR](https://nyhackr.blob.core.windows.net/presentations/Docker-for-the-UseR_Noam-Ross.pdf)” (Ross [2018](#ref-Ross2018a)[a](#ref-Ross2018a)) suggests that there are four distinct Docker use\-cases for useRs.
1. Make a fixed working environment for reproducible analysis
2. Access a service outside of R **(e.g., PostgreSQL)**
3. Create an R based service (e.g., with `plumber`)
4. Send our compute jobs to the cloud with minimal reconfiguration or revision
This book explores \#2 because it allows us to work on the database access issues described above and to practice on an industrial\-scale DBMS.
* Docker is a comparatively easy way to simulate the relationship between an R/RStudio session and a database – all on your machine (provided you have Docker installed and running).
* Running PostgreSQL in a Docker container avoids OS or system dependencies or conflicts that cause confusion and limit reproducibility.
* A Docker environment consumes relatively few resources. Our sandbox does much less: it includes only PostgreSQL and sample data, so it takes up about 5% of the space taken up by the Vagrant environment that inspired this project. (Makubuya [2018](#ref-Makubuya2018))
* A simple Docker container such as the one used in our sandbox is easy to use and could be extended for other uses.
* Docker is a widely used technology for deploying applications in the cloud, so for many useRs it’s worth mastering.
1\.3 Alternatives to Docker
---------------------------
We have found Docker to be a great tool for simulating the complexities of an enterprise environment. However, installing Docker can be challenging, especially for Windows users. Therefore the code in this book depends on PostgreSQL (Group [2019](#ref-postgresql2019)) in a Docker container, but it can all be readily adapted to either SQLite (Consortium [2019](#ref-sqlite2019)), PostgreSQL running natively on your computer, or even PostgreSQL running in the cloud. The technical details of these alternatives are all in separate chapters.
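In practice, the main thing that changes across these alternatives is the `DBI::dbConnect()` call. A hedged sketch follows; the host, port, credentials, and SQLite file name are placeholders, and **RSQLite** is an extra package not otherwise used in this book.
```
library(DBI)

# PostgreSQL -- in Docker, native, or in the cloud; only host/port/credentials change
con_pg <- dbConnect(RPostgres::Postgres(),
                    host = "localhost", port = 5432,
                    user = "postgres", password = "postgres",
                    dbname = "adventureworks")

# SQLite -- a single local file, no server required
con_lite <- dbConnect(RSQLite::SQLite(), "adventureworks.sqlite")
```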
1\.4 Packages used in this book
-------------------------------
The following packages are used in this book:
* bookdown
* DBI
* dbplyr
* devtools
* DiagrammeR
* downloader
* glue
* here
* knitr
* RPostgres
* skimr
* sqlpetr (installs with: `remotes::install_github("smithjd/sqlpetr", force = TRUE, quiet = TRUE, build = TRUE, build_opts = "")`)
* tidyverse
Note that when you install `sqlpetr`, it will install all the other packages you need as dependencies.
1\.5 Who are we?
----------------
We have been collaborating on this book since the Summer of 2018, each of us chipping into the project as time permits:
* Ian Franz \- [@ianfrantz](https://github.com/ianfrantz)
* Jim Tyhurst \- [@jimtyhurst](https://github.com/jimtyhurst)
* John David Smith \- [@smithjd](https://github.com/smithjd)
* M. Edward (Ed) Borasky \- [@znmeb](https://github.com/znmeb)
* Maryanne Thygesen [@maryannet](https://github.com/maryannet)
* Scott Came \- [@scottcame](https://github.com/scottcame)
* Sophie Yang \- [@SophieMYang](https://github.com/SophieMYang)
1\.6 How did this project come about?
-------------------------------------
We trace this book back to the [June 2, 2018 Cascadia R Conf](https://cascadiarconf.com/) where Aaron Makubuya gave [a presentation using Vagrant hosting](https://github.com/Cascadia-R/Using_R_With_Databases) (Makubuya [2018](#ref-Makubuya2018)). After that, [John Smith](https://github.com/smithjd), [Ian Franz](https://github.com/ianfrantz), and [Sophie Yang](https://github.com/SophieMYang) had discussions after the monthly [Data Discussion Meetups](https://www.meetup.com/Portland-Data-Science-Group/events/fxvhbnywmbgb/) about the difficulties around setting up [Vagrant](https://www.vagrantup.com/) (a virtual environment), connecting to an enterprise database, and having a realistic **public** environment to demo or practice the issues that come up behind corporate firewalls. [Scott Came’s](https://github.com/scottcame) tutorial on [R and Docker](http://www.cascadia-analytics.com/2018/07/21/docker-r-p1.html) (Came [2018](#ref-Came2018)) (an alternative to Vagrant) at the 2018 UseR Conference in Melbourne was provocative, and it turned out he lived nearby. We re-connected with [M. Edward (Ed) Borasky](https://github.com/znmeb), who had done extensive development for a [Hack Oregon data science containerization project](https://github.com/hackoregon/data-science-pet-containers) (Borasky [2018](#ref-Borasky2018)).
1\.7 Navigation
---------------
If this is the first `bookdown` (Xie [2016](#ref-Xie2016)) book you’ve read, here’s how to navigate the website.
1. The controls on the upper left. There are four:
* A “hamburger” menu: this toggles the table of contents on the left side of the page on or off.
* A magnifying glass: this toggles a search box on or off.
* A letter “A”: this lets you pick how you want the site to display. You have your choice of small or large text, a serif or sans\-serif font, and a white, sepia or night theme.
* A pencil: this is the “Edit” button. This will take you to a GitHub edit dialog for the chapter you’re reading. If you’re a committer to the repository, you’ll be able to edit the source directly.
If not, GitHub will fork a copy of the repository to your own account and you’ll be able to edit that version. Then you can make a pull request.
2. The share buttons in the upper right hand corner. There’s one for Twitter, one for Facebook, and one that gives a menu of options, including LinkedIn.
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-how-to-use-this-book.html |
Chapter 2 How to use this book
==============================
> This chapter explains:
>
>
> * Getting the code used in this book
> * How you can contribute to the book project
This book is full of examples that you can replicate on your computer.
2\.1 Retrieve the code from GitHub
----------------------------------
The code to generate the book and the exercises it contains can be downloaded from [this repo](https://github.com/smithjd/sql-pet).
2\.2 Read along, experiment as you go
-------------------------------------
We have never been sure whether we’re writing an expository book or a massive tutorial. You may use it either way. The best way to learn the material we cover is to *experiment*.
After the introductory chapters and the chapter that [creates the persistent database](chapter-setup-adventureworks-db.html#chapter_setup-adventureworks-db), you can jump around and each chapter stands on its own.
2\.3 Participating
------------------
### 2\.3\.1 Browsing the book
If you just want to read the book and copy / paste code into your working environment, simply browse to [https://smithjd.github.io/sql\-pet](https://smithjd.github.io/sql-pet). If you get stuck, or find things aren’t working, open an issue at [https://github.com/smithjd/sql\-pet/issues/new/](https://github.com/smithjd/sql-pet/issues/new/).
### 2\.3\.2 Diving in
If you want to experiment with the code in the book, run it in RStudio and interact with it, you’ll need to do two more things:
1. Install the `sqlpetr` R package (Borasky et al. [2018](#ref-Borasky2018a)). See <https://smithjd.github.io/sqlpetr> for the package documentation. Installation may take some time if it has to install or update packages not available on your computer.
2. Clone the Git repository [https://github.com/smithjd/sql\-pet.git](https://github.com/smithjd/sql-pet.git) and open the project file `sql-pet.Rproj` in RStudio.
Enjoy!
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-learning-goals.html |
Chapter 3 Chapter Learning Goals and Use Cases
==============================================
> This chapter sets the context for the book by:
>
>
> * Describing our assumptions about the reader of this book: the challenges you face, your R skills, your learning goals, and context.
> * Describing what the book offers in terms of:
> + Problems that are addressed
> + Learning objectives
> + Sequence of topics, ranging from connecting to the database to exploring an issue in response to questions from an executive
> + R packages used
> * Describing the sample database used in the book
3\.1 The Book’s Challenge: goals, context and expectations
----------------------------------------------------------
* Working with the data that’s behind the enterprise firewall is challenging in a unique way. Most of us R users are accustomed to a vast learning community that shares resources, discusses methods in public, and can help its members troubleshoot a problem. The very necessary enterprise firewall makes all of that difficult, if not impossible. And yet the enterprise database environment is very important because in so many cases that’s where the data (and possibly your paycheck) come from.
* Differences between production and data warehouse environments. We are simulating a production environment. There are many similarities, but the data models are different and performance is a bigger concern in an OLTP environment.
* Data live in an organizational environment around the database. Learning to keep your DBAs happy:
+ You are your own DBA in this simulation, so you can wreak havoc and learn from it, but you can learn to be DBA\-friendly here.
+ In the end it’s the subject\-matter experts (people using the data every day) that really understand your data, but you have to work with your DBAs first.
+ You can’t believe all the data you pull out of the database.
### 3\.1\.1 The Challenge: Investigating a question using an organization’s database
Using an enterprise database to create meaningful management insights requires a combination of very different skills:
* Need both familiarity with the data and a focus question
+ An iterative process where
- the data resource can shape your understanding of the question
- the question you need to answer will frame how you see the data resource
+ You need to go back and forth between the two, asking
- do I understand the question?
- do I understand the data?
* A “good enough” understanding of the data resource (in the DBMS)
+ Nobody knows everything about an entire organization’s data resources. We do, however, need to know what more we need to know and estimate what we don’t know yet.
+ Use all available documentation and understand its limits
+ Use your own tools and skills to examine the data resource
+ What is *missing* from the database: (columns, records, cells)
+ Why is the data missing?
* A “good enough” understanding of the question you seek to answer
+ How general or specific is your question?
+ How aligned is it with the purpose for which the database was designed and is being operated?
+ How different are your assumptions and concerns from those of the people who enter and use the data on a day to day basis?
* Some cycles in this iteration between question refinement and reformulation on the one hand and data retrieval and investigation on the other feel like a waste of time. That’s inevitable.
* Bringing R tools and skills to bear on these
+ R is a powerful tool for data access, manipulation, modeling and presentation
+ Different R packages and techniques are available for each of the elements involved in exploring, analyzing and reporting on enterprise behavior using the enterprise database.
### 3\.1\.2 Strategies
* Local, idiosyncratic optimization (entry and use of data). For example, different individuals might code a variable differently.
* Drifting use / bastardization of a column
* Turf wars and acquisitions
* Partial recollection / history: find the people who know where the skeletons are
### 3\.1\.3 Problems that we address in the book
* This book emphasizes database exploration and the R techniques that are needed.
* We emphasize a tidyverse approach, together with graphics, to really make sense of what we find.
* We can’t call on real people in the adventureworks company, obviously, but we invent some characters to illustrate the investigation process as we have experienced it in various organizational settings.
### 3\.1\.4 Signposts
> **Practice Tips**
>
>
> Here’s how we do it:
>
> * Conventions like always using the `labs()` function in ggplot
> * Specifying the package the first time a function is used
### 3\.1\.5 Book structure
The book explores R techniques and investigation strategies using progressively more complex queries that lead to this scenario: there is a new Executive VP of Sales at Adventure Works. She wants an overview of sales and the sales organization’s performance at *Adventure Works*. Once her questions are satisfied, a monthly report is developed that can run automatically and appear in her mailbox.
* Early chapters demonstrate how to connect to a database and find your way around it, with a pause to discuss how to secure your credentials.
* Both RStudio and R script methods are shown for the same database overview.
* The `salesorderheader` table in the `sales` schema is used to demonstrate packages and functions that show what a single table contains.
* Then the same table is used but the investigation adopts a business perspective, demonstrating R techniques that are motivated by questions like “How are sales going for the *Adventure Works* company?”
* Starting with base tables, then using views (which contain knowledge about the application)
* More involved queries join three tables in three different schemas: `salesperson`, `employee`, and `person` (see the sketch after this list). The relevant question might be “Who is my top salesperson? Are the 3 top salespersons older or younger?”
* Finally, we build a series of queries that explore the sales workflow: sales territories, sales people, top customers by product, product mixture that gives top 80% of sales. What are they producing in detail? Seasonal? Type of product, region, etc.?
* The book ends by demonstrating how R code can be used for standard reports from the database that are emailed to a list of recipients.
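As referenced above, here is a hedged sketch of the three-schema join using **dplyr**/**dbplyr**. It assumes a live connection `con` to the `adventureworks` database (created as in a later chapter) and the standard AdventureWorks column names (`businessentityid`, `salesytd`, `birthdate`, `firstname`, `lastname`), which you should verify against your copy of the database.
```
library(DBI)
library(dplyr)
library(dbplyr)

salesperson <- tbl(con, in_schema("sales", "salesperson"))
employee    <- tbl(con, in_schema("humanresources", "employee"))
person      <- tbl(con, in_schema("person", "person"))

top_salespeople <- salesperson %>%
  inner_join(employee, by = "businessentityid") %>%
  inner_join(person,   by = "businessentityid") %>%
  select(firstname, lastname, birthdate, salesytd) %>%
  arrange(desc(salesytd)) %>%
  head(3) %>%        # translated to LIMIT 3 on the database side
  collect()
```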
3\.2 Making your way through the book
-------------------------------------
After working through the code in this book, you can expect to be able to:
* R, SQL and PostgreSQL
+ Run queries against PostgreSQL in an environment that simulates what is found in an enterprise setting.
+ Understand techniques and some of the trade\-offs between:
- queries aimed at exploration or informal investigation using [dplyr](https://cran.r-project.org/package=dplyr) (Wickham [2018](#ref-Wickham2018)); and
- queries that should be written in SQL, because performance is important due to the size of the database or the frequency with which a query is to be run.
+ Understand the equivalence between `dplyr` and SQL queries, and how R translates one into the other (a short `show_query()` sketch follows this list).
+ Gain familiarity with techniques that help you explore a database and verify its documentation.
+ Gain familiarity with the standard metadata that a SQL database contains to describe its own contents.
+ Understand some advanced SQL techniques.
+ Gain some understanding of techniques for assessing query structure and performance.
* Docker related
+ Set up a PostgreSQL database in a Docker environment.
+ Gain familiarity with the various ways of interacting with the Docker and PostgreSQL environments
+ Understand enough about Docker to swap databases, e.g. [Sports DB](http://www.sportsdb.org/sd/samples) for the [DVD rental database](http://www.postgresqltutorial.com/postgresql-sample-database/) used in this tutorial. Or swap the database management system (DBMS), e.g. [MySQL](https://www.mysql.com/) for [PostgreSQL](https://www.postgresql.org/).
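As promised above, `dplyr::show_query()` prints the SQL that **dbplyr** generates from a pipeline. A minimal sketch against a simulated PostgreSQL backend follows (no database needed; dbplyr 2.0 or later is assumed, and the table and column names are made up):
```
library(dplyr)
library(dbplyr)

# a lazy table backed by a simulated PostgreSQL connection
orders <- tbl_lazy(
  data.frame(salespersonid = 1L, subtotal = 100),
  con = simulate_postgres()
)

orders %>%
  group_by(salespersonid) %>%
  summarize(total_sales = sum(subtotal, na.rm = TRUE)) %>%
  show_query()   # prints the translated SELECT ... GROUP BY statement
```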
### 3\.2\.1 R Packages
These R packages are discussed or used in exercises:
* [DBI](https://cran.r-project.org/package=DBI)
* [dbplyr](https://cran.r-project.org/package=dbplyr)
* [devtools](https://cran.r-project.org/package=devtools)
* [downloader](https://cran.r-project.org/package=downloader)
* [glue](https://cran.r-project.org/package=glue)
* [gt](https://cran.r-project.org/package=gt)
* [here](https://cran.r-project.org/package=here)
* [knitr](https://cran.r-project.org/package=knitr)
* [RPostgres](https://cran.r-project.org/package=RPostgres)
* [skimr](https://cran.r-project.org/package=skimr)
* [sqlpetr](https://github.com/smithjd/sqlpetr) (installs with: `remotes::install_github("smithjd/sqlpetr", force = TRUE, quiet = TRUE, build = TRUE, build_opts = "")`)
* [tidyverse](https://cran.r-project.org/package=tidyverse)
In addition, these are used to render the book:
* [bookdown](https://cran.r-project.org/package=bookdown)
* [DiagrammeR](https://cran.r-project.org/package=DiagrammeR)
3\.3 Adventure Works
--------------------
In this book we have adopted the Microsoft Adventure Works online transaction processing database for our examples. It is documented at [https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms124438(v=sql.100)](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms124438(v=sql.100)).
See Sections 3 and 4 of “Teaching Tip: Active Learning via a Sample Database: The Case of Microsoft’s Adventure Works” by Michel Mitri, Journal of Information Systems Education, Vol. 26(3), Summer 2015: [http://jise.org/Volume26/n3/JISEv26n3p177.pdf](http://jise.org/Volume26/n3/JISEv26n3p177.pdf).
See the [AdventureWorks Data Dictionary](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms124438%28v%3dsql.100%29) and a sample table ([employee](https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms124432(v=sql.100))).
Here is a [link to an ERD diagram](https://i.stack.imgur.com/LMu4W.gif).
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-setup-adventureworks-db.html |
Chapter 4 Create and connect to the adventureworks database in PostgreSQL
=========================================================================
> This chapter demonstrates how to:
>
>
> * Create and connect to the PostgreSQL `adventureworks` database in Docker
> * Keep necessary credentials secret while being available to R when it executes.
> * Leverage Rstudio features to get an overview of the database
> * Set up the environment for subsequent chapters
4\.1 Overview
-------------
Docker commands can be run from a terminal (e.g., the RStudio Terminal pane) or with a `system2()` command. The functions needed to start and stop Docker containers and do other busy work are provided in the `sqlpetr` package.
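For example, a minimal `system2()` call that lists the running containers from R (assuming the `docker` executable is on your PATH):
```
system2("docker", args = "ps", stdout = TRUE)
```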
> Note: The functions in the package are designed to help you focus on interacting with a dbms from R. You can ignore how they work until you are ready to delve into the details. They are all named to begin with `sp_`. The first time a function is called in the book, we provide a note explaining its use.
Please install the `sqlpetr` package if not already installed:
```
# remotes (not devtools) is what actually performs the GitHub install below
if (!requireNamespace("remotes", quietly = TRUE)) install.packages("remotes")
if (!require(sqlpetr)) {
  remotes::install_github(
    "smithjd/sqlpetr",
    force = TRUE, build = FALSE, quiet = TRUE)
}
```
Note that when you install this package the first time, it will ask you to update the packages it uses and that may take some time.
These packages are called in this Chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(glue)
require(knitr)
library(dbplyr)
library(sqlpetr)
library(bookdown)
library(here)
library(connections)
sleep_default <- 3
theme_set(theme_light())
```
4\.2 Verify that Docker is up, running, and clean up if necessary
-----------------------------------------------------------------
> The `sp_check_that_docker_is_up` function from the `sqlpetr` package checks whether Docker is up and running. If it’s not, then you need to install, launch or re\-install Docker.
```
sp_check_that_docker_is_up()
```
```
## [1] "Docker is up, running these containers:"
## [2] "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES"
## [3] "611b69c981a7 postgres:11 \"docker-entrypoint.s…\" 3 days ago Up About a minute 0.0.0.0:5432->5432/tcp adventureworks"
```
4\.3 Clean up if appropriate
----------------------------
Force-remove the `adventureworks` container if it was left over (e.g., from a prior run):
```
sp_docker_remove_container("adventureworks")
```
```
## [1] 0
```
4\.4 Build the adventureworks Docker image
------------------------------------------
Now we set up a “realistic” database named `adventureworks` in Docker.
> NOTE: This chapter doesn’t go into the details of *creating* or *restoring* the `adventureworks` database. For more detail on what’s going on behind the scenes, you can examine the step\-by\-step code in:
>
>
> `source('book-src/restore-adventureworks-postgres-on-docker.R')`
To save space here in the book, we’ve created a function
in `sqlpetr` to build this image, called *OUT OF DATE!!* . Vignette [Building the `adventureworks` Docker Image](https://smithjd.github.io/sqlpetr/articles/building-the-dvdrental-docker-image.html) describes the build process.
*Ignore the errors in the following step:*
```
source(here("book-src", "restore-adventureworks-postgres-on-docker.R"))
```
```
## docker run --detach --name adventureworks --publish 5432:5432 --mount type=bind,source="/Users/jds/Documents/Library/R/r-system/sql-pet",target=/petdir postgres:11
```
```
Sys.sleep(sleep_default * 2)
```
4\.5 Run the adventureworks Docker Image
----------------------------------------
Now we can run the image in a container and connect to the database. To run the image we use the `sqlpetr` function [`sp_pg_docker_run`](https://smithjd.github.io/sqlpetr/reference/sp_pg_docker_run.html) (*OUT OF DATE*).
For the rest of the book we will assume that you have a Docker container called
`adventureworks` that can be stopped and started. In that sense each chapter in the book is independent.
```
sp_docker_start("adventureworks")
```
4\.6 Connect to PostgreSQL
--------------------------
*CHECK for `sqlpetr` update!* The `sp_make_simple_pg` function we called above created a container from the `postgres:11` library image downloaded from Docker Hub. As part of the process, it set the password for the PostgreSQL database superuser `postgres` to the value “postgres”.
For simplicity, we are using a weak password at this point and it’s shown here
and in the code in plain text. That is bad practice because user credentials
should not be shared in open code like that. A [subsequent chapter](#dbms-login)
demonstrates how to store and use credentials to access the DBMS so that they
are kept private.
> The `sp_get_postgres_connection` function from the `sqlpetr` package gets a DBI connection string to a PostgreSQL database, waiting if it is not ready. This function connects to an instance of PostgreSQL and we assign it to a symbol, `con`, for subsequent use. The `connctions_tab = TRUE` parameter opens a connections tab that’s useful for navigating a database.
> Note that in this version the container publishes PostgreSQL on port *5432* of `localhost`, the default. If you already have PostgreSQL running on the host or in another container, it has probably claimed port 5432, so you would need to publish *our* PostgreSQL container on a different port (for example, 5439) and use that port in the connection code below.
Use the DBI package to connect to the `adventureworks` database in PostgreSQL. Remember the earlier discussion about keeping passwords hidden (see *Pause for some security considerations*).
```
Sys.sleep(sleep_default)
# con <- connection_open( # use in an interactive session
con <- dbConnect( # use in other settings
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
port = 5432, # this version still using 5432!!!
user = "postgres",
password = "postgres",
dbname = "adventureworks"
)
```
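As a quick sanity check (a sketch, not part of the book’s standard workflow), you can confirm which database and user the new connection is using:
```
# A sketch: confirm the database and user behind the connection created above
DBI::dbGetQuery(con, "SELECT current_database(), current_user;")
```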
4\.7 Adventureworks Schemas
---------------------------
Think of the Adventureworks database as a model of the Adventureworks business. The business is organized around different departments (humanresources, sales, and purchasing), business processes (production), and resources (person). Each schema is a container for all the database objects needed to model those departments, business processes, and resources. For a data analyst, the Connections tab shows three of the five kinds of database objects of interest: schemas, tables, and views. The other two kinds, not shown in the Connections tab, are the table primary and foreign keys (PK and FK). Those objects enforce the referential integrity of the data and support the performance of the application; let the DBAs worry about them.
The Connections tab has three icons. The node icon represents a schema, which helps organize the structure and design of the database. The schema contains the views (the grid icon with the glasses) and tables (the grid icon without the glasses) that are of interest to the data analyst. A table is a database object that usually represents something useful to a business process. For example, a sales person may enter a new order. The first screen is typically called the sales order header screen, which contains information about the customer placing the order; this information is captured in the *salesorderheader* table. The customer’s ordered items are typically entered via multiple screens and are captured in the *salesorderdetail* table.
A view is a database object that may be a subset of either the columns or the rows of a single table. For example, the customer table has information on all the customers, but the customer view, *c*, shows only a single customer.
Or a view may take data from a primary/driving table and join it to other tables to provide a better understanding of the information in the primary table. For example, the primary table typically has a primary key column, *PK*, and zero or more foreign key columns, *FK*. The *PK* and *FK* are usually integers, which is great for a computer but not so nice for us mere mortals. An extended view pulls in information associated with the *FK*: for example, a sales order view with a customer foreign key can show the actual customer name.
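If you prefer to see these objects from code rather than from the Connections tab, here is a minimal sketch (assuming the `con` connection created above) that lists schemas, tables, and views via the standard `information_schema`:
```
# A sketch: list schemas, tables, and views (excluding PostgreSQL system schemas)
DBI::dbGetQuery(con, "
  SELECT table_schema, table_name, table_type
  FROM information_schema.tables
  WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
  ORDER BY table_schema, table_type, table_name
  LIMIT 20;")
```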
4\.8 Investigate the database using Rstudio
-------------------------------------------
The RStudio Connections tab shows that you are connected to PostgreSQL and that the `adventureworks` database has many schemas, each of which contains multiple tables and views. The drop-down icon to the left of a table lists the table’s columns.
Connections tab \- adventureworks
Clicking on the icon to the left of a `schema` expands the list of `tables` and `views` in that `schema`. Clicking on a `View` or `Table` icon opens RStudio’s `View` pane for a peek at the data:
View of employee table
The number of rows and columns shown in the View pane depends on the size of the window.
4\.9 Cleaning up: disconnect from the database and stop Docker
-------------------------------------------------------------
Always have R disconnect from the database when you’re done.
```
dbDisconnect(con)
# or if using the connections package, use:
# connection_close(con)
```
Stop the `adventureworks` container:
```
sp_docker_stop("adventureworks")
```
Show that the container still exists even though it’s not running:
```
sp_show_all_docker_containers()
```
```
## CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
## 37f05f6c5c62 postgres:11 "docker-entrypoint.s…" 26 seconds ago Exited (0) Less than a second ago adventureworks
```
Next time, you can just use this command to start the container:
> `sp_docker_start("adventureworks")`
And once stopped, the container can be removed with:
> `sp_docker_remove_container("adventureworks")`
4\.10 Using the `adventureworks` container in the rest of the book
------------------------------------------------------------------
After this point in the book, we assume that Docker is up and that we can always start up our *adventureworks database* with:
> `sp_docker_start("adventureworks")`
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-dbms-login-credentials.html |
Chapter 5 Securing and using your dbms log\-in credentials
==========================================================
> This chapter demonstrates how to:
>
>
> * Keep necessary credentials secret or at least invisible
> * Interact with PostgreSQL using your stored dbms credentials
Connecting to a dbms can be very frustrating at first. In many organizations, simply **getting** access credentials takes time and may involve jumping through multiple hoops. In addition, a dbms is terse or deliberately inscrutable when your credentials are incorrect. That’s a security strategy, not a limitation of your understanding or of your software. When R can’t log you on to a dbms, you usually will have no information as to what went wrong.
There are many different strategies for managing credentials. See [Securing Credentials](https://db.rstudio.com/best-practices/managing-credentials/) in RStudio’s *Databases using R* documentation for some alternatives to the method we adopt in this book. We provide more details about [PostgreSQL Authentication](chapter-appendix-postresql-authentication.html#chapter_appendix-postresql-authentication) in our sandbox environment in an appendix.
The following packages are used in this chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
require(knitr)
library(sqlpetr)
library(connections)
sleep_default <- 3
theme_set(theme_light())
```
5\.1 Set up the adventureworks Docker container
-----------------------------------------------
### 5\.1\.1 Verify that Docker is running
Check that Docker is up and running:
```
sp_check_that_docker_is_up()
```
```
## [1] "Docker is up but running no containers"
```
### 5\.1\.2 Start the Docker container:
Start the adventureworks Docker container:
```
sp_docker_start("adventureworks")
```
5\.2 Storing your dbms credentials
----------------------------------
In previous chapters the connection string for connecting to the dbms has used default credentials specified in plain text as follows:
`user= 'postgres', password = 'postgres'`
When we call `sp_get_postgres_connection` below, we’ll use environment variables that R obtains by reading the *.Renviron* file when R starts up. This approach has two benefits: that file is not uploaded to GitHub, and R looks for it in your home directory every time it starts. To see whether you have already created that file, use the RStudio Files tab to look at your **home directory**:
That file should contain lines that **look like** the example below. Although in this example it contains the PostgreSQL **default values** for the username and password, they are obviously not secret. But this approach demonstrates where you should put secrets that R needs while not risking accidental upload to GitHub or some other public location.
Open your `.Renviron` file with this command:
> `file.edit("~/.Renviron")`
Alternatively, you can execute <define_postgresql_params.R> to create the file, or copy and paste the following into your **.Renviron** file:
```
DEFAULT_POSTGRES_PASSWORD=postgres
DEFAULT_POSTGRES_USER_NAME=postgres
```
Once that file is created, restart R; from then on, R reads it every time it starts up.
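Once R has restarted, one quick way (a sketch) to verify that the variables are visible to R, without printing the password itself, is:
```
# A sketch: check that the .Renviron values are available to R
Sys.getenv("DEFAULT_POSTGRES_USER_NAME")
nzchar(Sys.getenv("DEFAULT_POSTGRES_PASSWORD")) # TRUE if the password is set; value not shown
```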
### 5\.2\.1 Connect with Postgres using the Sys.getenv function
Connect to PostgreSQL, retrieving the credentials with `Sys.getenv`:
```
Sys.sleep(sleep_default)
# con <- connection_open( # use in an interactive session
con <- dbConnect( # use in other settings
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
user = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"),
host = "localhost",
port = 5432,
dbname = "adventureworks")
```
Once the connection object has been created, you can list all of the tables in one of the schemas:
```
dbExecute(con, "set search_path to humanresources, public;") # watch for duplicates!
```
```
## [1] 0
```
```
dbListTables(con)
```
```
## [1] "employee" "shift"
## [3] "employeepayhistory" "jobcandidate"
## [5] "department" "vemployee"
## [7] "vemployeedepartment" "vemployeedepartmenthistory"
## [9] "vjobcandidate" "vjobcandidateeducation"
## [11] "vjobcandidateemployment" "employeedepartmenthistory"
```
5\.3 Disconnect from the database and stop Docker
-------------------------------------------------
```
dbDisconnect(con)
# or if using the connections package, use:
# connection_close(con)
sp_docker_stop("adventureworks")
```
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-connect-to-db-with-r-code.html |
Chapter 6 Connecting to the database with R code
================================================
> This chapter demonstrates how to:
>
>
> * Connect to and disconnect R from the `adventureworks` database
> * Use dplyr to get an overview of the database, replicating the facilities provided by RStudio
These packages are called in this Chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(glue)
require(knitr)
library(dbplyr)
library(sqlpetr)
library(bookdown)
library(here)
library(connections)
sleep_default <- 3
```
6\.1 Verify that Docker is up and running, and start the database
-----------------------------------------------------------------
> The `sp_check_that_docker_is_up` function from the `sqlpetr` package checks whether Docker is up and running. If it’s not, then you need to install, launch or re\-install Docker.
```
sp_check_that_docker_is_up()
```
```
## [1] "Docker is up but running no containers"
```
```
sp_docker_start("adventureworks")
```
6\.2 Connect to PostgreSQL
--------------------------
*CHECK for `sqlpetr` update!* The `sp_make_simple_pg` function we called above created a container from the `postgres:11` library image downloaded from Docker Hub. As part of the process, it set the password for the PostgreSQL database superuser `postgres` to the value “postgres”.
For simplicity, we are using a weak password at this point and it’s shown here
and in the code in plain text. That is bad practice because user credentials
should not be shared in open code like that. A [subsequent chapter](#dbms-login)
demonstrates how to store and use credentials to access the DBMS so that they
are kept private.
> The `sp_get_postgres_connection` function from the `sqlpetr` package gets a DBI connection string to a PostgreSQL database, waiting if it is not ready. This function connects to an instance of PostgreSQL and we assign it to a symbol, `con`, for subsequent use. The `connctions_tab = TRUE` parameter opens a connections tab that’s useful for navigating a database.
> Note that in this version the container publishes PostgreSQL on port *5432* of `localhost`, the default. If you already have PostgreSQL running on the host or in another container, it has probably claimed port 5432, so you would need to publish *our* PostgreSQL container on a different port (for example, 5439) and use that port in the connection code below.
Use the DBI package to connect to the `adventureworks` database in PostgreSQL. Remember the earlier discussion about keeping passwords hidden (see *Pause for some security considerations*).
```
# con <- connection_open( # use in an interactive session
Sys.sleep(sleep_default)
con <- dbConnect(
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
port = 5432,
user = "postgres",
password = "postgres",
dbname = "adventureworks")
```
6\.3 Set schema search path and list its contents
-------------------------------------------------
Schemas will be discussed later on; multiple schemas are the norm in an enterprise database environment, but they are a side issue at this point. For now, we change the order in which PostgreSQL searches for objects with the following SQL code:
```
dbExecute(con, "set search_path to sales;")
```
```
## [1] 0
```
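If you want to confirm that the change took effect, one way (a sketch) is to ask PostgreSQL directly for the current setting:
```
# A sketch: show the search_path now in effect for this connection
DBI::dbGetQuery(con, "SHOW search_path;")
```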
With the custom `search_path`, the following command shows the tables in the `sales` schema. In the `adventureworks` database, there are no tables in the `public` schema.
```
dbListTables(con)
```
```
## [1] "countryregioncurrency" "customer"
## [3] "currencyrate" "creditcard"
## [5] "personcreditcard" "specialoffer"
## [7] "specialofferproduct" "salesorderheadersalesreason"
## [9] "shoppingcartitem" "salespersonquotahistory"
## [11] "salesperson" "currency"
## [13] "store" "salesorderheader"
## [15] "salesorderdetail" "salesreason"
## [17] "salesterritoryhistory" "vindividualcustomer"
## [19] "vpersondemographics" "vsalesperson"
## [21] "vsalespersonsalesbyfiscalyears" "vsalespersonsalesbyfiscalyearsdata"
## [23] "vstorewithaddresses" "vstorewithcontacts"
## [25] "vstorewithdemographics" "salestaxrate"
## [27] "salesterritory"
```
Notice that there are several tables whose names start with the letter *v*: they are actually *views*, which will turn out to be important. They are clearly distinguished in the Connections tab, but the naming is only a convention.
The same search path applies to `dbListFields`:
```
dbListFields(con, "salesorderheader")
```
```
## [1] "salesorderid" "revisionnumber" "orderdate"
## [4] "duedate" "shipdate" "status"
## [7] "onlineorderflag" "purchaseordernumber" "accountnumber"
## [10] "customerid" "salespersonid" "territoryid"
## [13] "billtoaddressid" "shiptoaddressid" "shipmethodid"
## [16] "creditcardid" "creditcardapprovalcode" "currencyrateid"
## [19] "subtotal" "taxamt" "freight"
## [22] "totaldue" "comment" "rowguid"
## [25] "modifieddate"
```
Thus, with this search path, the following two commands produce identical results:
```
tbl(con, in_schema("sales", "salesorderheader")) %>%
head()
```
```
## # Source: lazy query [?? x 25]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## salesorderid revisionnumber orderdate duedate
## <int> <int> <dttm> <dttm>
## 1 43659 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 2 43660 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 3 43661 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 4 43662 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 5 43663 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 6 43664 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## # … with 21 more variables: shipdate <dttm>, status <int>,
## # onlineorderflag <lgl>, purchaseordernumber <chr>, accountnumber <chr>,
## # customerid <int>, salespersonid <int>, territoryid <int>,
## # billtoaddressid <int>, shiptoaddressid <int>, shipmethodid <int>,
## # creditcardid <int>, creditcardapprovalcode <chr>, currencyrateid <int>,
## # subtotal <dbl>, taxamt <dbl>, freight <dbl>, totaldue <dbl>, comment <chr>,
## # rowguid <chr>, modifieddate <dttm>
```
```
tbl(con, "salesorderheader") %>%
head()
```
```
## # Source: lazy query [?? x 25]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## salesorderid revisionnumber orderdate duedate
## <int> <int> <dttm> <dttm>
## 1 43659 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 2 43660 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 3 43661 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 4 43662 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 5 43663 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 6 43664 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## # … with 21 more variables: shipdate <dttm>, status <int>,
## # onlineorderflag <lgl>, purchaseordernumber <chr>, accountnumber <chr>,
## # customerid <int>, salespersonid <int>, territoryid <int>,
## # billtoaddressid <int>, shiptoaddressid <int>, shipmethodid <int>,
## # creditcardid <int>, creditcardapprovalcode <chr>, currencyrateid <int>,
## # subtotal <dbl>, taxamt <dbl>, freight <dbl>, totaldue <dbl>, comment <chr>,
## # rowguid <chr>, modifieddate <dttm>
```
6\.4 Anatomy of a `dplyr` connection object
-------------------------------------------
As introduced in the previous chapter, the `dplyr::tbl` function creates an object that might **look** like a data frame in that when you enter it on the command line, it prints a bunch of rows from the dbms table. But it is actually a **list** object that `dplyr` uses for constructing queries and retrieving data from the DBMS.
The following code illustrates these issues. The `dplyr::tbl` function creates the connection object that we store in an object named `salesorderheader_table`:
```
salesorderheader_table <- dplyr::tbl(con, in_schema("sales", "salesorderheader")) %>%
select(-rowguid) %>%
rename(salesorderheader_details_updated = modifieddate)
```
At first glance, it *acts* like a data frame when you print it, although it only prints 10 of the table’s 31,465 rows:
```
salesorderheader_table
```
```
## # Source: lazy query [?? x 24]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## salesorderid revisionnumber orderdate duedate
## <int> <int> <dttm> <dttm>
## 1 43659 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 2 43660 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 3 43661 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 4 43662 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 5 43663 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 6 43664 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 7 43665 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 8 43666 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 9 43667 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## 10 43668 8 2011-05-31 00:00:00 2011-06-12 00:00:00
## # … with more rows, and 20 more variables: shipdate <dttm>, status <int>,
## # onlineorderflag <lgl>, purchaseordernumber <chr>, accountnumber <chr>,
## # customerid <int>, salespersonid <int>, territoryid <int>,
## # billtoaddressid <int>, shiptoaddressid <int>, shipmethodid <int>,
## # creditcardid <int>, creditcardapprovalcode <chr>, currencyrateid <int>,
## # subtotal <dbl>, taxamt <dbl>, freight <dbl>, totaldue <dbl>, comment <chr>,
## # salesorderheader_details_updated <dttm>
```
However, notice that the first output line shows `??`, rather than providing the number of rows in the table. Similarly, the next to last line shows:
```
… with more rows, and 20 more variables:
```
whereas the output for a normal `tbl` of this salesorderheader data would say:
```
… with 31,455 more rows, and 20 more variables:
```
So even though `salesorderheader_table` is a `tbl`, it’s **also** a `tbl_PqConnection`:
```
class(salesorderheader_table)
```
```
## [1] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy"
## [5] "tbl"
```
It is not just a normal `tbl` of data. We can see that from the structure of `salesorderheader_table`:
```
str(salesorderheader_table, max.level = 3)
```
```
## List of 2
## $ src:List of 2
## ..$ con :Formal class 'PqConnection' [package "RPostgres"] with 3 slots
## ..$ disco: NULL
## ..- attr(*, "class")= chr [1:4] "src_PqConnection" "src_dbi" "src_sql" "src"
## $ ops:List of 4
## ..$ name: chr "select"
## ..$ x :List of 2
## .. ..$ x : 'ident_q' chr "sales.salesorderheader"
## .. ..$ vars: chr [1:25] "salesorderid" "revisionnumber" "orderdate" "duedate" ...
## .. ..- attr(*, "class")= chr [1:3] "op_base_remote" "op_base" "op"
## ..$ dots: list()
## ..$ args:List of 1
## .. ..$ vars:List of 24
## ..- attr(*, "class")= chr [1:3] "op_select" "op_single" "op"
## - attr(*, "class")= chr [1:5] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy" ...
```
It has only *two* elements! The first element, `src`, contains all the information in the `con` object, which includes information about all the tables and objects in the database. Here is a sample:
```
salesorderheader_table$src$con@typnames$typname[387:418]
```
```
## [1] "AccountNumber" "_AccountNumber"
## [3] "Flag" "_Flag"
## [5] "Name" "_Name"
## [7] "NameStyle" "_NameStyle"
## [9] "OrderNumber" "_OrderNumber"
## [11] "Phone" "_Phone"
## [13] "department" "_department"
## [15] "pg_toast_16439" "d"
## [17] "_d" "employee"
## [19] "_employee" "pg_toast_16450"
## [21] "e" "_e"
## [23] "employeedepartmenthistory" "_employeedepartmenthistory"
## [25] "edh" "_edh"
## [27] "employeepayhistory" "_employeepayhistory"
## [29] "pg_toast_16482" "eph"
## [31] "_eph" "jobcandidate"
```
The second element, `ops`, contains a list of the columns in the `salesorderheader` table, among other things:
```
salesorderheader_table$ops$x$vars
```
```
## [1] "salesorderid" "revisionnumber" "orderdate"
## [4] "duedate" "shipdate" "status"
## [7] "onlineorderflag" "purchaseordernumber" "accountnumber"
## [10] "customerid" "salespersonid" "territoryid"
## [13] "billtoaddressid" "shiptoaddressid" "shipmethodid"
## [16] "creditcardid" "creditcardapprovalcode" "currencyrateid"
## [19] "subtotal" "taxamt" "freight"
## [22] "totaldue" "comment" "rowguid"
## [25] "modifieddate"
```
`salesorderheader_table` holds information needed to get the data from the ‘salesorderheader’ table, but `salesorderheader_table` does not hold the data itself. In the following sections, we will examine more closely this relationship between the `salesorderheader_table` object and the data in the database’s ‘salesorderheader’ table.
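One way to make this concrete (a sketch using standard `dplyr`/`dbplyr` functions) is to look at the SQL the object would generate, let the DBMS count the rows, and then explicitly pull a few rows into R with `collect()`:
```
# A sketch: inspect the SQL dplyr will send for this lazy table
salesorderheader_table %>% dplyr::show_query()

# Let the DBMS count the rows instead of downloading them all
salesorderheader_table %>% dplyr::count()

# collect() actually runs the query and returns an ordinary tibble in R
local_copy <- salesorderheader_table %>% head(6) %>% dplyr::collect()
nrow(local_copy)
```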
6\.5 Disconnect from the database and stop Docker
-------------------------------------------------
```
dbDisconnect(con)
# or if using the connections package, use:
# connection_close(con)
sp_docker_stop("adventureworks")
```
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-dbms-queries-intro.html |
Chapter 7 Introduction to DBMS queries
======================================
> This chapter demonstrates how to:
>
>
> * Download all or part of a table from the DBMS, including different kinds of subsets
> * See how `dplyr` code is translated into `SQL` commands and how they can be mixed
> * Get acquainted with some useful functions and packages for investigating a single table
> * Begin thinking about how to divide the work between your local R session and the DBMS
7\.1 Setup
----------
The following packages are used in this chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(dbplyr)
require(knitr)
library(bookdown)
library(sqlpetr)
library(skimr)
library(connections)
sleep_default <- 3
```
Assume that the Docker container with PostgreSQL and the `adventureworks` database is ready to go. If not, go back to [Chapter 4](chapter-setup-adventureworks-db.html#chapter_setup-adventureworks-db), where the database is set up.
```
sqlpetr::sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
```
Connect to the database:
```
# con <- connection_open( # use in an interactive session
con <- dbConnect( # use in other settings
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
user = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"),
dbname = "adventureworks",
port = 5432
)
```
7\.2 Methods for downloading a single table
-------------------------------------------
For the moment, assume you know something about the database and specifically what table you need to retrieve. We return to the topic of investigating the whole database later on.
```
dbExecute(con, "set search_path to sales, humanresources;")
```
```
## [1] 0
```
### 7\.2\.1 Read the entire table
There are a few different methods of getting data from a DBMS, and we’ll explore the different ways of controlling each one of them.
`DBI::dbReadTable` will download an entire table into an R data frame (which you can convert to a [tibble](https://tibble.tidyverse.org/) if you prefer).
```
salesorderheader_tibble <- DBI::dbReadTable(con, "salesorderheader")
str(salesorderheader_tibble)
```
```
## 'data.frame': 31465 obs. of 25 variables:
## $ salesorderid : int 43659 43660 43661 43662 43663 43664 43665 43666 43667 43668 ...
## $ revisionnumber : int 8 8 8 8 8 8 8 8 8 8 ...
## $ orderdate : POSIXct, format: "2011-05-31" "2011-05-31" ...
## $ duedate : POSIXct, format: "2011-06-12" "2011-06-12" ...
## $ shipdate : POSIXct, format: "2011-06-07" "2011-06-07" ...
## $ status : int 5 5 5 5 5 5 5 5 5 5 ...
## $ onlineorderflag : logi FALSE FALSE FALSE FALSE FALSE FALSE ...
## $ purchaseordernumber : chr "PO522145787" "PO18850127500" "PO18473189620" "PO18444174044" ...
## $ accountnumber : chr "10-4020-000676" "10-4020-000117" "10-4020-000442" "10-4020-000227" ...
## $ customerid : int 29825 29672 29734 29994 29565 29898 29580 30052 29974 29614 ...
## $ salespersonid : int 279 279 282 282 276 280 283 276 277 282 ...
## $ territoryid : int 5 5 6 6 4 1 1 4 3 6 ...
## $ billtoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
## $ shiptoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
## $ shipmethodid : int 5 5 5 5 5 5 5 5 5 5 ...
## $ creditcardid : int 16281 5618 1346 10456 4322 806 15232 13349 10370 1566 ...
## $ creditcardapprovalcode: chr "105041Vi84182" "115213Vi29411" "85274Vi6854" "125295Vi53935" ...
## $ currencyrateid : int NA NA 4 4 NA NA NA NA NA 4 ...
## $ subtotal : num 20566 1294 32726 28833 419 ...
## $ taxamt : num 1971.5 124.2 3153.8 2775.2 40.3 ...
## $ freight : num 616.1 38.8 985.6 867.2 12.6 ...
## $ totaldue : num 23153 1457 36866 32475 472 ...
## $ comment : chr NA NA NA NA ...
## $ rowguid : chr "79b65321-39ca-4115-9cba-8fe0903e12e6" "738dc42d-d03b-48a1-9822-f95a67ea7389" "d91b9131-18a4-4a11-bc3a-90b6f53e9d74" "4a1ecfc0-cc3a-4740-b028-1c50bb48711c" ...
## $ modifieddate : POSIXct, format: "2011-06-07" "2011-06-07" ...
```
That’s very simple, but if the table is very large it may be a problem, since R keeps the entire table in memory. The tables found in an enterprise database such as `adventureworks` may be large, but they are most often records kept by people. That somewhat limits their size (relative to data generated by machines) and expands the possibilities for human error.
Note that the first line of the `str()` output reports the total number of observations.
Later on we’ll use this data frame to demonstrate several packages and functions, but we’ll keep only the first 13 columns for simplicity.
```
salesorderheader_tibble <- salesorderheader_tibble[,1:13]
```
### 7\.2\.2 Create a pointer to a table that can be reused
The `dplyr::tbl` function gives us more control over access to a table by enabling control over which columns and rows to download. It creates an object that might **look** like a data frame, but it’s actually a list object that `dplyr` uses for constructing queries and retrieving data from the DBMS.
```
salesorderheader_table <- dplyr::tbl(con, "salesorderheader")
class(salesorderheader_table)
```
```
## [1] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy"
## [5] "tbl"
```
### 7\.2\.3 Controlling the number of rows returned with `collect()`
The `collect` function triggers the creation of a tibble and controls the number of rows that the DBMS sends to R. For more complex queries, the `dplyr::collect()` function provides a mechanism to indicate what’s processed on the DBMS server and what’s processed by R on the local machine. The chapter on [Lazy Evaluation and Execution Environment](chapter-lazy-evaluation-and-timing.html#chapter_lazy-evaluation-and-timing) discusses this issue in detail.
```
salesorderheader_table %>% dplyr::collect(n = 3) %>% dim()
```
```
## [1] 3 25
```
```
salesorderheader_table %>% dplyr::collect(n = 500) %>% dim()
```
```
## [1] 500 25
```
### 7\.2\.4 Retrieving random rows from the DBMS
When the DBMS contains many rows, a sample of the data may be plenty for your purposes. Although `dplyr` has nice functions for sampling a data frame that’s already in R (e.g., the `sample_n` and `sample_frac` functions), to get a sample from the DBMS we have to use `dbGetQuery` to send native SQL to the database. To peek ahead, here is one example of a query that retrieves up to 20 rows from a Bernoulli sample (the query below samples roughly 3% of the rows):
```
one_percent_sample <- DBI::dbGetQuery(
con,
"SELECT orderdate, subtotal, taxamt, freight, totaldue
FROM salesorderheader TABLESAMPLE BERNOULLI(3) LIMIT 20;
"
)
one_percent_sample
```
```
## orderdate subtotal taxamt freight totaldue
## 1 2011-06-22 699.0982 55.9279 17.4775 772.5036
## 2 2011-06-25 3578.2700 286.2616 89.4568 3953.9884
## 3 2011-06-29 3374.9900 269.9992 84.3748 3729.3640
## 4 2011-06-30 3578.2700 286.2616 89.4568 3953.9884
## 5 2011-07-01 32492.6040 3118.7048 974.5952 36585.9040
## 6 2011-07-03 3578.2700 286.2616 89.4568 3953.9884
## 7 2011-07-22 3578.2700 286.2616 89.4568 3953.9884
## 8 2011-08-01 2039.9940 195.8394 61.1998 2297.0332
## 9 2011-08-01 1362.3067 130.1463 40.6707 1533.1237
## 10 2011-08-07 3578.2700 286.2616 89.4568 3953.9884
## 11 2011-08-07 3578.2700 286.2616 89.4568 3953.9884
## 12 2011-08-14 3578.2700 286.2616 89.4568 3953.9884
## 13 2011-09-06 3578.2700 286.2616 89.4568 3953.9884
## 14 2011-09-08 3374.9900 269.9992 84.3748 3729.3640
## 15 2011-09-08 699.0982 55.9279 17.4775 772.5036
## 16 2011-09-10 3578.2700 286.2616 89.4568 3953.9884
## 17 2011-09-11 3578.2700 286.2616 89.4568 3953.9884
## 18 2011-09-12 3578.2700 286.2616 89.4568 3953.9884
## 19 2011-09-19 3578.2700 286.2616 89.4568 3953.9884
## 20 2011-10-01 35651.0339 3424.4400 1070.1375 40145.6114
```
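`TABLESAMPLE` returns a random *fraction* of the table, so the number of rows it yields varies from run to run. If you need exactly *n* random rows and a full scan of the table is acceptable, one PostgreSQL-specific alternative (our own sketch, not part of the original workflow) is `ORDER BY random()`:
```
exactly_20 <- DBI::dbGetQuery(
  con,
  "SELECT orderdate, subtotal, taxamt, freight, totaldue
   FROM salesorderheader
   ORDER BY random()   -- PostgreSQL-specific; scans the whole table
   LIMIT 20;"
)
dim(exactly_20)
```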
**Exact sample of records by ID**
This technique depends on knowing the range of a record index, such as the `salesorderid` in the `salesorderheader` table of our `adventureworks` database.
Start by finding the min and max values.
```
DBI::dbListFields(con, "salesorderheader")
```
```
## [1] "salesorderid" "revisionnumber" "orderdate"
## [4] "duedate" "shipdate" "status"
## [7] "onlineorderflag" "purchaseordernumber" "accountnumber"
## [10] "customerid" "salespersonid" "territoryid"
## [13] "billtoaddressid" "shiptoaddressid" "shipmethodid"
## [16] "creditcardid" "creditcardapprovalcode" "currencyrateid"
## [19] "subtotal" "taxamt" "freight"
## [22] "totaldue" "comment" "rowguid"
## [25] "modifieddate"
```
```
salesorderheader_df <- DBI::dbReadTable(con, "salesorderheader")
(max_id <- max(salesorderheader_df$salesorderid))
```
```
## [1] 75123
```
```
(min_id <- min(salesorderheader_df$salesorderid))
```
```
## [1] 43659
```
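The `dbReadTable` call above downloads the entire table just to find two numbers. As a sketch, you could instead ask the DBMS for the endpoints directly, either with SQL or with a lazy `dplyr` summary:
```
DBI::dbGetQuery(
  con,
  'SELECT MIN("salesorderid") AS min_id, MAX("salesorderid") AS max_id
   FROM "salesorderheader"'
)
# or, equivalently, let dplyr generate the same aggregate query
salesorderheader_table %>%
  dplyr::summarize(min_id = min(salesorderid, na.rm = TRUE),
                   max_id = max(salesorderid, na.rm = TRUE)) %>%
  dplyr::collect()
```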
Set the random number seed and draw the sample.
```
set.seed(123)
sample_rows <- sample(1:max(salesorderheader_df$salesorderid), 10)
salesorderheader_table <- dplyr::tbl(con, "salesorderheader")
```
Run the query with the `filter` verb, listing the randomly sampled IDs to be retrieved. Because the candidate IDs are drawn from 1 up to the maximum `salesorderid`, some of them fall below the minimum ID and match no rows, so fewer than 10 rows may come back:
```
salesorderheader_sample <- salesorderheader_table %>%
dplyr::filter(salesorderid %in% sample_rows) %>%
dplyr::collect()
str(salesorderheader_sample)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 7 obs. of 25 variables:
## $ salesorderid : int 45404 46435 51663 57870 62555 65161 68293
## $ revisionnumber : int 8 8 8 8 8 8 8
## $ orderdate : POSIXct, format: "2012-01-10" "2012-05-06" ...
## $ duedate : POSIXct, format: "2012-01-22" "2012-05-18" ...
## $ shipdate : POSIXct, format: "2012-01-17" "2012-05-13" ...
## $ status : int 5 5 5 5 5 5 5
## $ onlineorderflag : logi TRUE TRUE TRUE TRUE TRUE FALSE ...
## $ purchaseordernumber : chr NA NA NA NA ...
## $ accountnumber : chr "10-4030-011217" "10-4030-012251" "10-4030-016327" "10-4030-018572" ...
## $ customerid : int 11217 12251 16327 18572 13483 29799 13239
## $ salespersonid : int NA NA NA NA NA 281 NA
## $ territoryid : int 1 9 8 4 1 4 6
## $ billtoaddressid : int 19321 24859 19265 16902 15267 997 27923
## $ shiptoaddressid : int 19321 24859 19265 16902 15267 997 27923
## $ shipmethodid : int 1 1 1 1 1 5 1
## $ creditcardid : int 8241 13188 16357 1884 4409 12582 1529
## $ creditcardapprovalcode: chr "332581Vi42712" "635144Vi68383" "420152Vi84562" "1224478Vi9772" ...
## $ currencyrateid : int NA 4121 NA NA NA NA 11581
## $ subtotal : num 3578 3375 2466 14 57 ...
## $ taxamt : num 286.26 270 197.31 1.12 4.56 ...
## $ freight : num 89.457 84.375 61.658 0.349 1.424 ...
## $ totaldue : num 3954 3729.4 2725.3 15.4 63 ...
## $ comment : chr NA NA NA NA ...
## $ rowguid : chr "358f91b2-dadd-4014-8d4f-7f9736cb664e" "eb312409-fcd5-4bac-bd3b-16d4bd7889db" "ddc60552-af98-4166-9249-d09d424d8430" "fe46e631-47b9-4e14-9da5-1e4a4a135364" ...
## $ modifieddate : POSIXct, format: "2012-01-17" "2012-05-13" ...
```
### 7\.2\.5 Sub\-setting variables
A table in the DBMS may not only have many more rows than you want, but also many more columns. The `select` command controls which columns are retrieved.
```
salesorderheader_table %>% dplyr::select(orderdate, subtotal, taxamt, freight, totaldue) %>%
head()
```
```
## # Source: lazy query [?? x 5]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## orderdate subtotal taxamt freight totaldue
## <dttm> <dbl> <dbl> <dbl> <dbl>
## 1 2011-05-31 00:00:00 20566. 1972. 616. 23153.
## 2 2011-05-31 00:00:00 1294. 124. 38.8 1457.
## 3 2011-05-31 00:00:00 32726. 3154. 986. 36866.
## 4 2011-05-31 00:00:00 28833. 2775. 867. 32475.
## 5 2011-05-31 00:00:00 419. 40.3 12.6 472.
## 6 2011-05-31 00:00:00 24433. 2345. 733. 27510.
```
That’s exactly equivalent to submitting the following SQL commands directly:
```
DBI::dbGetQuery(
con,
'SELECT "orderdate", "subtotal", "taxamt", "freight", "totaldue"
FROM "salesorderheader"
LIMIT 6')
```
```
## orderdate subtotal taxamt freight totaldue
## 1 2011-05-31 20565.6206 1971.5149 616.0984 23153.2339
## 2 2011-05-31 1294.2529 124.2483 38.8276 1457.3288
## 3 2011-05-31 32726.4786 3153.7696 985.5530 36865.8012
## 4 2011-05-31 28832.5289 2775.1646 867.2389 32474.9324
## 5 2011-05-31 419.4589 40.2681 12.5838 472.3108
## 6 2011-05-31 24432.6088 2344.9921 732.8100 27510.4109
```
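The tidyselect helpers also work on a lazy table, so you can pick columns by name pattern and the choice still ends up in the generated `SELECT` list; a small sketch:
```
salesorderheader_table %>%
  dplyr::select(salesorderid, dplyr::contains("date")) %>%
  head()
```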
We won’t discuss `dplyr` methods for sub\-setting variables, deriving new ones, or sub\-setting rows based on the values found in the table, because they are covered well in other places, including:
* Comprehensive reference: <https://dplyr.tidyverse.org/>
* Good tutorial: <https://suzan.rbind.io/tags/dplyr/>
In practice we find that **renaming variables** is often quite important, because the names in an SQL database might not meet your needs as an analyst. In “the wild”, you will find names that are ambiguous or overly specified, that have spaces in them, and that have other problems making them difficult to use in R. It is good practice to do whatever renaming you are going to do in a predictable place, such as at the top of your code. The names in the `adventureworks` database are simple and clear, but if they were not, you might rename them for subsequent use in this way:
```
tbl(con, "salesorderheader") %>%
dplyr::rename(order_date = orderdate, sub_total_amount = subtotal,
tax_amount = taxamt, freight_amount = freight, total_due_amount = totaldue) %>%
dplyr::select(order_date, sub_total_amount, tax_amount, freight_amount, total_due_amount ) %>%
show_query()
```
```
## <SQL>
## SELECT "orderdate" AS "order_date", "subtotal" AS "sub_total_amount", "taxamt" AS "tax_amount", "freight" AS "freight_amount", "totaldue" AS "total_due_amount"
## FROM "salesorderheader"
```
That’s equivalent to the following SQL code:
```
DBI::dbGetQuery(
con,
'SELECT "orderdate" AS "order_date",
"subtotal" AS "sub_total_amount",
"taxamt" AS "tax_amount",
"freight" AS "freight_amount",
"totaldue" AS "total_due_amount"
FROM "salesorderheader"' ) %>%
head()
```
```
## order_date sub_total_amount tax_amount freight_amount total_due_amount
## 1 2011-05-31 20565.6206 1971.5149 616.0984 23153.2339
## 2 2011-05-31 1294.2529 124.2483 38.8276 1457.3288
## 3 2011-05-31 32726.4786 3153.7696 985.5530 36865.8012
## 4 2011-05-31 28832.5289 2775.1646 867.2389 32474.9324
## 5 2011-05-31 419.4589 40.2681 12.5838 472.3108
## 6 2011-05-31 24432.6088 2344.9921 732.8100 27510.4109
```
The one difference is that the `SQL` code returns a regular data frame, while the `dplyr` code returns a `tibble`. Notice that the seconds are grayed out in the `tibble` display.
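One way to keep that renaming in a single predictable place is to wrap it in a small helper function that returns the renamed lazy table; a sketch (the function name is our own):
```
get_sales_orders <- function(con) {
  dplyr::tbl(con, "salesorderheader") %>%
    dplyr::rename(
      order_date = orderdate, sub_total_amount = subtotal,
      tax_amount = taxamt, freight_amount = freight,
      total_due_amount = totaldue
    )
}
# every downstream query then uses the analyst-friendly names
get_sales_orders(con) %>%
  dplyr::select(order_date, total_due_amount) %>%
  head()
```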
7\.3 Translating `dplyr` code to `SQL` queries
----------------------------------------------
Where did the translations we’ve shown above come from? The `show_query` function shows how `dplyr` is translating your query to the dialect of the target DBMS.
> The `show_query()` function shows you what dplyr is sending to the DBMS. It might be handy for inspecting what dplyr is doing, or for showing your code to someone who is more SQL\- than R\-literate. In general, we have used the function extensively in writing this book, but in the final product we will not use it unless there is something in the SQL or the translation process that needs to be explained.
```
salesorderheader_table %>%
dplyr::tally() %>%
dplyr::show_query()
```
```
## <SQL>
## SELECT COUNT(*) AS "n"
## FROM "salesorderheader"
```
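If you want the generated SQL as a value rather than printed output (for logging, or for pasting into another SQL tool), a sketch using `dbplyr::sql_render()`:
```
# capture the SQL that dplyr generates for the same tally
generated_sql <- dbplyr::sql_render(dplyr::tally(salesorderheader_table))
generated_sql
```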
Here is an extensive discussion of how `dplyr` code is translated into SQL:
* [https://dbplyr.tidyverse.org/articles/sql\-translation.html](https://dbplyr.tidyverse.org/articles/sql-translation.html)
If you prefer to use SQL directly, rather than `dplyr`, you can submit SQL code to the DBMS through the `DBI::dbGetQuery` function:
```
DBI::dbGetQuery(
con,
'SELECT COUNT(*) AS "n"
FROM "salesorderheader" '
)
```
```
## n
## 1 31465
```
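`dbGetQuery` also accepts parameters, which keeps values out of the SQL string itself; a sketch (the date filter is our own illustration, using RPostgres-style `$1` placeholders):
```
DBI::dbGetQuery(
  con,
  'SELECT COUNT(*) AS "n"
     FROM "salesorderheader"
    WHERE "orderdate" >= $1',
  params = list(as.Date("2013-01-01"))
)
```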
When you create a report to run repeatedly, you might want to put that query into R markdown. That way you can also execute that SQL code in a chunk with the following header:
{`sql, connection=con, output.var = "query_results"`}
```
SELECT COUNT(*) AS "n"
FROM "salesorderheader";
```
R Markdown stores that query result in a tibble, which can be printed by referring to it:
```
query_results
```
```
## n
## 1 31465
```
7\.4 Mixing dplyr and SQL
-------------------------
When dplyr finds code that it does not know how to translate into SQL, it simply passes it along to the DBMS. Therefore, you can interleave native commands that your DBMS will understand in the middle of dplyr code. Consider this example, which is derived from (Ruiz [2019](#ref-Ruiz2019)):
```
salesorderheader_table %>%
dplyr::select_at(vars(subtotal, contains("date"))) %>%
dplyr::mutate(today = now()) %>%
dplyr::show_query()
```
```
## <SQL>
## SELECT "subtotal", "orderdate", "duedate", "shipdate", "modifieddate", CURRENT_TIMESTAMP AS "today"
## FROM "salesorderheader"
```
That is native to PostgreSQL, not [ANSI standard](https://en.wikipedia.org/wiki/SQL#Interoperability_and_standardization) SQL.
Verify that it works:
```
salesorderheader_table %>%
dplyr::select_at(vars(subtotal, contains("date"))) %>%
head() %>%
dplyr::mutate(today = now()) %>%
dplyr::collect()
```
```
## # A tibble: 6 x 6
## subtotal orderdate duedate shipdate
## <dbl> <dttm> <dttm> <dttm>
## 1 20566. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 2 1294. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 3 32726. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 4 28833. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 5 419. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 6 24433. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## # … with 2 more variables: modifieddate <dttm>, today <dttm>
```
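The same pass-through behavior works for any DBMS function that `dplyr` does not recognize. As a sketch (our own example, not from the original text), PostgreSQL’s `date_trunc` can be used inside `mutate` and is handed to the server untouched:
```
salesorderheader_table %>%
  dplyr::select(salesorderid, orderdate, subtotal) %>%
  dplyr::mutate(order_month = date_trunc("month", orderdate)) %>%
  dplyr::show_query()
```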
7\.5 Examining a single table with R
------------------------------------
Dealing with a large, complex database highlights the utility of specific tools in R. We include brief examples that we find to be handy:
* Base R structure: `str`
* Printing out some of the data: `datatable`, `kable`, and `View`
* Summary statistics: `summary`
* `glimpse` in the `tibble` package, which is included in the `tidyverse`
* `skim` in the `skimr` package
### 7\.5\.1 `str` \- a base package workhorse
`str` is a workhorse function that lists variables, their types, and a sample of the first few values.
```
str(salesorderheader_tibble)
```
```
## 'data.frame': 31465 obs. of 13 variables:
## $ salesorderid : int 43659 43660 43661 43662 43663 43664 43665 43666 43667 43668 ...
## $ revisionnumber : int 8 8 8 8 8 8 8 8 8 8 ...
## $ orderdate : POSIXct, format: "2011-05-31" "2011-05-31" ...
## $ duedate : POSIXct, format: "2011-06-12" "2011-06-12" ...
## $ shipdate : POSIXct, format: "2011-06-07" "2011-06-07" ...
## $ status : int 5 5 5 5 5 5 5 5 5 5 ...
## $ onlineorderflag : logi FALSE FALSE FALSE FALSE FALSE FALSE ...
## $ purchaseordernumber: chr "PO522145787" "PO18850127500" "PO18473189620" "PO18444174044" ...
## $ accountnumber : chr "10-4020-000676" "10-4020-000117" "10-4020-000442" "10-4020-000227" ...
## $ customerid : int 29825 29672 29734 29994 29565 29898 29580 30052 29974 29614 ...
## $ salespersonid : int 279 279 282 282 276 280 283 276 277 282 ...
## $ territoryid : int 5 5 6 6 4 1 1 4 3 6 ...
## $ billtoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
```
### 7\.5\.2 Always **look** at your data with `head`, `View`, or `kable`
There is no substitute for looking at your data, and R provides several ways to browse it. The `head` function controls the number of rows that are displayed. Note that `tail` does not work against a database object. In everyday practice you would look at more than the default 6 rows, but here we wrap `head` around the data frame:
```
sqlpetr::sp_print_df(head(salesorderheader_tibble))
```
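The `kable` and `View` functions mentioned above serve the same browsing purpose; a minimal sketch (the column choice is just for readability):
```
# a simple printed table of the first rows and columns
knitr::kable(head(salesorderheader_tibble[, 1:6]))
# View(salesorderheader_tibble)  # interactive viewer in RStudio
```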
### 7\.5\.3 The `summary` function in `base`
The `base` package’s `summary` function provides basic statistics that serve a unique diagnostic purpose in this context. For example, the following output shows that:
* `salesorderid` runs from 43,659 to 75,123. The `str` output above reported 31,465 observations, and the difference between those two endpoints, plus one, is exactly 31,465, so the `salesorderid` values appear to form a gapless sequence (a sketch for checking this appears at the end of this section).
* The 27,659 NA’s in `salespersonid` match the 27,659 `TRUE` values of `onlineorderflag`, which is a good first guess that online orders are not assigned to a salesperson.
```
summary(salesorderheader_tibble)
```
```
## salesorderid revisionnumber orderdate
## Min. :43659 Min. :8.000 Min. :2011-05-31 00:00:00
## 1st Qu.:51525 1st Qu.:8.000 1st Qu.:2013-06-20 00:00:00
## Median :59391 Median :8.000 Median :2013-11-03 00:00:00
## Mean :59391 Mean :8.001 Mean :2013-08-21 12:05:04
## 3rd Qu.:67257 3rd Qu.:8.000 3rd Qu.:2014-02-28 00:00:00
## Max. :75123 Max. :9.000 Max. :2014-06-30 00:00:00
##
## duedate shipdate status
## Min. :2011-06-12 00:00:00 Min. :2011-06-07 00:00:00 Min. :5
## 1st Qu.:2013-07-02 00:00:00 1st Qu.:2013-06-27 00:00:00 1st Qu.:5
## Median :2013-11-15 00:00:00 Median :2013-11-10 00:00:00 Median :5
## Mean :2013-09-02 12:05:41 Mean :2013-08-28 12:06:06 Mean :5
## 3rd Qu.:2014-03-13 00:00:00 3rd Qu.:2014-03-08 00:00:00 3rd Qu.:5
## Max. :2014-07-12 00:00:00 Max. :2014-07-07 00:00:00 Max. :5
##
## onlineorderflag purchaseordernumber accountnumber customerid
## Mode :logical Length:31465 Length:31465 Min. :11000
## FALSE:3806 Class :character Class :character 1st Qu.:14432
## TRUE :27659 Mode :character Mode :character Median :19452
## Mean :20170
## 3rd Qu.:25994
## Max. :30118
##
## salespersonid territoryid billtoaddressid
## Min. :274.0 Min. : 1.000 Min. : 405
## 1st Qu.:277.0 1st Qu.: 4.000 1st Qu.:14080
## Median :279.0 Median : 6.000 Median :19449
## Mean :280.6 Mean : 6.091 Mean :18263
## 3rd Qu.:284.0 3rd Qu.: 9.000 3rd Qu.:24678
## Max. :290.0 Max. :10.000 Max. :29883
## NA's :27659
```
So the `summary` function is surprisingly useful as we first start to look at the table contents.
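A sketch (ours, reusing `min_id` and `max_id` from section 7.2.4) for checking the two observations listed above:
```
# IDs in the min-max range that never occur; integer(0) means no gaps
setdiff(min_id:max_id, salesorderheader_tibble$salesorderid)
# how many online orders have a missing salespersonid?
sum(is.na(salesorderheader_tibble$salespersonid) &
      salesorderheader_tibble$onlineorderflag)
```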
### 7\.5\.4 The `glimpse` function in the `tibble` package
The `tibble` package’s `glimpse` function is a more compact version of `str`:
```
tibble::glimpse(salesorderheader_tibble)
```
```
## Observations: 31,465
## Variables: 13
## $ salesorderid <int> 43659, 43660, 43661, 43662, 43663, 43664, 43665, …
## $ revisionnumber <int> 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8…
## $ orderdate <dttm> 2011-05-31, 2011-05-31, 2011-05-31, 2011-05-31, …
## $ duedate <dttm> 2011-06-12, 2011-06-12, 2011-06-12, 2011-06-12, …
## $ shipdate <dttm> 2011-06-07, 2011-06-07, 2011-06-07, 2011-06-07, …
## $ status <int> 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5…
## $ onlineorderflag <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, …
## $ purchaseordernumber <chr> "PO522145787", "PO18850127500", "PO18473189620", …
## $ accountnumber <chr> "10-4020-000676", "10-4020-000117", "10-4020-0004…
## $ customerid <int> 29825, 29672, 29734, 29994, 29565, 29898, 29580, …
## $ salespersonid <int> 279, 279, 282, 282, 276, 280, 283, 276, 277, 282,…
## $ territoryid <int> 5, 5, 6, 6, 4, 1, 1, 4, 3, 6, 1, 3, 1, 6, 2, 6, 3…
## $ billtoaddressid <int> 985, 921, 517, 482, 1073, 876, 849, 1074, 629, 52…
```
### 7\.5\.5 The `skim` function in the `skimr` package
The `skimr` package has several functions that make it easy to examine an unknown data frame and assess what it contains. It is also extensible.
```
skimr::skim(salesorderheader_tibble)
```
Table 7\.1: Data summary
| Name | salesorderheader\_tibble |
| --- | --- |
| Number of rows | 31465 |
| Number of columns | 13 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Column type frequency: | |
| character | 2 |
| logical | 1 |
| numeric | 7 |
| POSIXct | 3 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Group variables | None |
**Variable type: character**
| skim\_variable | n\_missing | complete\_rate | min | max | empty | n\_unique | whitespace |
| --- | --- | --- | --- | --- | --- | --- | --- |
| purchaseordernumber | 27659 | 0\.12 | 10 | 13 | 0 | 3806 | 0 |
| accountnumber | 0 | 1\.00 | 14 | 14 | 0 | 19119 | 0 |
**Variable type: logical**
| skim\_variable | n\_missing | complete\_rate | mean | count |
| --- | --- | --- | --- | --- |
| onlineorderflag | 0 | 1 | 0\.88 | TRU: 27659, FAL: 3806 |
**Variable type: numeric**
| skim\_variable | n\_missing | complete\_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| salesorderid | 0 | 1\.00 | 59391\.00 | 9083\.31 | 43659 | 51525 | 59391 | 67257 | 75123 | ▇▇▇▇▇ |
| revisionnumber | 0 | 1\.00 | 8\.00 | 0\.03 | 8 | 8 | 8 | 8 | 9 | ▇▁▁▁▁ |
| status | 0 | 1\.00 | 5\.00 | 0\.00 | 5 | 5 | 5 | 5 | 5 | ▁▁▇▁▁ |
| customerid | 0 | 1\.00 | 20170\.18 | 6261\.73 | 11000 | 14432 | 19452 | 25994 | 30118 | ▇▆▅▅▇ |
| salespersonid | 27659 | 0\.12 | 280\.61 | 4\.85 | 274 | 277 | 279 | 284 | 290 | ▇▅▅▂▃ |
| territoryid | 0 | 1\.00 | 6\.09 | 2\.96 | 1 | 4 | 6 | 9 | 10 | ▃▅▃▅▇ |
| billtoaddressid | 0 | 1\.00 | 18263\.15 | 8210\.07 | 405 | 14080 | 19449 | 24678 | 29883 | ▃▁▇▇▇ |
**Variable type: POSIXct**
| skim\_variable | n\_missing | complete\_rate | min | max | median | n\_unique |
| --- | --- | --- | --- | --- | --- | --- |
| orderdate | 0 | 1 | 2011\-05\-31 | 2014\-06\-30 | 2013\-11\-03 | 1124 |
| duedate | 0 | 1 | 2011\-06\-12 | 2014\-07\-12 | 2013\-11\-15 | 1124 |
| shipdate | 0 | 1 | 2011\-06\-07 | 2014\-07\-07 | 2013\-11\-10 | 1124 |
```
skimr::skim_to_wide(salesorderheader_tibble) #skimr doesn't like certain kinds of columns
```
```
## Warning: 'skimr::skim_to_wide' is deprecated.
## Use 'skim()' instead.
## See help("Deprecated")
```
Table 7\.1: Data summary
| Name | .data |
| --- | --- |
| Number of rows | 31465 |
| Number of columns | 13 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Column type frequency: | |
| character | 2 |
| logical | 1 |
| numeric | 7 |
| POSIXct | 3 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Group variables | None |
**Variable type: character**
| skim\_variable | n\_missing | complete\_rate | min | max | empty | n\_unique | whitespace |
| --- | --- | --- | --- | --- | --- | --- | --- |
| purchaseordernumber | 27659 | 0\.12 | 10 | 13 | 0 | 3806 | 0 |
| accountnumber | 0 | 1\.00 | 14 | 14 | 0 | 19119 | 0 |
**Variable type: logical**
| skim\_variable | n\_missing | complete\_rate | mean | count |
| --- | --- | --- | --- | --- |
| onlineorderflag | 0 | 1 | 0\.88 | TRU: 27659, FAL: 3806 |
**Variable type: numeric**
| skim\_variable | n\_missing | complete\_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| salesorderid | 0 | 1\.00 | 59391\.00 | 9083\.31 | 43659 | 51525 | 59391 | 67257 | 75123 | ▇▇▇▇▇ |
| revisionnumber | 0 | 1\.00 | 8\.00 | 0\.03 | 8 | 8 | 8 | 8 | 9 | ▇▁▁▁▁ |
| status | 0 | 1\.00 | 5\.00 | 0\.00 | 5 | 5 | 5 | 5 | 5 | ▁▁▇▁▁ |
| customerid | 0 | 1\.00 | 20170\.18 | 6261\.73 | 11000 | 14432 | 19452 | 25994 | 30118 | ▇▆▅▅▇ |
| salespersonid | 27659 | 0\.12 | 280\.61 | 4\.85 | 274 | 277 | 279 | 284 | 290 | ▇▅▅▂▃ |
| territoryid | 0 | 1\.00 | 6\.09 | 2\.96 | 1 | 4 | 6 | 9 | 10 | ▃▅▃▅▇ |
| billtoaddressid | 0 | 1\.00 | 18263\.15 | 8210\.07 | 405 | 14080 | 19449 | 24678 | 29883 | ▃▁▇▇▇ |
**Variable type: POSIXct**
| skim\_variable | n\_missing | complete\_rate | min | max | median | n\_unique |
| --- | --- | --- | --- | --- | --- | --- |
| orderdate | 0 | 1 | 2011\-05\-31 | 2014\-06\-30 | 2013\-11\-03 | 1124 |
| duedate | 0 | 1 | 2011\-06\-12 | 2014\-07\-12 | 2013\-11\-15 | 1124 |
| shipdate | 0 | 1 | 2011\-06\-07 | 2014\-07\-07 | 2013\-11\-10 | 1124 |
7\.6 Disconnect from the database and stop Docker
-------------------------------------------------
```
dbDisconnect(con)
# or if using the connections package, use:
# connection_close(con)
sp_docker_stop("adventureworks")
```
7\.7 Additional reading
-----------------------
* (Wickham [2018](#ref-Wickham2018))
* (Baumer [2018](#ref-Baumer2018))
### 7\.2\.1 Read the entire table
There are a few different methods of getting data from a DBMS, and we’ll explore the different ways of controlling each one of them.
`DBI::dbReadTable` will download an entire table into an R [tibble](https://tibble.tidyverse.org/).
```
salesorderheader_tibble <- DBI::dbReadTable(con, "salesorderheader")
str(salesorderheader_tibble)
```
```
## 'data.frame': 31465 obs. of 25 variables:
## $ salesorderid : int 43659 43660 43661 43662 43663 43664 43665 43666 43667 43668 ...
## $ revisionnumber : int 8 8 8 8 8 8 8 8 8 8 ...
## $ orderdate : POSIXct, format: "2011-05-31" "2011-05-31" ...
## $ duedate : POSIXct, format: "2011-06-12" "2011-06-12" ...
## $ shipdate : POSIXct, format: "2011-06-07" "2011-06-07" ...
## $ status : int 5 5 5 5 5 5 5 5 5 5 ...
## $ onlineorderflag : logi FALSE FALSE FALSE FALSE FALSE FALSE ...
## $ purchaseordernumber : chr "PO522145787" "PO18850127500" "PO18473189620" "PO18444174044" ...
## $ accountnumber : chr "10-4020-000676" "10-4020-000117" "10-4020-000442" "10-4020-000227" ...
## $ customerid : int 29825 29672 29734 29994 29565 29898 29580 30052 29974 29614 ...
## $ salespersonid : int 279 279 282 282 276 280 283 276 277 282 ...
## $ territoryid : int 5 5 6 6 4 1 1 4 3 6 ...
## $ billtoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
## $ shiptoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
## $ shipmethodid : int 5 5 5 5 5 5 5 5 5 5 ...
## $ creditcardid : int 16281 5618 1346 10456 4322 806 15232 13349 10370 1566 ...
## $ creditcardapprovalcode: chr "105041Vi84182" "115213Vi29411" "85274Vi6854" "125295Vi53935" ...
## $ currencyrateid : int NA NA 4 4 NA NA NA NA NA 4 ...
## $ subtotal : num 20566 1294 32726 28833 419 ...
## $ taxamt : num 1971.5 124.2 3153.8 2775.2 40.3 ...
## $ freight : num 616.1 38.8 985.6 867.2 12.6 ...
## $ totaldue : num 23153 1457 36866 32475 472 ...
## $ comment : chr NA NA NA NA ...
## $ rowguid : chr "79b65321-39ca-4115-9cba-8fe0903e12e6" "738dc42d-d03b-48a1-9822-f95a67ea7389" "d91b9131-18a4-4a11-bc3a-90b6f53e9d74" "4a1ecfc0-cc3a-4740-b028-1c50bb48711c" ...
## $ modifieddate : POSIXct, format: "2011-06-07" "2011-06-07" ...
```
That’s very simple, but if the table is very large it may not be a problem, since R is designed to keep the entire table in memory. The tables that are found in an enterprise database such as `adventureworks` may be large, they are most often records kept by people. That somewhat limits their size (relative to data generated by machines) and expands the possibilities for human error.
Note that the first line of the str() output reports the total number of observations.
Later on we’ll use this tibble to demonstrate several packages and functions, but use only the first 13 columns for simplicity.
```
salesorderheader_tibble <- salesorderheader_tibble[,1:13]
```
### 7\.2\.2 Create a pointer to a table that can be reused
The `dplyr::tbl` function gives us more control over access to a table by enabling control over which columns and rows to download. It creates an object that might **look** like a data frame, but it’s actually a list object that `dplyr` uses for constructing queries and retrieving data from the DBMS.
```
salesorderheader_table <- dplyr::tbl(con, "salesorderheader")
class(salesorderheader_table)
```
```
## [1] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy"
## [5] "tbl"
```
### 7\.2\.3 Controlling the number of rows returned with `collect()`
The `collect` function triggers the creation of a tibble and controls the number of rows that the DBMS sends to R. For more complex queries, the `dplyr::collect()` function provides a mechanism to indicate what’s processed on on the DBMS server and what’s processed by R on the local machine. The chapter on [Lazy Evaluation and Execution Environment](chapter-lazy-evaluation-and-timing.html#chapter_lazy-evaluation-and-timing) discusses this issue in detail.
```
salesorderheader_table %>% dplyr::collect(n = 3) %>% dim()
```
```
## [1] 3 25
```
```
salesorderheader_table %>% dplyr::collect(n = 500) %>% dim()
```
```
## [1] 500 25
```
### 7\.2\.4 Retrieving random rows from the DBMS
When the DBMS contains many rows, a sample of the data may be plenty for your purposes. Although `dplyr` has nice functions to sample a data frame that’s already in R (e.g., the `sample_n` and `sample_frac` functions), to get a sample from the DBMS we have to use `dbGetQuery` to send native SQL to the database. To peek ahead, here is one example of a query that retrieves 20 rows from a 1% sample:
```
one_percent_sample <- DBI::dbGetQuery(
con,
"SELECT orderdate, subtotal, taxamt, freight, totaldue
FROM salesorderheader TABLESAMPLE BERNOULLI(3) LIMIT 20;
"
)
one_percent_sample
```
```
## orderdate subtotal taxamt freight totaldue
## 1 2011-06-22 699.0982 55.9279 17.4775 772.5036
## 2 2011-06-25 3578.2700 286.2616 89.4568 3953.9884
## 3 2011-06-29 3374.9900 269.9992 84.3748 3729.3640
## 4 2011-06-30 3578.2700 286.2616 89.4568 3953.9884
## 5 2011-07-01 32492.6040 3118.7048 974.5952 36585.9040
## 6 2011-07-03 3578.2700 286.2616 89.4568 3953.9884
## 7 2011-07-22 3578.2700 286.2616 89.4568 3953.9884
## 8 2011-08-01 2039.9940 195.8394 61.1998 2297.0332
## 9 2011-08-01 1362.3067 130.1463 40.6707 1533.1237
## 10 2011-08-07 3578.2700 286.2616 89.4568 3953.9884
## 11 2011-08-07 3578.2700 286.2616 89.4568 3953.9884
## 12 2011-08-14 3578.2700 286.2616 89.4568 3953.9884
## 13 2011-09-06 3578.2700 286.2616 89.4568 3953.9884
## 14 2011-09-08 3374.9900 269.9992 84.3748 3729.3640
## 15 2011-09-08 699.0982 55.9279 17.4775 772.5036
## 16 2011-09-10 3578.2700 286.2616 89.4568 3953.9884
## 17 2011-09-11 3578.2700 286.2616 89.4568 3953.9884
## 18 2011-09-12 3578.2700 286.2616 89.4568 3953.9884
## 19 2011-09-19 3578.2700 286.2616 89.4568 3953.9884
## 20 2011-10-01 35651.0339 3424.4400 1070.1375 40145.6114
```
**Exact sample of 100 records**
This technique depends on knowing the range of a record index, such as the `businessentityid` in the `salesorderheader` table of our `adventureworks` database.
Start by finding the min and max values.
```
DBI::dbListFields(con, "salesorderheader")
```
```
## [1] "salesorderid" "revisionnumber" "orderdate"
## [4] "duedate" "shipdate" "status"
## [7] "onlineorderflag" "purchaseordernumber" "accountnumber"
## [10] "customerid" "salespersonid" "territoryid"
## [13] "billtoaddressid" "shiptoaddressid" "shipmethodid"
## [16] "creditcardid" "creditcardapprovalcode" "currencyrateid"
## [19] "subtotal" "taxamt" "freight"
## [22] "totaldue" "comment" "rowguid"
## [25] "modifieddate"
```
```
salesorderheader_df <- DBI::dbReadTable(con, "salesorderheader")
(max_id <- max(salesorderheader_df$salesorderid))
```
```
## [1] 75123
```
```
(min_id <- min(salesorderheader_df$salesorderid))
```
```
## [1] 43659
```
Set the random number seed and draw the sample.
```
set.seed(123)
sample_rows <- sample(1:max(salesorderheader_df$salesorderid), 10)
salesorderheader_table <- dplyr::tbl(con, "salesorderheader")
```
Run query with the filter verb listing the randomly sampled rows to be retrieved:
```
salesorderheader_sample <- salesorderheader_table %>%
dplyr::filter(salesorderid %in% sample_rows) %>%
dplyr::collect()
str(salesorderheader_sample)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 7 obs. of 25 variables:
## $ salesorderid : int 45404 46435 51663 57870 62555 65161 68293
## $ revisionnumber : int 8 8 8 8 8 8 8
## $ orderdate : POSIXct, format: "2012-01-10" "2012-05-06" ...
## $ duedate : POSIXct, format: "2012-01-22" "2012-05-18" ...
## $ shipdate : POSIXct, format: "2012-01-17" "2012-05-13" ...
## $ status : int 5 5 5 5 5 5 5
## $ onlineorderflag : logi TRUE TRUE TRUE TRUE TRUE FALSE ...
## $ purchaseordernumber : chr NA NA NA NA ...
## $ accountnumber : chr "10-4030-011217" "10-4030-012251" "10-4030-016327" "10-4030-018572" ...
## $ customerid : int 11217 12251 16327 18572 13483 29799 13239
## $ salespersonid : int NA NA NA NA NA 281 NA
## $ territoryid : int 1 9 8 4 1 4 6
## $ billtoaddressid : int 19321 24859 19265 16902 15267 997 27923
## $ shiptoaddressid : int 19321 24859 19265 16902 15267 997 27923
## $ shipmethodid : int 1 1 1 1 1 5 1
## $ creditcardid : int 8241 13188 16357 1884 4409 12582 1529
## $ creditcardapprovalcode: chr "332581Vi42712" "635144Vi68383" "420152Vi84562" "1224478Vi9772" ...
## $ currencyrateid : int NA 4121 NA NA NA NA 11581
## $ subtotal : num 3578 3375 2466 14 57 ...
## $ taxamt : num 286.26 270 197.31 1.12 4.56 ...
## $ freight : num 89.457 84.375 61.658 0.349 1.424 ...
## $ totaldue : num 3954 3729.4 2725.3 15.4 63 ...
## $ comment : chr NA NA NA NA ...
## $ rowguid : chr "358f91b2-dadd-4014-8d4f-7f9736cb664e" "eb312409-fcd5-4bac-bd3b-16d4bd7889db" "ddc60552-af98-4166-9249-d09d424d8430" "fe46e631-47b9-4e14-9da5-1e4a4a135364" ...
## $ modifieddate : POSIXct, format: "2012-01-17" "2012-05-13" ...
```
### 7\.2\.5 Sub\-setting variables
A table in the DBMS may not only have many more rows than you want, but also many more columns. The `select` command controls which columns are retrieved.
```
salesorderheader_table %>% dplyr::select(orderdate, subtotal, taxamt, freight, totaldue) %>%
head()
```
```
## # Source: lazy query [?? x 5]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## orderdate subtotal taxamt freight totaldue
## <dttm> <dbl> <dbl> <dbl> <dbl>
## 1 2011-05-31 00:00:00 20566. 1972. 616. 23153.
## 2 2011-05-31 00:00:00 1294. 124. 38.8 1457.
## 3 2011-05-31 00:00:00 32726. 3154. 986. 36866.
## 4 2011-05-31 00:00:00 28833. 2775. 867. 32475.
## 5 2011-05-31 00:00:00 419. 40.3 12.6 472.
## 6 2011-05-31 00:00:00 24433. 2345. 733. 27510.
```
That’s exactly equivalent to submitting the following SQL commands directly:
```
DBI::dbGetQuery(
con,
'SELECT "orderdate", "subtotal", "taxamt", "freight", "totaldue"
FROM "salesorderheader"
LIMIT 6')
```
```
## orderdate subtotal taxamt freight totaldue
## 1 2011-05-31 20565.6206 1971.5149 616.0984 23153.2339
## 2 2011-05-31 1294.2529 124.2483 38.8276 1457.3288
## 3 2011-05-31 32726.4786 3153.7696 985.5530 36865.8012
## 4 2011-05-31 28832.5289 2775.1646 867.2389 32474.9324
## 5 2011-05-31 419.4589 40.2681 12.5838 472.3108
## 6 2011-05-31 24432.6088 2344.9921 732.8100 27510.4109
```
We won’t discuss `dplyr` methods for sub\-setting variables, deriving new ones, or sub\-setting rows based on the values found in the table, because they are covered well in other places, including:
* Comprehensive reference: <https://dplyr.tidyverse.org/>
* Good tutorial: <https://suzan.rbind.io/tags/dplyr/>
In practice we find that, **renaming variables** is often quite important because the names in an SQL database might not meet your needs as an analyst. In “the wild”, you will find names that are ambiguous or overly specified, with spaces in them, and other problems that will make them difficult to use in R. It is good practice to do whatever renaming you are going to do in a predictable place like at the top of your code. The names in the `adventureworks` database are simple and clear, but if they were not, you might rename them for subsequent use in this way:
```
tbl(con, "salesorderheader") %>%
dplyr::rename(order_date = orderdate, sub_total_amount = subtotal,
tax_amount = taxamt, freight_amount = freight, total_due_amount = totaldue) %>%
dplyr::select(order_date, sub_total_amount, tax_amount, freight_amount, total_due_amount ) %>%
show_query()
```
```
## <SQL>
## SELECT "orderdate" AS "order_date", "subtotal" AS "sub_total_amount", "taxamt" AS "tax_amount", "freight" AS "freight_amount", "totaldue" AS "total_due_amount"
## FROM "salesorderheader"
```
That’s equivalent to the following SQL code:
```
DBI::dbGetQuery(
con,
'SELECT "orderdate" AS "order_date",
"subtotal" AS "sub_total_amount",
"taxamt" AS "tax_amount",
"freight" AS "freight_amount",
"totaldue" AS "total_due_amount"
FROM "salesorderheader"' ) %>%
head()
```
```
## order_date sub_total_amount tax_amount freight_amount total_due_amount
## 1 2011-05-31 20565.6206 1971.5149 616.0984 23153.2339
## 2 2011-05-31 1294.2529 124.2483 38.8276 1457.3288
## 3 2011-05-31 32726.4786 3153.7696 985.5530 36865.8012
## 4 2011-05-31 28832.5289 2775.1646 867.2389 32474.9324
## 5 2011-05-31 419.4589 40.2681 12.5838 472.3108
## 6 2011-05-31 24432.6088 2344.9921 732.8100 27510.4109
```
The one difference is that the `SQL` code returns a regular data frame and the `dplyr` code returns a `tibble`. Notice that the seconds are grayed out in the `tibble` display.
### 7\.2\.1 Read the entire table
There are a few different methods of getting data from a DBMS, and we’ll explore the different ways of controlling each one of them.
`DBI::dbReadTable` will download an entire table into an R [tibble](https://tibble.tidyverse.org/).
```
salesorderheader_tibble <- DBI::dbReadTable(con, "salesorderheader")
str(salesorderheader_tibble)
```
```
## 'data.frame': 31465 obs. of 25 variables:
## $ salesorderid : int 43659 43660 43661 43662 43663 43664 43665 43666 43667 43668 ...
## $ revisionnumber : int 8 8 8 8 8 8 8 8 8 8 ...
## $ orderdate : POSIXct, format: "2011-05-31" "2011-05-31" ...
## $ duedate : POSIXct, format: "2011-06-12" "2011-06-12" ...
## $ shipdate : POSIXct, format: "2011-06-07" "2011-06-07" ...
## $ status : int 5 5 5 5 5 5 5 5 5 5 ...
## $ onlineorderflag : logi FALSE FALSE FALSE FALSE FALSE FALSE ...
## $ purchaseordernumber : chr "PO522145787" "PO18850127500" "PO18473189620" "PO18444174044" ...
## $ accountnumber : chr "10-4020-000676" "10-4020-000117" "10-4020-000442" "10-4020-000227" ...
## $ customerid : int 29825 29672 29734 29994 29565 29898 29580 30052 29974 29614 ...
## $ salespersonid : int 279 279 282 282 276 280 283 276 277 282 ...
## $ territoryid : int 5 5 6 6 4 1 1 4 3 6 ...
## $ billtoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
## $ shiptoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
## $ shipmethodid : int 5 5 5 5 5 5 5 5 5 5 ...
## $ creditcardid : int 16281 5618 1346 10456 4322 806 15232 13349 10370 1566 ...
## $ creditcardapprovalcode: chr "105041Vi84182" "115213Vi29411" "85274Vi6854" "125295Vi53935" ...
## $ currencyrateid : int NA NA 4 4 NA NA NA NA NA 4 ...
## $ subtotal : num 20566 1294 32726 28833 419 ...
## $ taxamt : num 1971.5 124.2 3153.8 2775.2 40.3 ...
## $ freight : num 616.1 38.8 985.6 867.2 12.6 ...
## $ totaldue : num 23153 1457 36866 32475 472 ...
## $ comment : chr NA NA NA NA ...
## $ rowguid : chr "79b65321-39ca-4115-9cba-8fe0903e12e6" "738dc42d-d03b-48a1-9822-f95a67ea7389" "d91b9131-18a4-4a11-bc3a-90b6f53e9d74" "4a1ecfc0-cc3a-4740-b028-1c50bb48711c" ...
## $ modifieddate : POSIXct, format: "2011-06-07" "2011-06-07" ...
```
That’s very simple, but if the table is very large it may not be a problem, since R is designed to keep the entire table in memory. The tables that are found in an enterprise database such as `adventureworks` may be large, they are most often records kept by people. That somewhat limits their size (relative to data generated by machines) and expands the possibilities for human error.
Note that the first line of the str() output reports the total number of observations.
Later on we’ll use this tibble to demonstrate several packages and functions, but use only the first 13 columns for simplicity.
```
salesorderheader_tibble <- salesorderheader_tibble[,1:13]
```
### 7\.2\.2 Create a pointer to a table that can be reused
The `dplyr::tbl` function gives us more control over access to a table by enabling control over which columns and rows to download. It creates an object that might **look** like a data frame, but it’s actually a list object that `dplyr` uses for constructing queries and retrieving data from the DBMS.
```
salesorderheader_table <- dplyr::tbl(con, "salesorderheader")
class(salesorderheader_table)
```
```
## [1] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy"
## [5] "tbl"
```
### 7\.2\.3 Controlling the number of rows returned with `collect()`
The `collect` function triggers the creation of a tibble and controls the number of rows that the DBMS sends to R. For more complex queries, the `dplyr::collect()` function provides a mechanism to indicate what’s processed on on the DBMS server and what’s processed by R on the local machine. The chapter on [Lazy Evaluation and Execution Environment](chapter-lazy-evaluation-and-timing.html#chapter_lazy-evaluation-and-timing) discusses this issue in detail.
```
salesorderheader_table %>% dplyr::collect(n = 3) %>% dim()
```
```
## [1] 3 25
```
```
salesorderheader_table %>% dplyr::collect(n = 500) %>% dim()
```
```
## [1] 500 25
```
### 7\.2\.4 Retrieving random rows from the DBMS
When the DBMS contains many rows, a sample of the data may be plenty for your purposes. Although `dplyr` has nice functions to sample a data frame that’s already in R (e.g., the `sample_n` and `sample_frac` functions), to get a sample from the DBMS we have to use `dbGetQuery` to send native SQL to the database. To peek ahead, here is one example of a query that retrieves 20 rows from a 1% sample:
```
one_percent_sample <- DBI::dbGetQuery(
con,
"SELECT orderdate, subtotal, taxamt, freight, totaldue
FROM salesorderheader TABLESAMPLE BERNOULLI(3) LIMIT 20;
"
)
one_percent_sample
```
```
## orderdate subtotal taxamt freight totaldue
## 1 2011-06-22 699.0982 55.9279 17.4775 772.5036
## 2 2011-06-25 3578.2700 286.2616 89.4568 3953.9884
## 3 2011-06-29 3374.9900 269.9992 84.3748 3729.3640
## 4 2011-06-30 3578.2700 286.2616 89.4568 3953.9884
## 5 2011-07-01 32492.6040 3118.7048 974.5952 36585.9040
## 6 2011-07-03 3578.2700 286.2616 89.4568 3953.9884
## 7 2011-07-22 3578.2700 286.2616 89.4568 3953.9884
## 8 2011-08-01 2039.9940 195.8394 61.1998 2297.0332
## 9 2011-08-01 1362.3067 130.1463 40.6707 1533.1237
## 10 2011-08-07 3578.2700 286.2616 89.4568 3953.9884
## 11 2011-08-07 3578.2700 286.2616 89.4568 3953.9884
## 12 2011-08-14 3578.2700 286.2616 89.4568 3953.9884
## 13 2011-09-06 3578.2700 286.2616 89.4568 3953.9884
## 14 2011-09-08 3374.9900 269.9992 84.3748 3729.3640
## 15 2011-09-08 699.0982 55.9279 17.4775 772.5036
## 16 2011-09-10 3578.2700 286.2616 89.4568 3953.9884
## 17 2011-09-11 3578.2700 286.2616 89.4568 3953.9884
## 18 2011-09-12 3578.2700 286.2616 89.4568 3953.9884
## 19 2011-09-19 3578.2700 286.2616 89.4568 3953.9884
## 20 2011-10-01 35651.0339 3424.4400 1070.1375 40145.6114
```
**Exact sample of 100 records**
This technique depends on knowing the range of a record index, such as the `businessentityid` in the `salesorderheader` table of our `adventureworks` database.
Start by finding the min and max values.
```
DBI::dbListFields(con, "salesorderheader")
```
```
## [1] "salesorderid" "revisionnumber" "orderdate"
## [4] "duedate" "shipdate" "status"
## [7] "onlineorderflag" "purchaseordernumber" "accountnumber"
## [10] "customerid" "salespersonid" "territoryid"
## [13] "billtoaddressid" "shiptoaddressid" "shipmethodid"
## [16] "creditcardid" "creditcardapprovalcode" "currencyrateid"
## [19] "subtotal" "taxamt" "freight"
## [22] "totaldue" "comment" "rowguid"
## [25] "modifieddate"
```
```
salesorderheader_df <- DBI::dbReadTable(con, "salesorderheader")
(max_id <- max(salesorderheader_df$salesorderid))
```
```
## [1] 75123
```
```
(min_id <- min(salesorderheader_df$salesorderid))
```
```
## [1] 43659
```
Set the random number seed and draw the sample.
```
set.seed(123)
sample_rows <- sample(1:max(salesorderheader_df$salesorderid), 10)
salesorderheader_table <- dplyr::tbl(con, "salesorderheader")
```
Run query with the filter verb listing the randomly sampled rows to be retrieved:
```
salesorderheader_sample <- salesorderheader_table %>%
dplyr::filter(salesorderid %in% sample_rows) %>%
dplyr::collect()
str(salesorderheader_sample)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 7 obs. of 25 variables:
## $ salesorderid : int 45404 46435 51663 57870 62555 65161 68293
## $ revisionnumber : int 8 8 8 8 8 8 8
## $ orderdate : POSIXct, format: "2012-01-10" "2012-05-06" ...
## $ duedate : POSIXct, format: "2012-01-22" "2012-05-18" ...
## $ shipdate : POSIXct, format: "2012-01-17" "2012-05-13" ...
## $ status : int 5 5 5 5 5 5 5
## $ onlineorderflag : logi TRUE TRUE TRUE TRUE TRUE FALSE ...
## $ purchaseordernumber : chr NA NA NA NA ...
## $ accountnumber : chr "10-4030-011217" "10-4030-012251" "10-4030-016327" "10-4030-018572" ...
## $ customerid : int 11217 12251 16327 18572 13483 29799 13239
## $ salespersonid : int NA NA NA NA NA 281 NA
## $ territoryid : int 1 9 8 4 1 4 6
## $ billtoaddressid : int 19321 24859 19265 16902 15267 997 27923
## $ shiptoaddressid : int 19321 24859 19265 16902 15267 997 27923
## $ shipmethodid : int 1 1 1 1 1 5 1
## $ creditcardid : int 8241 13188 16357 1884 4409 12582 1529
## $ creditcardapprovalcode: chr "332581Vi42712" "635144Vi68383" "420152Vi84562" "1224478Vi9772" ...
## $ currencyrateid : int NA 4121 NA NA NA NA 11581
## $ subtotal : num 3578 3375 2466 14 57 ...
## $ taxamt : num 286.26 270 197.31 1.12 4.56 ...
## $ freight : num 89.457 84.375 61.658 0.349 1.424 ...
## $ totaldue : num 3954 3729.4 2725.3 15.4 63 ...
## $ comment : chr NA NA NA NA ...
## $ rowguid : chr "358f91b2-dadd-4014-8d4f-7f9736cb664e" "eb312409-fcd5-4bac-bd3b-16d4bd7889db" "ddc60552-af98-4166-9249-d09d424d8430" "fe46e631-47b9-4e14-9da5-1e4a4a135364" ...
## $ modifieddate : POSIXct, format: "2012-01-17" "2012-05-13" ...
```
### 7\.2\.5 Sub\-setting variables
A table in the DBMS may not only have many more rows than you want, but also many more columns. The `select` command controls which columns are retrieved.
```
salesorderheader_table %>% dplyr::select(orderdate, subtotal, taxamt, freight, totaldue) %>%
head()
```
```
## # Source: lazy query [?? x 5]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## orderdate subtotal taxamt freight totaldue
## <dttm> <dbl> <dbl> <dbl> <dbl>
## 1 2011-05-31 00:00:00 20566. 1972. 616. 23153.
## 2 2011-05-31 00:00:00 1294. 124. 38.8 1457.
## 3 2011-05-31 00:00:00 32726. 3154. 986. 36866.
## 4 2011-05-31 00:00:00 28833. 2775. 867. 32475.
## 5 2011-05-31 00:00:00 419. 40.3 12.6 472.
## 6 2011-05-31 00:00:00 24433. 2345. 733. 27510.
```
That’s exactly equivalent to submitting the following SQL commands directly:
```
DBI::dbGetQuery(
con,
'SELECT "orderdate", "subtotal", "taxamt", "freight", "totaldue"
FROM "salesorderheader"
LIMIT 6')
```
```
## orderdate subtotal taxamt freight totaldue
## 1 2011-05-31 20565.6206 1971.5149 616.0984 23153.2339
## 2 2011-05-31 1294.2529 124.2483 38.8276 1457.3288
## 3 2011-05-31 32726.4786 3153.7696 985.5530 36865.8012
## 4 2011-05-31 28832.5289 2775.1646 867.2389 32474.9324
## 5 2011-05-31 419.4589 40.2681 12.5838 472.3108
## 6 2011-05-31 24432.6088 2344.9921 732.8100 27510.4109
```
We won’t discuss `dplyr` methods for sub\-setting variables, deriving new ones, or sub\-setting rows based on the values found in the table, because they are covered well in other places, including:
* Comprehensive reference: <https://dplyr.tidyverse.org/>
* Good tutorial: <https://suzan.rbind.io/tags/dplyr/>
In practice we find that, **renaming variables** is often quite important because the names in an SQL database might not meet your needs as an analyst. In “the wild”, you will find names that are ambiguous or overly specified, with spaces in them, and other problems that will make them difficult to use in R. It is good practice to do whatever renaming you are going to do in a predictable place like at the top of your code. The names in the `adventureworks` database are simple and clear, but if they were not, you might rename them for subsequent use in this way:
```
tbl(con, "salesorderheader") %>%
dplyr::rename(order_date = orderdate, sub_total_amount = subtotal,
tax_amount = taxamt, freight_amount = freight, total_due_amount = totaldue) %>%
dplyr::select(order_date, sub_total_amount, tax_amount, freight_amount, total_due_amount ) %>%
show_query()
```
```
## <SQL>
## SELECT "orderdate" AS "order_date", "subtotal" AS "sub_total_amount", "taxamt" AS "tax_amount", "freight" AS "freight_amount", "totaldue" AS "total_due_amount"
## FROM "salesorderheader"
```
That’s equivalent to the following SQL code:
```
DBI::dbGetQuery(
con,
'SELECT "orderdate" AS "order_date",
"subtotal" AS "sub_total_amount",
"taxamt" AS "tax_amount",
"freight" AS "freight_amount",
"totaldue" AS "total_due_amount"
FROM "salesorderheader"' ) %>%
head()
```
```
## order_date sub_total_amount tax_amount freight_amount total_due_amount
## 1 2011-05-31 20565.6206 1971.5149 616.0984 23153.2339
## 2 2011-05-31 1294.2529 124.2483 38.8276 1457.3288
## 3 2011-05-31 32726.4786 3153.7696 985.5530 36865.8012
## 4 2011-05-31 28832.5289 2775.1646 867.2389 32474.9324
## 5 2011-05-31 419.4589 40.2681 12.5838 472.3108
## 6 2011-05-31 24432.6088 2344.9921 732.8100 27510.4109
```
The one difference is that the `SQL` code returns a regular data frame and the `dplyr` code returns a `tibble`. Notice that the seconds are grayed out in the `tibble` display.
7\.3 Translating `dplyr` code to `SQL` queries
----------------------------------------------
Where did the translations we’ve shown above come from? The `show_query` function shows how `dplyr` is translating your query to the dialect of the target DBMS.
> The `show_query()` function shows you what dplyr is sending to the DBMS. It might be handy for inspecting what dplyr is doing or for showing your code to someone who is more SQL\- than R\-literate. In general we have used the function extensively in writing this book but in the final product we will not use it unless there is something in the SQL or the translation process that needs to be explained.
```
salesorderheader_table %>%
dplyr::tally() %>%
dplyr::show_query()
```
```
## <SQL>
## SELECT COUNT(*) AS "n"
## FROM "salesorderheader"
```
Here is an extensive discussion of how `dplyr` code is translated into SQL:
* [https://dbplyr.tidyverse.org/articles/sql\-translation.html](https://dbplyr.tidyverse.org/articles/sql-translation.html)
If you prefer to use SQL directly, rather than `dplyr`, you can submit SQL code to the DBMS through the `DBI::dbGetQuery` function:
```
DBI::dbGetQuery(
con,
'SELECT COUNT(*) AS "n"
FROM "salesorderheader" '
)
```
```
## n
## 1 31465
```
When you create a report to run repeatedly, you might want to put that query into R markdown. That way you can also execute that SQL code in a chunk with the following header:
{`sql, connection=con, output.var = "query_results"`}
```
SELECT COUNT(*) AS "n"
FROM "salesorderheader";
```
R markdown stores that query result in a tibble which can be printed by referring to it:
```
query_results
```
```
## n
## 1 31465
```
7\.4 Mixing dplyr and SQL
-------------------------
When dplyr finds code that it does not know how to translate into SQL, it will simply pass it along to the DBMS. Therefore you can interleave native commands that your DBMS will understand in the middle of dplyr code. Consider this example that’s derived from (Ruiz [2019](#ref-Ruiz2019)):
```
salesorderheader_table %>%
dplyr::select_at(vars(subtotal, contains("date"))) %>%
dplyr::mutate(today = now()) %>%
dplyr::show_query()
```
```
## <SQL>
## SELECT "subtotal", "orderdate", "duedate", "shipdate", "modifieddate", CURRENT_TIMESTAMP AS "today"
## FROM "salesorderheader"
```
That is native to PostgreSQL, not [ANSI standard](https://en.wikipedia.org/wiki/SQL#Interoperability_and_standardization) SQL.
Verify that it works:
```
salesorderheader_table %>%
dplyr::select_at(vars(subtotal, contains("date"))) %>%
head() %>%
dplyr::mutate(today = now()) %>%
dplyr::collect()
```
```
## # A tibble: 6 x 6
## subtotal orderdate duedate shipdate
## <dbl> <dttm> <dttm> <dttm>
## 1 20566. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 2 1294. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 3 32726. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 4 28833. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 5 419. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## 6 24433. 2011-05-31 00:00:00 2011-06-12 00:00:00 2011-06-07 00:00:00
## # … with 2 more variables: modifieddate <dttm>, today <dttm>
```
7\.5 Examining a single table with R
------------------------------------
Dealing with a large, complex database highlights the utility of specific tools in R. We include brief examples that we find to be handy:
* Base R structure: `str`
* Printing out some of the data: `datatable`, `kable`, and `View`
* Summary statistics: `summary`
* `glimpse` in the `tibble` package, which is included in the `tidyverse`
* `skim` in the `skimr` package
### 7\.5\.1 `str` \- a base package workhorse
`str` is a workhorse function that lists variables, their type and a sample of the first few variable values.
```
str(salesorderheader_tibble)
```
```
## 'data.frame': 31465 obs. of 13 variables:
## $ salesorderid : int 43659 43660 43661 43662 43663 43664 43665 43666 43667 43668 ...
## $ revisionnumber : int 8 8 8 8 8 8 8 8 8 8 ...
## $ orderdate : POSIXct, format: "2011-05-31" "2011-05-31" ...
## $ duedate : POSIXct, format: "2011-06-12" "2011-06-12" ...
## $ shipdate : POSIXct, format: "2011-06-07" "2011-06-07" ...
## $ status : int 5 5 5 5 5 5 5 5 5 5 ...
## $ onlineorderflag : logi FALSE FALSE FALSE FALSE FALSE FALSE ...
## $ purchaseordernumber: chr "PO522145787" "PO18850127500" "PO18473189620" "PO18444174044" ...
## $ accountnumber : chr "10-4020-000676" "10-4020-000117" "10-4020-000442" "10-4020-000227" ...
## $ customerid : int 29825 29672 29734 29994 29565 29898 29580 30052 29974 29614 ...
## $ salespersonid : int 279 279 282 282 276 280 283 276 277 282 ...
## $ territoryid : int 5 5 6 6 4 1 1 4 3 6 ...
## $ billtoaddressid : int 985 921 517 482 1073 876 849 1074 629 529 ...
```
### 7\.5\.2 Always **look** at your data with `head`, `View`, or `kable`
There is no substitute for looking at your data and R provides several ways to just browse it. The `head` function controls the number of rows that are displayed. Note that tail does not work against a database object. In every\-day practice you would look at more than the default 6 rows, but here we wrap `head` around the data frame:
```
sqlpetr::sp_print_df(head(salesorderheader_tibble))
```
### 7\.5\.3 The `summary` function in `base`
The `base` package’s `summary` function provides basic statistics that serve a unique diagnostic purpose in this context. For example, the following output shows that:
```
* `businessentityid` is a number from 1 to 16,049. In a previous section, we ran the `str` function and saw that there are 16,044 observations in this table. Therefore, the `businessentityid` seems to be sequential from 1:16049, but there are 5 values missing from that sequence. _Exercise for the Reader_: Which 5 values from 1:16049 are missing from `businessentityid` values in the `salesorderheader` table? (_Hint_: In the chapter on SQL Joins, you will learn the functions needed to answer this question.)
* The number of NA's in the `return_date` column is a good first guess as to the number of DVDs rented out or lost as of 2005-09-02 02:35:22.
```
```
summary(salesorderheader_tibble)
```
```
## salesorderid revisionnumber orderdate
## Min. :43659 Min. :8.000 Min. :2011-05-31 00:00:00
## 1st Qu.:51525 1st Qu.:8.000 1st Qu.:2013-06-20 00:00:00
## Median :59391 Median :8.000 Median :2013-11-03 00:00:00
## Mean :59391 Mean :8.001 Mean :2013-08-21 12:05:04
## 3rd Qu.:67257 3rd Qu.:8.000 3rd Qu.:2014-02-28 00:00:00
## Max. :75123 Max. :9.000 Max. :2014-06-30 00:00:00
##
## duedate shipdate status
## Min. :2011-06-12 00:00:00 Min. :2011-06-07 00:00:00 Min. :5
## 1st Qu.:2013-07-02 00:00:00 1st Qu.:2013-06-27 00:00:00 1st Qu.:5
## Median :2013-11-15 00:00:00 Median :2013-11-10 00:00:00 Median :5
## Mean :2013-09-02 12:05:41 Mean :2013-08-28 12:06:06 Mean :5
## 3rd Qu.:2014-03-13 00:00:00 3rd Qu.:2014-03-08 00:00:00 3rd Qu.:5
## Max. :2014-07-12 00:00:00 Max. :2014-07-07 00:00:00 Max. :5
##
## onlineorderflag purchaseordernumber accountnumber customerid
## Mode :logical Length:31465 Length:31465 Min. :11000
## FALSE:3806 Class :character Class :character 1st Qu.:14432
## TRUE :27659 Mode :character Mode :character Median :19452
## Mean :20170
## 3rd Qu.:25994
## Max. :30118
##
## salespersonid territoryid billtoaddressid
## Min. :274.0 Min. : 1.000 Min. : 405
## 1st Qu.:277.0 1st Qu.: 4.000 1st Qu.:14080
## Median :279.0 Median : 6.000 Median :19449
## Mean :280.6 Mean : 6.091 Mean :18263
## 3rd Qu.:284.0 3rd Qu.: 9.000 3rd Qu.:24678
## Max. :290.0 Max. :10.000 Max. :29883
## NA's :27659
```
So the `summary` function is surprisingly useful as we first start to look at the table contents.
### 7\.5\.4 The `glimpse` function in the `tibble` package
The `tibble` package’s `glimpse` function is a more compact version of `str`:
```
tibble::glimpse(salesorderheader_tibble)
```
```
## Observations: 31,465
## Variables: 13
## $ salesorderid <int> 43659, 43660, 43661, 43662, 43663, 43664, 43665, …
## $ revisionnumber <int> 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8…
## $ orderdate <dttm> 2011-05-31, 2011-05-31, 2011-05-31, 2011-05-31, …
## $ duedate <dttm> 2011-06-12, 2011-06-12, 2011-06-12, 2011-06-12, …
## $ shipdate <dttm> 2011-06-07, 2011-06-07, 2011-06-07, 2011-06-07, …
## $ status <int> 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5…
## $ onlineorderflag <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, …
## $ purchaseordernumber <chr> "PO522145787", "PO18850127500", "PO18473189620", …
## $ accountnumber <chr> "10-4020-000676", "10-4020-000117", "10-4020-0004…
## $ customerid <int> 29825, 29672, 29734, 29994, 29565, 29898, 29580, …
## $ salespersonid <int> 279, 279, 282, 282, 276, 280, 283, 276, 277, 282,…
## $ territoryid <int> 5, 5, 6, 6, 4, 1, 1, 4, 3, 6, 1, 3, 1, 6, 2, 6, 3…
## $ billtoaddressid <int> 985, 921, 517, 482, 1073, 876, 849, 1074, 629, 52…
```
### 7\.5\.5 The `skim` function in the `skimr` package
The `skimr` package has several functions that make it easy to examine an unknown data frame and assess what it contains. It is also extensible.
```
skimr::skim(salesorderheader_tibble)
```
Table 7\.1: Data summary
| Name | salesorderheader\_tibble |
| --- | --- |
| Number of rows | 31465 |
| Number of columns | 13 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Column type frequency: | |
| character | 2 |
| logical | 1 |
| numeric | 7 |
| POSIXct | 3 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Group variables | None |
**Variable type: character**
| skim\_variable | n\_missing | complete\_rate | min | max | empty | n\_unique | whitespace |
| --- | --- | --- | --- | --- | --- | --- | --- |
| purchaseordernumber | 27659 | 0\.12 | 10 | 13 | 0 | 3806 | 0 |
| accountnumber | 0 | 1\.00 | 14 | 14 | 0 | 19119 | 0 |
**Variable type: logical**
| skim\_variable | n\_missing | complete\_rate | mean | count |
| --- | --- | --- | --- | --- |
| onlineorderflag | 0 | 1 | 0\.88 | TRU: 27659, FAL: 3806 |
**Variable type: numeric**
| skim\_variable | n\_missing | complete\_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| salesorderid | 0 | 1\.00 | 59391\.00 | 9083\.31 | 43659 | 51525 | 59391 | 67257 | 75123 | ▇▇▇▇▇ |
| revisionnumber | 0 | 1\.00 | 8\.00 | 0\.03 | 8 | 8 | 8 | 8 | 9 | ▇▁▁▁▁ |
| status | 0 | 1\.00 | 5\.00 | 0\.00 | 5 | 5 | 5 | 5 | 5 | ▁▁▇▁▁ |
| customerid | 0 | 1\.00 | 20170\.18 | 6261\.73 | 11000 | 14432 | 19452 | 25994 | 30118 | ▇▆▅▅▇ |
| salespersonid | 27659 | 0\.12 | 280\.61 | 4\.85 | 274 | 277 | 279 | 284 | 290 | ▇▅▅▂▃ |
| territoryid | 0 | 1\.00 | 6\.09 | 2\.96 | 1 | 4 | 6 | 9 | 10 | ▃▅▃▅▇ |
| billtoaddressid | 0 | 1\.00 | 18263\.15 | 8210\.07 | 405 | 14080 | 19449 | 24678 | 29883 | ▃▁▇▇▇ |
**Variable type: POSIXct**
| skim\_variable | n\_missing | complete\_rate | min | max | median | n\_unique |
| --- | --- | --- | --- | --- | --- | --- |
| orderdate | 0 | 1 | 2011\-05\-31 | 2014\-06\-30 | 2013\-11\-03 | 1124 |
| duedate | 0 | 1 | 2011\-06\-12 | 2014\-07\-12 | 2013\-11\-15 | 1124 |
| shipdate | 0 | 1 | 2011\-06\-07 | 2014\-07\-07 | 2013\-11\-10 | 1124 |
```
skimr::skim_to_wide(salesorderheader_tibble) #skimr doesn't like certain kinds of columns
```
```
## Warning: 'skimr::skim_to_wide' is deprecated.
## Use 'skim()' instead.
## See help("Deprecated")
```
Table 7\.1: Data summary
| Name | .data |
| --- | --- |
| Number of rows | 31465 |
| Number of columns | 13 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Column type frequency: | |
| character | 2 |
| logical | 1 |
| numeric | 7 |
| POSIXct | 3 |
| \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ | |
| Group variables | None |
**Variable type: character**
| skim\_variable | n\_missing | complete\_rate | min | max | empty | n\_unique | whitespace |
| --- | --- | --- | --- | --- | --- | --- | --- |
| purchaseordernumber | 27659 | 0\.12 | 10 | 13 | 0 | 3806 | 0 |
| accountnumber | 0 | 1\.00 | 14 | 14 | 0 | 19119 | 0 |
**Variable type: logical**
| skim\_variable | n\_missing | complete\_rate | mean | count |
| --- | --- | --- | --- | --- |
| onlineorderflag | 0 | 1 | 0\.88 | TRU: 27659, FAL: 3806 |
**Variable type: numeric**
| skim\_variable | n\_missing | complete\_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| salesorderid | 0 | 1\.00 | 59391\.00 | 9083\.31 | 43659 | 51525 | 59391 | 67257 | 75123 | ▇▇▇▇▇ |
| revisionnumber | 0 | 1\.00 | 8\.00 | 0\.03 | 8 | 8 | 8 | 8 | 9 | ▇▁▁▁▁ |
| status | 0 | 1\.00 | 5\.00 | 0\.00 | 5 | 5 | 5 | 5 | 5 | ▁▁▇▁▁ |
| customerid | 0 | 1\.00 | 20170\.18 | 6261\.73 | 11000 | 14432 | 19452 | 25994 | 30118 | ▇▆▅▅▇ |
| salespersonid | 27659 | 0\.12 | 280\.61 | 4\.85 | 274 | 277 | 279 | 284 | 290 | ▇▅▅▂▃ |
| territoryid | 0 | 1\.00 | 6\.09 | 2\.96 | 1 | 4 | 6 | 9 | 10 | ▃▅▃▅▇ |
| billtoaddressid | 0 | 1\.00 | 18263\.15 | 8210\.07 | 405 | 14080 | 19449 | 24678 | 29883 | ▃▁▇▇▇ |
**Variable type: POSIXct**
| skim\_variable | n\_missing | complete\_rate | min | max | median | n\_unique |
| --- | --- | --- | --- | --- | --- | --- |
| orderdate | 0 | 1 | 2011\-05\-31 | 2014\-06\-30 | 2013\-11\-03 | 1124 |
| duedate | 0 | 1 | 2011\-06\-12 | 2014\-07\-12 | 2013\-11\-15 | 1124 |
| shipdate | 0 | 1 | 2011\-06\-07 | 2014\-07\-07 | 2013\-11\-10 | 1124 |
7\.6 Disconnect from the database and stop Docker
-------------------------------------------------
```
dbDisconnect(con)
# or if using the connections package, use:
# connection_close(con)
sp_docker_stop("adventureworks")
```
7\.7 Additional reading
-----------------------
* (Wickham [2018](#ref-Wickham2018))
* (Baumer [2018](#ref-Baumer2018))
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-exploring-a-single-table.html |
Chapter 8 Asking Business Questions From a Single Table
=======================================================
> This chapter explores:
>
>
> * Issues that come up when investigating a single table from a business perspective
> * The multiple data anomalies found in a single AdventureWorks table (*salesorderheader*)
> * The interplay between “data questions” and “business questions”
The previous chapter has demonstrated some of the automated techniques for showing what’s in a table using some standard R functions and packages. Now we demonstrate a step\-by\-step process of making sense of what’s in one table with more of a business perspective. We illustrate the kind of detective work that’s often involved as we investigate the *organizational meaning* of the data in a table. We’ll investigate the `salesorderheader` table in the `sales` schema in this example to understand the sales profile of the “AdventureWorks” business. We show that there are quite a few interpretation issues even when we are examining just 3 out of the 25 columns in one table.
For this kind of detective work we are seeking to understand the following elements separately and as they interact with each other:
* What data is stored in the database
* How information is represented
* How the data is entered at a day\-to\-day level to represent business activities
* How the business itself is changing over time
8\.1 Set up our standard working environment
-------------------------------------------
Use these libraries:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(connections)
library(glue)
require(knitr)
library(dbplyr)
library(sqlpetr)
library(bookdown)
library(here)
library(lubridate)
library(gt)
library(scales)
library(patchwork)
theme_set(theme_light())
```
Connect to `adventureworks`. In an interactive session we prefer to use `connections::connection_open` instead of `dbConnect`.
```
sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
con <- dbConnect(
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
port = 5432,
user = "postgres",
password = "postgres",
dbname = "adventureworks")
```
Some queries generate big integers, so we include `RPostgres::Postgres()` and `bigint = "integer"` in the connection call because some tidyverse functions object to the **bigint** data type.
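For reference, the interactive alternative mentioned above takes the same arguments as `dbConnect`; a hedged sketch of the `connections::connection_open` call (this chunk is an assumption, not part of the original text):
```
# con <- connections::connection_open(
#   RPostgres::Postgres(),
#   bigint = "integer",
#   host = "localhost",
#   port = 5432,
#   user = "postgres",
#   password = "postgres",
#   dbname = "adventureworks"
# )
```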
8\.2 A word on naming
---------------------
> You will find that many tables will have columns with the same name in an enterprise database. For example, in the *AdventureWorks* database, almost all tables have columns named `rowguid` and `modifieddate` and there are many other examples of names that are reused throughout the database. Duplicate columns are best renamed or deliberately dropped. The meaning of a column depends on the table that contains it, so when you pull a column out of a table and rename it, the column's provenance should be reflected in the new name.
>
>
> Naming columns carefully (whether retrieved from the database or calculated) will pay off, especially as our queries become more complex. Using `soh` as an abbreviation of *sales order header* to tag columns or statistics that are derived from the `salesorderheader` table, as we do in this book, is one example of an intentional naming strategy: it reminds us of the original source of the data. You, future you, and your collaborators will appreciate the effort no matter what naming convention you adopt. And a naming convention when rigidly applied can yield some long and ugly names.
>
>
> In the following example `soh` appears in different positions in the column name but it is easy to guess at a glance that the data comes from the `salesorderheader` table.
>
>
> Naming derived tables is just as important as naming derived columns.
8\.3 The overall AdventureWorks sales picture
---------------------------------------------
We begin by looking at sales on a yearly basis, then consider monthly sales. We discover that halfway through the period represented in the database, the business appears to begin selling online, which has very different characteristics from sales by Sales Reps. We then look at the details of how Sales Rep sales are recorded in the system and discover a data anomaly that we can correct.
8\.4 Annual sales
-----------------
On an annual basis, are sales dollars trending up, down or flat? We begin with annual revenue and number of orders.
```
annual_sales <- tbl(con, in_schema("sales", "salesorderheader")) %>%
mutate(year = substr(as.character(orderdate), 1, 4)) %>%
group_by(year) %>%
summarize(
min_soh_orderdate = min(orderdate, na.rm = TRUE),
max_soh_orderdate = max(orderdate, na.rm = TRUE),
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
arrange(year) %>%
select(
year, min_soh_orderdate, max_soh_orderdate, total_soh_dollars,
avg_total_soh_dollars, soh_count
) %>%
collect()
```
Note that all of this query is running on the server since the `collect()` statement is at the very end.
```
annual_sales %>% str()
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 4 obs. of 6 variables:
## $ year : chr "2011" "2012" "2013" "2014"
## $ min_soh_orderdate : POSIXct, format: "2011-05-31" "2012-01-01" ...
## $ max_soh_orderdate : POSIXct, format: "2011-12-31" "2012-12-31" ...
## $ total_soh_dollars : num 12641672 33524301 43622479 20057929
## $ avg_total_soh_dollars: num 7867 8563 3076 1705
## $ soh_count : int 1607 3915 14182 11761
```
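To confirm that the heavy lifting really happens on the server, you can ask dbplyr for the translated SQL before collecting. A simplified sketch (not a chunk from the original text):
```
# Show the SQL that dbplyr generates for a trimmed-down version of the
# annual sales query instead of executing it and collecting the result.
tbl(con, in_schema("sales", "salesorderheader")) %>%
  mutate(year = substr(as.character(orderdate), 1, 4)) %>%
  group_by(year) %>%
  summarize(total_soh_dollars = sum(subtotal, na.rm = TRUE)) %>%
  show_query()
```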
We hang on to some date information for later use in plot titles.
```
min_soh_dt <- min(annual_sales$min_soh_orderdate)
max_soh_dt <- max(annual_sales$max_soh_orderdate)
```
### 8\.4\.1 Annual summary of sales, number of transactions and average sale
```
tot_sales <- ggplot(data = annual_sales, aes(x = year, y = total_soh_dollars/1000000)) +
  geom_col() +
  geom_text(aes(label = round(as.numeric(total_soh_dollars/1000000), digits = 0)), vjust = 1.5, color = "white") +
scale_y_continuous(labels = scales::dollar_format()) +
labs(
title = "Total Sales per Year - Millions",
x = NULL,
y = "Sales $M"
)
```
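The chunks that build the other two panels used below (`num_orders` and `avg_sale`) are not included in the text as scraped. A minimal sketch, assuming the same `annual_sales` data frame and the `soh_count` and `avg_total_soh_dollars` columns shown in the `str()` output, might look like this:
```
# Number of orders per year (assumed companion panel)
num_orders <- ggplot(data = annual_sales, aes(x = year, y = soh_count)) +
  geom_col() +
  labs(
    title = "Number of Orders per Year",
    x = NULL,
    y = "Total number of orders"
  )

# Average dollars per sale (assumed companion panel)
avg_sale <- ggplot(data = annual_sales, aes(x = year, y = avg_total_soh_dollars)) +
  geom_col() +
  scale_y_continuous(labels = scales::dollar_format()) +
  labs(
    title = "Average Dollars per Sale",
    x = NULL,
    y = "Average sale amount"
  )
```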
Both 2011 and 2014 turn out to be shorter time spans than the other two years, which makes year\-to\-year comparisons difficult to interpret. Still, it’s clear that 2013 was the best year for annual sales dollars.
Comparing the number of orders per year has roughly the same overall pattern (2013 ranks highest, etc.) but the proportions between the years are quite different.
Although 2013 was the best year in terms of total number of orders, there were many more in 2014 compared with 2012\. That suggests looking at the average dollars per sale for each year.
### 8\.4\.2 Average dollars per sale
```
(tot_sales + num_orders) / avg_sale
```
Figure 8\.1: AdventureWorks sales performance
That’s a big drop: from an average sale of more than $7,000 in the first two years down to roughly $3,000 in 2013 and $1,700 in 2014\. There has been a remarkable change in this business. At the same time the total number of orders shot up from fewer than 4,000 a year to more than 14,000\. **Why is the number of orders increasing while the average dollar amount of a sale is dropping?**
Perhaps monthly sales data has the answer. We adapt the first query to group by month and year.
8\.5 Monthly Sales
------------------
Our next iteration drills down from annual sales dollars to monthly sales dollars. For that we download the orderdate as a date, rather than a character variable for the year. R handles the conversion from the PostgreSQL date\-time to an R date\-time. We then convert it to a simple date with a `lubridate` function.
The following query uses the [PostgreSQL function `date_trunc`](https://www.postgresqltutorial.com/postgresql-date_trunc/), which is roughly equivalent to `lubridate`’s `floor_date` function in R. If you want to push as much of the processing as possible onto the database server and thus possibly deal with smaller datasets in R, interleaving [PostgreSQL functions](https://www.postgresql.org/docs/current/functions.html) into your dplyr code will help.
```
monthly_sales <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal) %>%
mutate(
orderdate = date_trunc('month', orderdate)
) %>%
group_by(orderdate) %>%
summarize(
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
show_query() %>%
collect()
```
```
## <SQL>
## SELECT "orderdate", ROUND((SUM("subtotal")) :: numeric, 2) AS "total_soh_dollars", ROUND((AVG("subtotal")) :: numeric, 2) AS "avg_total_soh_dollars", COUNT(*) AS "soh_count"
## FROM (SELECT date_trunc('month', "orderdate") AS "orderdate", "subtotal"
## FROM sales.salesorderheader) "dbplyr_004"
## GROUP BY "orderdate"
```
> Note that `date_trunc('month', orderdate)` gets passed through exactly “as is.”
In many cases we don’t really care whether our queries are executed by R or by the SQL server, but when we do care we need to substitute the PostgreSQL equivalent for the R functions we might ordinarily use. In those cases we have to check whether functions from R packages like `lubridate` and the equivalent PostgreSQL functions behave exactly alike. Often they are subtly different: in the previous query the PostgreSQL function produces a `POSIXct` column, not a `Date`, so we need to tack on a `mutate` call once the data is on the R side, as shown here:
```
monthly_sales <- monthly_sales %>%
mutate(orderdate = as.Date(orderdate))
```
Next let’s plot the monthly sales data:
```
ggplot(data = monthly_sales, aes(x = orderdate, y = total_soh_dollars)) +
geom_col() +
scale_y_continuous(labels = dollar) +
theme(plot.title = element_text(hjust = 0.5)) +
labs(
title = glue("Sales by Month\n", {format(min_soh_dt, "%B %d, %Y")} , " to ",
{format(max_soh_dt, "%B %d, %Y")}),
x = "Month",
y = "Sales Dollars"
)
```
Figure 8\.2: Total Monthly Sales
That graph doesn’t show how the business might have changed, but it is remarkable how much variation there is from one month to another – particularly in 2012 and 2014\.
### 8\.5\.1 Check lagged monthly data
Because of the month\-over\-month sales variation, we’ll use `dplyr::lag` to calculate the change from one month to the next and then visualize just how much month\-to\-month difference there is.
```
monthly_sales <- arrange(monthly_sales, orderdate)
monthly_sales_lagged <- monthly_sales %>%
mutate(monthly_sales_change = (dplyr::lag(total_soh_dollars)) - total_soh_dollars)
monthly_sales_lagged[is.na(monthly_sales_lagged)] = 0
```
```
median(monthly_sales_lagged$monthly_sales_change, na.rm = TRUE)
```
```
## [1] -221690.505
```
```
(sum_lags <- summary(monthly_sales_lagged$monthly_sales_change))
```
```
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -5879806.05 -1172995.19 -221690.51 11968.42 1159252.70 5420357.17
```
The mean month\-over\-month change in sales looks OK ($11,968\) although the median is decidedly negative (\-$221,691\). There is a very wide spread in our month\-over\-month sales data between the lower and upper quartiles. We can plot the variation as follows:
```
ggplot(monthly_sales_lagged, aes(x = orderdate, y = monthly_sales_change)) +
scale_x_date(date_breaks = "year", date_labels = "%Y", date_minor_breaks = "3 months") +
geom_line() +
# geom_point() +
scale_y_continuous(limits = c(-6000000,5500000), labels = scales::dollar_format()) +
theme(plot.title = element_text(hjust = .5)) +
labs(
title = glue(
"Monthly Sales Change \n",
"Between ", {format(min_soh_dt, "%B %d, %Y")} , " and ",
{format(max_soh_dt, "%B %d, %Y")}
),
x = "Month",
y = "Dollar Change"
)
```
Figure 8\.3: Monthly Sales Change
It looks like the big change in the business occurred in the summer of 2013 when the number of orders jumped but the dollar volume just continued to bump along.
### 8\.5\.2 Comparing dollars and orders to a base year
To look at dollars and the number of orders together, we compare the monthly data to a 2011 baseline month (July 2011\).
```
baseline_month <- "2011-07-01"
start_month <- monthly_sales %>%
filter(orderdate == as.Date(baseline_month))
```
Express the monthly data relative to the 2011\-07\-01 baseline (total sales of 2044600, an average sale of 8851\.08, and 231 orders):
```
monthly_sales_base_year_normalized_to_2011 <- monthly_sales %>%
mutate(
dollars = (100 * total_soh_dollars) / start_month$total_soh_dollars,
number_of_orders = (100 * soh_count) / start_month$soh_count
) %>%
ungroup()
monthly_sales_base_year_normalized_to_2011 <- monthly_sales_base_year_normalized_to_2011 %>%
select(orderdate, dollars, `# of orders` = number_of_orders) %>%
pivot_longer(-orderdate,
names_to = "relative_to_2011_average",
values_to = "amount"
)
```
```
monthly_sales_base_year_normalized_to_2011 %>%
ggplot(aes(orderdate, amount, color = relative_to_2011_average)) +
geom_line() +
geom_hline(yintercept = 100) +
scale_x_date(date_labels = "%Y-%m", date_breaks = "6 months") +
labs(
title = glue(
"Adventureworks Normalized Monthly Sales\n",
"Number of Sales Orders and Dollar Totals\n",
{format(min_soh_dt, "%B %d, %Y")} , " to ",
{format(max_soh_dt, "%B %d, %Y")}),
x = "Date",
y = "",
color = glue(baseline_month, " values = 100")
) +
theme(legend.position = c(.3,.75))
```
Figure 8\.4: Adventureworks Normalized Monthly Sales
8\.6 The effect of online sales
-------------------------------
We suspect that the business has changed a lot with the advent of online orders so we check the impact of `onlineorderflag` on annual sales. The `onlineorderflag` indicates which sales channel accounted for the sale, **Sales Reps** or **Online**.
### 8\.6\.1 Add `onlineorderflag` to our annual sales query
```
annual_sales_w_channel <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal, onlineorderflag) %>%
collect() %>%
mutate(
orderdate = date(orderdate),
orderdate = round_date(orderdate, "month"),
onlineorderflag = if_else(onlineorderflag == FALSE,
"Sales Rep", "Online"
),
onlineorderflag = as.factor(onlineorderflag)
) %>%
group_by(orderdate, onlineorderflag) %>%
summarize(
min_soh_orderdate = min(orderdate, na.rm = TRUE),
max_soh_orderdate = max(orderdate, na.rm = TRUE),
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
select(
orderdate, onlineorderflag, min_soh_orderdate,
max_soh_orderdate, total_soh_dollars,
avg_total_soh_dollars, soh_count
)
```
Note that we are creating a factor and doing most of the calculations on the R side, not on the DBMS side.
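For comparison, the same aggregation could be pushed to the DBMS before collecting, so that only the summarized rows cross the wire. A rough sketch reusing the `date_trunc` trick from the monthly query (the details here are an assumption, not code from the original text):
```
annual_sales_w_channel_db <- tbl(con, in_schema("sales", "salesorderheader")) %>%
  mutate(orderdate = date_trunc('month', orderdate)) %>%
  group_by(orderdate, onlineorderflag) %>%
  summarize(
    total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
    avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
    soh_count = n()
  ) %>%
  collect() %>% # only the aggregated rows come back to R
  mutate(
    orderdate = as.Date(orderdate),
    onlineorderflag = if_else(onlineorderflag == FALSE, "Sales Rep", "Online"),
    onlineorderflag = as.factor(onlineorderflag)
  )
```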
### 8\.6\.2 Annual Sales comparison
Start by looking at total sales.
```
ggplot(data = annual_sales_w_channel, aes(x = orderdate, y = total_soh_dollars)) +
geom_col() +
scale_y_continuous(labels = scales::dollar_format()) +
facet_wrap("onlineorderflag") +
labs(
title = "AdventureWorks Monthly Sales",
caption = glue( "Between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
subtitle = "Comparing Online and Sales Rep sales channels",
x = "Year",
y = "Sales $"
)
```
Figure: Sales Channel Breakdown
It looks like there are two businesses represented in the AdventureWorks database that have very different growth profiles.
### 8\.6\.3 Order volume comparison
```
ggplot(data = annual_sales_w_channel, aes(x = orderdate, y = as.numeric(soh_count))) +
geom_col() +
facet_wrap("onlineorderflag") +
labs(
title = "AdventureWorks Monthly orders",
caption = glue( "Between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
subtitle = "Comparing Online and Sales Rep sales channels",
x = "Year",
y = "Total number of orders"
)
```
Figure 8\.5: AdventureWorks Monthly Orders by Channel
Comparing Online and Sales Rep sales, the difference in the number of orders is even more striking than the difference between annual sales.
### 8\.6\.4 Comparing average order size: **Sales Reps** to **Online** orders
```
ggplot(data = annual_sales_w_channel, aes(x = orderdate, y = avg_total_soh_dollars)) +
geom_col() +
facet_wrap("onlineorderflag") +
scale_y_continuous(labels = scales::dollar_format()) +
labs(
title = "AdventureWorks Average Dollars per Sale",
x = glue( "Year - between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
y = "Average sale amount"
)
```
Figure 8\.6: Average dollar per Sale comparison
8\.7 Impact of order type on monthly sales
------------------------------------------
To dig into the difference between **Sales Rep** and **Online** sales we can look at monthly data.
### 8\.7\.1 Retrieve monthly sales with the `onlineorderflag`
This query puts the `collect` statement earlier than the previous queries.
```
monthly_sales_w_channel <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal, onlineorderflag) %>%
collect() %>% # From here on we're in R
mutate(
orderdate = date(orderdate),
orderdate = floor_date(orderdate, unit = "month"),
onlineorderflag = if_else(onlineorderflag == FALSE,
"Sales Rep", "Online")
) %>%
group_by(orderdate, onlineorderflag) %>%
summarize(
min_soh_orderdate = min(orderdate, na.rm = TRUE),
max_soh_orderdate = max(orderdate, na.rm = TRUE),
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
ungroup()
```
```
monthly_sales_w_channel %>%
rename(`Sales Channel` = onlineorderflag) %>%
group_by(`Sales Channel`) %>%
summarize(
unique_dates = n(),
start_date = min(min_soh_orderdate),
end_date = max(max_soh_orderdate),
total_sales = round(sum(total_soh_dollars)),
days_span = end_date - start_date
) %>%
gt()
```
| Sales Channel | unique\_dates | start\_date | end\_date | total\_sales | days\_span |
| --- | --- | --- | --- | --- | --- |
| Online | 38 | 2011\-05\-01 | 2014\-06\-01 | 29358677 | 1127 days |
| Sales Rep | 34 | 2011\-05\-01 | 2014\-05\-01 | 80487704 | 1096 days |
As this table shows, the **Sales Rep** dates don’t match the **Online** dates. They start on the same date, but have a different end. The **Online** dates include 2 months that are not included in the Sales Rep sales (which are the main sales channel by dollar volume).
### 8\.7\.2 Monthly variation compared to a trend line
Jumping to the trend line comparison, we see that the big source of variation is on the Sales Rep side.
```
ggplot(
data = monthly_sales_w_channel,
aes(
x = orderdate, y = total_soh_dollars
)
) +
geom_line() +
geom_smooth(se = FALSE) +
facet_grid("onlineorderflag", scales = "free") +
scale_y_continuous(labels = dollar) +
scale_x_date(date_breaks = "year", date_labels = "%Y", date_minor_breaks = "3 months") +
theme(plot.title = element_text(hjust = .5)) + # Center ggplot title
labs(
title = glue(
"AdventureWorks Monthly Sales Trend"
),
x = glue( "Month - between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
y = "Sales Dollars"
)
```
Figure 8\.7: Monthly Sales Trend
The **monthly** gyrations are much larger on the Sales Rep side, amounting to differences of around a million dollars, compared with small monthly variations of around $25,000 for the Online orders.
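To put rough numbers on that spread (a quick sketch, not from the original text), we can compare the variability of monthly totals by channel:
```
# Mean and standard deviation of monthly sales dollars for each channel
monthly_sales_w_channel %>%
  group_by(onlineorderflag) %>%
  summarize(
    mean_monthly_dollars = mean(total_soh_dollars),
    sd_monthly_dollars = sd(total_soh_dollars)
  )
```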
### 8\.7\.3 Compare monthly lagged data by Sales Channel
First consider month\-to\-month change.
```
monthly_sales_w_channel_lagged_by_month <- monthly_sales_w_channel %>%
group_by(onlineorderflag) %>%
mutate(
lag_soh_count = lag(soh_count, 1),
lag_soh_total_dollars = lag(total_soh_dollars, 1),
pct_monthly_soh_dollar_change =
(total_soh_dollars - lag_soh_total_dollars) / lag_soh_total_dollars * 100,
pct_monthly_soh_count_change =
(soh_count - lag_soh_count) / lag_soh_count * 100
)
```
The following table shows some wild changes in dollar amounts and number of sales from one month to the next.
```
monthly_sales_w_channel_lagged_by_month %>%
filter(abs(pct_monthly_soh_count_change) > 150 |
abs(pct_monthly_soh_dollar_change) > 150 ) %>%
ungroup() %>%
arrange(onlineorderflag, orderdate) %>%
mutate(
total_soh_dollars = round(total_soh_dollars),
lag_soh_total_dollars = round(lag_soh_total_dollars),
pct_monthly_soh_dollar_change = round(pct_monthly_soh_dollar_change),
pct_monthly_soh_count_change = round(pct_monthly_soh_count_change)) %>%
select(orderdate, onlineorderflag, total_soh_dollars, lag_soh_total_dollars,
soh_count, lag_soh_count, pct_monthly_soh_dollar_change, pct_monthly_soh_count_change) %>%
# names()
gt() %>%
fmt_number(
columns = c(3:4), decimals = 0) %>%
fmt_percent(
columns = c(7:8), decimals = 0) %>%
cols_label(
onlineorderflag = "Channel",
total_soh_dollars = "$ this Month",
lag_soh_total_dollars = "$ last Month",
soh_count = "# this Month",
lag_soh_count = "# last Month",
pct_monthly_soh_dollar_change = "$ change",
pct_monthly_soh_count_change = "# change"
)
```
| orderdate | Channel | $ this Month | $ last Month | \# this Month | \# last Month | $ change | \# change |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2011\-06\-01 | Online | 458,911 | 14,477 | 141 | 5 | 307,000% | 272,000% |
| 2013\-07\-01 | Online | 847,139 | 860,141 | 1564 | 533 | −200% | 19,300% |
| 2011\-07\-01 | Sales Rep | 1,538,408 | 489,329 | 75 | 38 | 21,400% | 9,700% |
| 2012\-01\-01 | Sales Rep | 3,356,069 | 713,117 | 143 | 40 | 37,100% | 25,800% |
| 2012\-03\-01 | Sales Rep | 2,269,117 | 882,900 | 85 | 37 | 15,700% | 13,000% |
| 2014\-03\-01 | Sales Rep | 5,526,352 | 3,231 | 271 | 3 | 17,096,000% | 893,300% |
| 2014\-05\-01 | Sales Rep | 3,415,479 | 1,285 | 179 | 2 | 26,573,900% | 885,000% |
We suspect that the business has changed a lot with the advent of **Online** orders.
8\.8 Detect and diagnose the day of the month problem
-----------------------------------------------------
There have been several indications that Sales Rep sales are recorded once a month while Online sales are recorded on a daily basis.
### 8\.8\.1 Sales Rep Orderdate Distribution
Look at the dates when sales are entered for sales by **Sales Reps**. The following query / plot combination shows this pattern, and the exception for transactions entered on the first day of the month.
```
tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>% # Drop online orders
mutate(orderday = day(orderdate)) %>%
count(orderday, name = "Orders") %>%
collect() %>%
full_join(tibble(orderday = seq(1:31))) %>%
mutate(orderday = as.factor(orderday)) %>%
ggplot(aes(orderday, Orders)) +
geom_col() +
coord_flip() +
labs(title = "The first day of the month looks odd",
x = "Day Number")
```
```
## Joining, by = "orderday"
```
```
## Warning: Removed 26 rows containing missing values (position_stack).
```
Figure 8\.8: Days of the month with Sales Rep activity recorded
We can check on which months have orders entered on the first of the month.
```
sales_rep_day_of_month_sales <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>% # Drop online orders
select(orderdate, subtotal) %>%
mutate(
year = year(orderdate),
month = month(orderdate),
day = day(orderdate)
) %>%
count(year, month, day) %>%
collect() %>%
pivot_wider(names_from = day, values_from = n, names_prefix = "day_", values_fill = list(day_1 = 0, day_28 = 0, day_29 = 0, day_30 = 0, day_31 = 0) ) %>%
as.data.frame() %>%
select(year, month, day_1, day_28, day_29, day_30, day_31) %>%
filter(day_1 > 0) %>%
arrange(year, month)
sales_rep_day_of_month_sales
```
```
## year month day_1 day_28 day_29 day_30 day_31
## 1 2011 7 75 NA NA NA NA
## 2 2011 8 60 NA NA NA 40
## 3 2011 10 90 NA NA NA 63
## 4 2011 12 40 NA NA NA NA
## 5 2012 1 79 NA 64 NA NA
## 6 2014 3 91 NA NA 2 178
## 7 2014 5 179 NA NA NA NA
```
There are two months with multiple Sales Rep order days in 2011 (August and October), one in 2012 (January), and two in 2014 (January and March). March 2014 is the only month with Sales Rep orders entered on three different days.
Are there months where there were no sales recorded for the sales reps?
There are two approaches. The first is to generate a list of months between the beginning and end of the order history and compare that to the Sales Rep records.
```
monthly_sales_rep_sales <- monthly_sales_w_channel %>%
filter(onlineorderflag == "Sales Rep") %>%
mutate(orderdate = as.Date(floor_date(orderdate, "month"))) %>%
count(orderdate)
str(monthly_sales_rep_sales)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 34 obs. of 2 variables:
## $ orderdate: Date, format: "2011-05-01" "2011-07-01" ...
## $ n : int 1 1 1 1 1 1 1 1 1 1 ...
```
```
date_list <- tibble(month_date = seq.Date(floor_date(as.Date(min_soh_dt), "month"),
floor_date(as.Date(max_soh_dt), "month"),
by = "month"),
date_exists = FALSE)
date_list %>%
anti_join(monthly_sales_rep_sales,
by = c("month_date" = "orderdate") )
```
```
## # A tibble: 4 x 2
## month_date date_exists
## <date> <lgl>
## 1 2011-06-01 FALSE
## 2 2011-09-01 FALSE
## 3 2011-11-01 FALSE
## 4 2014-06-01 FALSE
```
* June, September, and November are missing for 2011\.
* June is missing for 2014\.
The second approach is to use the dates found in the database for online orders. Defining “complete” may not always be as simple as generating a complete list of months.
```
sales_order_header_online <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == TRUE) %>%
mutate(
orderdate = date_trunc('month', orderdate)
) %>%
count(orderdate, name = "online_count")
sales_order_header_sales_rep <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>%
mutate(
orderdate = date_trunc('month', orderdate)
) %>%
count(orderdate, name = "sales_rep_count")
missing_dates <- sales_order_header_sales_rep %>%
full_join(sales_order_header_online) %>%
show_query() %>%
collect()
```
```
## Joining, by = "orderdate"
```
```
## <SQL>
## SELECT COALESCE("LHS"."orderdate", "RHS"."orderdate") AS "orderdate", "LHS"."sales_rep_count" AS "sales_rep_count", "RHS"."online_count" AS "online_count"
## FROM (SELECT "orderdate", COUNT(*) AS "sales_rep_count"
## FROM (SELECT "salesorderid", "revisionnumber", date_trunc('month', "orderdate") AS "orderdate", "duedate", "shipdate", "status", "onlineorderflag", "purchaseordernumber", "accountnumber", "customerid", "salespersonid", "territoryid", "billtoaddressid", "shiptoaddressid", "shipmethodid", "creditcardid", "creditcardapprovalcode", "currencyrateid", "subtotal", "taxamt", "freight", "totaldue", "comment", "rowguid", "modifieddate"
## FROM (SELECT *
## FROM sales.salesorderheader
## WHERE ("onlineorderflag" = FALSE)) "dbplyr_010") "dbplyr_011"
## GROUP BY "orderdate") "LHS"
## FULL JOIN (SELECT "orderdate", COUNT(*) AS "online_count"
## FROM (SELECT "salesorderid", "revisionnumber", date_trunc('month', "orderdate") AS "orderdate", "duedate", "shipdate", "status", "onlineorderflag", "purchaseordernumber", "accountnumber", "customerid", "salespersonid", "territoryid", "billtoaddressid", "shiptoaddressid", "shipmethodid", "creditcardid", "creditcardapprovalcode", "currencyrateid", "subtotal", "taxamt", "freight", "totaldue", "comment", "rowguid", "modifieddate"
## FROM (SELECT *
## FROM sales.salesorderheader
## WHERE ("onlineorderflag" = TRUE)) "dbplyr_012") "dbplyr_013"
## GROUP BY "orderdate") "RHS"
## ON ("LHS"."orderdate" = "RHS"."orderdate")
```
```
missing_dates <- sales_order_header_online %>%
anti_join(sales_order_header_sales_rep) %>%
arrange(orderdate) %>%
collect()
```
```
## Joining, by = "orderdate"
```
```
missing_dates
```
```
## # A tibble: 4 x 2
## orderdate online_count
## <dttm> <int>
## 1 2011-06-01 00:00:00 141
## 2 2011-09-01 00:00:00 157
## 3 2011-11-01 00:00:00 230
## 4 2014-06-01 00:00:00 939
```
```
str(missing_dates)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 4 obs. of 2 variables:
## $ orderdate : POSIXct, format: "2011-06-01" "2011-09-01" ...
## $ online_count: int 141 157 230 939
```
And in this case they agree!
Several follow\-up questions remain from this diagnosis:
* February deserves a closer look of its own.
* Looking at each year separately is a useful diagnostic.
* The same pivot strategy can be applied to the corrected data.
* There is a difference between doing the detective work with a graph and just printing the table: the graph is often what makes you say “now I see what’s driving the hint.”
When we add the month before and the month after each of the **suspicious months**, we get a set of months to examine more closely. We don’t know whether the problem postings have been carried forward or backward, so we check for and eliminate duplicates as well.
* Most of the **Sales Reps**’ orders are entered on a single day of the month (`unique_days` \= 1\). It is possible that these are monthly recurring orders that get released on a given day of the month. If that is the case, what are the **Sales Reps** doing the rest of the month?
* The months with multiple entry days (`unique_days` \> 1\) appear to have a noticeably higher number of orders and associated sales dollars; see the sketch below.
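A hedged sketch of the `unique_days` calculation referred to above (not code from the original text) counts the distinct Sales Rep order\-entry days per month on the database side:
```
# Count distinct order-entry days per month for Sales Rep orders,
# then keep the months that have more than one entry day.
sales_rep_entry_days <- tbl(con, in_schema("sales", "salesorderheader")) %>%
  filter(onlineorderflag == FALSE) %>%
  mutate(
    year = year(orderdate),
    month = month(orderdate),
    day = day(orderdate)
  ) %>%
  distinct(year, month, day) %>%
  count(year, month, name = "unique_days") %>%
  arrange(year, month) %>%
  collect()

sales_rep_entry_days %>% filter(unique_days > 1)
```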
8\.9 Correcting the order date for **Sales Reps**
-------------------------------------------------
### 8\.9\.1 Define a date correction function in R
This code does the date\-correction work on the R side:
```
monthly_sales_rep_adjusted <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>%
select(orderdate, subtotal, onlineorderflag) %>%
group_by(orderdate) %>%
summarize(
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
mutate(
orderdate = as.Date(orderdate),
day = day(orderdate)
) %>%
collect() %>%
ungroup() %>%
mutate(
adjusted_orderdate = case_when(
day == 1L ~ orderdate -1,
TRUE ~ orderdate
),
year_month = floor_date(adjusted_orderdate, "month")
) %>%
group_by(year_month) %>%
summarize(
total_soh_dollars = round(sum(total_soh_dollars, na.rm = TRUE), 2),
soh_count = sum(soh_count)
) %>%
ungroup()
```
Inspect:
```
str(monthly_sales_rep_adjusted)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 36 obs. of 3 variables:
## $ year_month : Date, format: "2011-05-01" "2011-06-01" ...
## $ total_soh_dollars: num 489329 1538408 1165897 844721 2324136 ...
## $ soh_count : int 38 75 60 40 90 63 40 79 64 37 ...
```
```
monthly_sales_rep_adjusted %>% filter(year(year_month) %in% c(2011,2014))
```
```
## # A tibble: 12 x 3
## year_month total_soh_dollars soh_count
## <date> <dbl> <int>
## 1 2011-05-01 489329. 38
## 2 2011-06-01 1538408. 75
## 3 2011-07-01 1165897. 60
## 4 2011-08-01 844721 40
## 5 2011-09-01 2324136. 90
## 6 2011-10-01 1702945. 63
## 7 2011-11-01 713117. 40
## 8 2011-12-01 1900789. 79
## 9 2014-01-01 2738752. 175
## 10 2014-02-01 2207772. 94
## 11 2014-03-01 3321810. 180
## 12 2014-04-01 3416764. 181
```
### 8\.9\.2 Define and store a PostgreSQL function to correct the date
The following code defines a function on the server side to correct the date:
```
dbExecute(
con,
"CREATE OR REPLACE FUNCTION so_adj_date(so_date timestamp, ONLINE_ORDER boolean) RETURNS timestamp AS $$
BEGIN
IF (ONLINE_ORDER) THEN
RETURN (SELECT so_date);
ELSE
RETURN(SELECT CASE WHEN EXTRACT(DAY FROM so_date) = 1
THEN so_date - '1 day'::interval
ELSE so_date
END
);
END IF;
END; $$
LANGUAGE PLPGSQL;
"
)
```
```
## [1] 0
```
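As a quick sanity check (not part of the original text), we can call the stored function directly from SQL for a first\-of\-month date and confirm that only the Sales Rep case is shifted back a day:
```
# The first column should come back as 2014-02-28 (shifted back one day);
# the second should stay 2014-03-01 because online orders are left alone.
dbGetQuery(con, "
  SELECT so_adj_date('2014-03-01'::timestamp, FALSE) AS sales_rep_adjusted,
         so_adj_date('2014-03-01'::timestamp, TRUE)  AS online_unchanged;
")
```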
### 8\.9\.3 Use the PostgreSQL function
If you can do the heavy lifting on the database side, that’s usually preferable. R can do the correction as well, but it is best suited for finding the issues in the first place.
```
monthly_sales_rep_adjusted_with_psql_function <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal, onlineorderflag) %>%
mutate(
orderdate = as.Date(orderdate)) %>%
mutate(adjusted_orderdate = as.Date(so_adj_date(orderdate, onlineorderflag))) %>%
filter(onlineorderflag == FALSE) %>%
group_by(adjusted_orderdate) %>%
summarize(
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
collect() %>%
mutate( year_month = floor_date(adjusted_orderdate, "month")) %>%
group_by(year_month) %>%
ungroup() %>%
arrange(year_month)
```
```
monthly_sales_rep_adjusted_with_psql_function %>%
filter(year(year_month) %in% c(2011,2014))
```
```
## # A tibble: 14 x 4
## adjusted_orderdate total_soh_dollars soh_count year_month
## <date> <dbl> <int> <date>
## 1 2011-05-31 489329. 38 2011-05-01
## 2 2011-06-30 1538408. 75 2011-06-01
## 3 2011-07-31 1165897. 60 2011-07-01
## 4 2011-08-31 844721 40 2011-08-01
## 5 2011-09-30 2324136. 90 2011-09-01
## 6 2011-10-31 1702945. 63 2011-10-01
## 7 2011-11-30 713117. 40 2011-11-01
## 8 2011-12-31 1900789. 79 2011-12-01
## 9 2014-01-28 1565. 2 2014-01-01
## 10 2014-01-29 2737188. 173 2014-01-01
## 11 2014-02-28 2207772. 94 2014-02-01
## 12 2014-03-30 7291. 2 2014-03-01
## 13 2014-03-31 3314519. 178 2014-03-01
## 14 2014-04-30 3416764. 181 2014-04-01
```
There’s one minor difference between the two:
```
all_equal(monthly_sales_rep_adjusted, monthly_sales_rep_adjusted_with_psql_function)
```
```
## [1] "Cols in y but not x: `adjusted_orderdate`. "
```
### 8\.9\.4 Monthly Sales by Order Type with corrected dates – relative to a trend line
```
monthly_sales_rep_as_is <- monthly_sales_w_channel %>%
filter(onlineorderflag == "Sales Rep")
ggplot(
data = monthly_sales_rep_adjusted,
aes(x = year_month, y = soh_count)
) +
geom_line(alpha = .5) +
geom_smooth(se = FALSE) +
geom_smooth(
data = monthly_sales_rep_as_is, aes(
orderdate, soh_count
), color = "red", alpha = .5,
se = FALSE
) +
theme(plot.title = element_text(hjust = .5)) + # Center ggplot title
labs(
title = glue(
"Number of Sales per month using corrected dates\n",
"Counting Sales Order Header records"
),
x = paste0("Monthly - between ", min_soh_dt, " - ", max_soh_dt),
y = "Number of Sales Recorded"
)
```
```
monthly_sales_rep_as_is <- monthly_sales_w_channel %>%
filter(onlineorderflag == "Sales Rep") %>%
mutate(orderdate = as.Date(floor_date(orderdate, unit = "month"))) %>%
group_by(orderdate) %>%
summarize(
total_soh_dollars = round(sum(total_soh_dollars, na.rm = TRUE), 2),
soh_count = sum(soh_count)
)
monthly_sales_rep_as_is %>%
filter(year(orderdate) %in% c(2011,2014))
```
```
## # A tibble: 10 x 3
## orderdate total_soh_dollars soh_count
## <date> <dbl> <int>
## 1 2011-05-01 489329. 38
## 2 2011-07-01 1538408. 75
## 3 2011-08-01 2010618. 100
## 4 2011-10-01 4027080. 153
## 5 2011-12-01 713117. 40
## 6 2014-01-01 2738752. 175
## 7 2014-02-01 3231. 3
## 8 2014-03-01 5526352. 271
## 9 2014-04-01 1285. 2
## 10 2014-05-01 3415479. 179
```
```
ggplot(
data = monthly_sales_rep_adjusted,
aes(x = year_month, y = soh_count)
) +
geom_line(alpha = .5 , color = "green") +
geom_point(alpha = .5 , color = "green") +
geom_point(
data = monthly_sales_rep_as_is, aes(
orderdate, soh_count), color = "red", alpha = .5) +
theme(plot.title = element_text(hjust = .5)) + # Center ggplot title
annotate(geom = "text", y = 250, x = as.Date("2011-06-01"),
label = "Orange dots: original data\nGreen dots: corrected data\nBrown dots: unchanged",
hjust = 0) +
labs(
title = glue(
"Number of Sales per Month"
),
subtitle = "Original and corrected amounts",
x = paste0("Monthly - between ", min_soh_dt, " - ", max_soh_dt),
y = "Number of Sales Recorded"
)
```
Figure 8\.9: Comparing monthly\_sales\_rep\_adjusted and monthly\_sales\_rep\_as\_is
```
mon_sales <- monthly_sales_rep_adjusted %>%
rename(orderdate = year_month)
sales_original_and_adjusted <- bind_rows(mon_sales, monthly_sales_rep_as_is, .id = "date_kind")
```
Sales still seem to gyrate! We have found that sales rep sales data is often very strange.
8\.10 Disconnect from the database and stop Docker
--------------------------------------------------
```
dbDisconnect(con)
# when running interactively use:
connection_close(con)
```
```
## Warning in connection_release(conn@ptr): Already disconnected
```
```
sp_docker_stop("adventureworks")
```
8\.1 Setup our standard working environment
-------------------------------------------
Use these libraries:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(connections)
library(glue)
require(knitr)
library(dbplyr)
library(sqlpetr)
library(bookdown)
library(here)
library(lubridate)
library(gt)
library(scales)
library(patchwork)
theme_set(theme_light())
```
Connect to `adventureworks`. In an interactive session we prefer to use `connections::connection_open` instead of dbConnect
```
sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
con <- dbConnect(
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
port = 5432,
user = "postgres",
password = "postgres",
dbname = "adventureworks")
```
Some queries generate big integers, so we need to include `RPostgres::Postgres()` and `bigint = "integer"` in the connections statement because some functions in the tidyverse object to the **bigint** datatype.
8\.2 A word on naming
---------------------
> You will find that many tables will have columns with the same name in an enterprise database. For example, in the *AdventureWorks* database, almost all tables have columns named `rowguid` and `modifieddate` and there are many other examples of names that are reused throughout the database. Duplicate columns are best renamed or deliberately dropped. The meaning of a column depends on the table that contains it, so as you pull a column out of a table, when renaming it the collumns provenance should be reflected in the new name.
>
>
> Naming columns carefully (whether retrieved from the database or calculated) will pay off, especially as our queries become more complex. Using `soh` as an abbreviation of *sales order header* to tag columns or statistics that are derived from the `salesorderheader` table, as we do in this book, is one example of an intentional naming strategy: it reminds us of the original source of the data. You, future you, and your collaborators will appreciate the effort no matter what naming convention you adopt. And a naming convention when rigidly applied can yield some long and ugly names.
>
>
> In the following example `soh` appears in different positions in the column name but it is easy to guess at a glance that the data comes from the `salesorderheader` table.
>
>
> Naming derived tables is just as important as naming derived columns.
8\.3 The overall AdventureWorks sales picture
---------------------------------------------
We begin by looking at Sales on a yearly basis, then consider monthly sales. We discover that half way through the period represented in the database, the business appears to begin selling online, which has very different characteristics than sales by Sales Reps. We then look at the details of how Sales Rep sales are recorded in the system and discover a data anomaly that we can correct.
8\.4 Annual sales
-----------------
On an annual basis, are sales dollars trending up, down or flat? We begin with annual revenue and number of orders.
```
annual_sales <- tbl(con, in_schema("sales", "salesorderheader")) %>%
mutate(year = substr(as.character(orderdate), 1, 4)) %>%
group_by(year) %>%
summarize(
min_soh_orderdate = min(orderdate, na.rm = TRUE),
max_soh_orderdate = max(orderdate, na.rm = TRUE),
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
arrange(year) %>%
select(
year, min_soh_orderdate, max_soh_orderdate, total_soh_dollars,
avg_total_soh_dollars, soh_count
) %>%
collect()
```
Note that all of this query is running on the server since the `collect()` statement is at the very end.
```
annual_sales %>% str()
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 4 obs. of 6 variables:
## $ year : chr "2011" "2012" "2013" "2014"
## $ min_soh_orderdate : POSIXct, format: "2011-05-31" "2012-01-01" ...
## $ max_soh_orderdate : POSIXct, format: "2011-12-31" "2012-12-31" ...
## $ total_soh_dollars : num 12641672 33524301 43622479 20057929
## $ avg_total_soh_dollars: num 7867 8563 3076 1705
## $ soh_count : int 1607 3915 14182 11761
```
We hang on to some date information for later use in plot titles.
```
min_soh_dt <- min(annual_sales$min_soh_orderdate)
max_soh_dt <- max(annual_sales$max_soh_orderdate)
```
### 8\.4\.1 Annual summary of sales, number of transactions and average sale
```
tot_sales <- ggplot(data = annual_sales, aes(x = year, y = total_soh_dollars/100000)) +
geom_col() +
geom_text(aes(label = round(as.numeric(total_soh_dollars/100000), digits = 0)), vjust = 1.5, color = "white") +
scale_y_continuous(labels = scales::dollar_format()) +
labs(
title = "Total Sales per Year - Millions",
x = NULL,
y = "Sales $M"
)
```
Both 2011 and 2014 turn out to be are shorter time spans than the other two years, making comparison interpretation difficult. Still, it’s clear that 2013 was the best year for annual sales dollars.
Comparing the number of orders per year has roughly the same overall pattern (2013 ranks highest, etc.) but the proportions between the years are quite different.
Although 2013 was the best year in terms of total number of orders, there were many more in 2014 compared with 2012\. That suggests looking at the average dollars per sale for each year.
### 8\.4\.2 Average dollars per sale
```
(tot_sales + num_orders) / avg_sale
```
Figure 8\.1: AdventureWorks sales performance
That’s a big drop between average sale of more than $7,000 in the first two years down to the $3,000 range in the last two. There has been a remarkable change in this business. At the same time the total number of orders shot up from less than 4,000 a year to more than 14,000\. **Why are the number of orders increasing, but the average dollar amount of a sale is dropping?**
Perhaps monthly monthly sales has the answer. We adapt the first query to group by month and year.
### 8\.4\.1 Annual summary of sales, number of transactions and average sale
```
tot_sales <- ggplot(data = annual_sales, aes(x = year, y = total_soh_dollars/100000)) +
geom_col() +
geom_text(aes(label = round(as.numeric(total_soh_dollars/100000), digits = 0)), vjust = 1.5, color = "white") +
scale_y_continuous(labels = scales::dollar_format()) +
labs(
title = "Total Sales per Year - Millions",
x = NULL,
y = "Sales $M"
)
```
Both 2011 and 2014 turn out to be are shorter time spans than the other two years, making comparison interpretation difficult. Still, it’s clear that 2013 was the best year for annual sales dollars.
Comparing the number of orders per year has roughly the same overall pattern (2013 ranks highest, etc.) but the proportions between the years are quite different.
Although 2013 was the best year in terms of total number of orders, there were many more in 2014 compared with 2012\. That suggests looking at the average dollars per sale for each year.
### 8\.4\.2 Average dollars per sale
```
(tot_sales + num_orders) / avg_sale
```
Figure 8\.1: AdventureWorks sales performance
That’s a big drop between average sale of more than $7,000 in the first two years down to the $3,000 range in the last two. There has been a remarkable change in this business. At the same time the total number of orders shot up from less than 4,000 a year to more than 14,000\. **Why are the number of orders increasing, but the average dollar amount of a sale is dropping?**
Perhaps monthly monthly sales has the answer. We adapt the first query to group by month and year.
8\.5 Monthly Sales
------------------
Our next iteration drills down from annual sales dollars to monthly sales dollars. For that we download the orderdate as a date, rather than a character variable for the year. R handles the conversion from the PostgreSQL date\-time to an R date\-time. We then convert it to a simple date with a `lubridate` function.
The following query uses the [postgreSQL function `date_trunc`](https://www.postgresqltutorial.com/postgresql-date_trunc/), which is equivalent to `lubridate`’s `round_date` function in R. If you want to push as much of the processing as possible onto the database server and thus possibly deal with smaller datasets in R, interleaving [postgreSQL functions](https://www.postgresql.org/docs/current/functions.html) into your dplyr code will help.
```
monthly_sales <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal) %>%
mutate(
orderdate = date_trunc('month', orderdate)
) %>%
group_by(orderdate) %>%
summarize(
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
show_query() %>%
collect()
```
```
## <SQL>
## SELECT "orderdate", ROUND((SUM("subtotal")) :: numeric, 2) AS "total_soh_dollars", ROUND((AVG("subtotal")) :: numeric, 2) AS "avg_total_soh_dollars", COUNT(*) AS "soh_count"
## FROM (SELECT date_trunc('month', "orderdate") AS "orderdate", "subtotal"
## FROM sales.salesorderheader) "dbplyr_004"
## GROUP BY "orderdate"
```
> Note that `date_trunc('month', orderdate)` gets passed through exactly “as is.”
In many cases we don’t really care whether our queries are executed by R or by the SQL server, but if we do care we need to substitute the PostgreSQL equivalent for the R functions we might ordinarily use. In those cases we have to check whether functions from R packages like `lubridate` and the equivalent PostgreSQL functions behave exactly alike. Often they are subtly different: in the previous query the PostgreSQL function produces a `POSIXct` column, not a `Date`, so we need to tack on a `mutate()` step once the data is on the R side, as shown here:
```
monthly_sales <- monthly_sales %>%
mutate(orderdate = as.Date(orderdate))
```
Next let’s plot the monthly sales data:
```
ggplot(data = monthly_sales, aes(x = orderdate, y = total_soh_dollars)) +
geom_col() +
scale_y_continuous(labels = dollar) +
theme(plot.title = element_text(hjust = 0.5)) +
labs(
title = glue("Sales by Month\n", {format(min_soh_dt, "%B %d, %Y")} , " to ",
{format(max_soh_dt, "%B %d, %Y")}),
x = "Month",
y = "Sales Dollars"
)
```
Figure 8\.2: Total Monthly Sales
That graph doesn’t show how the business might have changed, but it is remarkable how much variation there is from one month to another – particularly in 2012 and 2014\.
### 8\.5\.1 Check lagged monthly data
Because of the month\-over\-month sales variation, we’ll use `dplyr::lag` to find the delta and later visualize just how much month\-to\-month difference there is.
```
monthly_sales <- arrange(monthly_sales, orderdate)
monthly_sales_lagged <- monthly_sales %>%
mutate(monthly_sales_change = (dplyr::lag(total_soh_dollars)) - total_soh_dollars)
monthly_sales_lagged[is.na(monthly_sales_lagged)] = 0
```
```
median(monthly_sales_lagged$monthly_sales_change, na.rm = TRUE)
```
```
## [1] -221690.505
```
```
(sum_lags <- summary(monthly_sales_lagged$monthly_sales_change))
```
```
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -5879806.05 -1172995.19 -221690.51 11968.42 1159252.70 5420357.17
```
The mean month\-over\-month change in sales looks OK ($11,968), but the median is negative (\-$221,691). There is a very wide spread in our month\-over\-month sales data between the lower and upper quartiles. We can plot the variation as follows:
```
ggplot(monthly_sales_lagged, aes(x = orderdate, y = monthly_sales_change)) +
scale_x_date(date_breaks = "year", date_labels = "%Y", date_minor_breaks = "3 months") +
geom_line() +
# geom_point() +
scale_y_continuous(limits = c(-6000000,5500000), labels = scales::dollar_format()) +
theme(plot.title = element_text(hjust = .5)) +
labs(
title = glue(
"Monthly Sales Change \n",
"Between ", {format(min_soh_dt, "%B %d, %Y")} , " and ",
{format(max_soh_dt, "%B %d, %Y")}
),
x = "Month",
y = "Dollar Change"
)
```
Figure 8\.3: Monthly Sales Change
It looks like the big change in the business occurred in the summer of 2013 when the number of orders jumped but the dollar volume just continued to bump along.
### 8\.5\.2 Comparing dollars and orders to a base year
To look at dollars and the number of orders together, we compare the monthly data to a baseline month, July 2011.
```
baseline_month <- "2011-07-01"
start_month <- monthly_sales %>%
filter(orderdate == as.Date(baseline_month))
```
Express monthly data relative to the baseline month, 2011\-07\-01 (total sales: 2,044,600; average sale: 8,851.08; order count: 231).
```
monthly_sales_base_year_normalized_to_2011 <- monthly_sales %>%
mutate(
dollars = (100 * total_soh_dollars) / start_month$total_soh_dollars,
number_of_orders = (100 * soh_count) / start_month$soh_count
) %>%
ungroup()
monthly_sales_base_year_normalized_to_2011 <- monthly_sales_base_year_normalized_to_2011 %>%
select(orderdate, dollars, `# of orders` = number_of_orders) %>%
pivot_longer(-orderdate,
names_to = "relative_to_2011_average",
values_to = "amount"
)
```
```
monthly_sales_base_year_normalized_to_2011 %>%
ggplot(aes(orderdate, amount, color = relative_to_2011_average)) +
geom_line() +
geom_hline(yintercept = 100) +
scale_x_date(date_labels = "%Y-%m", date_breaks = "6 months") +
labs(
title = glue(
"Adventureworks Normalized Monthly Sales\n",
"Number of Sales Orders and Dollar Totals\n",
{format(min_soh_dt, "%B %d, %Y")} , " to ",
{format(max_soh_dt, "%B %d, %Y")}),
x = "Date",
y = "",
color = glue(baseline_month, " values = 100")
) +
theme(legend.position = c(.3,.75))
```
Figure 8\.4: Adventureworks Normalized Monthly Sales
8\.6 The effect of online sales
-------------------------------
We suspect that the business has changed a lot with the advent of online orders so we check the impact of `onlineorderflag` on annual sales. The `onlineorderflag` indicates which sales channel accounted for the sale, **Sales Reps** or **Online**.
### 8\.6\.1 Add `onlineorderflag` to our annual sales query
```
annual_sales_w_channel <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal, onlineorderflag) %>%
collect() %>%
mutate(
orderdate = date(orderdate),
orderdate = round_date(orderdate, "month"),
onlineorderflag = if_else(onlineorderflag == FALSE,
"Sales Rep", "Online"
),
onlineorderflag = as.factor(onlineorderflag)
) %>%
group_by(orderdate, onlineorderflag) %>%
summarize(
min_soh_orderdate = min(orderdate, na.rm = TRUE),
max_soh_orderdate = max(orderdate, na.rm = TRUE),
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
select(
orderdate, onlineorderflag, min_soh_orderdate,
max_soh_orderdate, total_soh_dollars,
avg_total_soh_dollars, soh_count
)
```
Note that we are creating a factor and doing most of the calculations on the R side, not on the DBMS side.
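As a hedged alternative sketch (not the chapter’s code), most of this aggregation could instead be pushed to the DBMS by delaying `collect()` until after `summarize()`; only the factor conversion would then happen in R, on a much smaller result set:
```
# Sketch: let PostgreSQL do the grouping and summarizing; collect only the summary rows.
# date_trunc() runs on the database side, as shown earlier in the chapter.
annual_sales_w_channel_db <- tbl(con, in_schema("sales", "salesorderheader")) %>%
  mutate(orderdate = date_trunc('month', orderdate)) %>%
  group_by(orderdate, onlineorderflag) %>%
  summarize(
    total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
    avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
    soh_count = n()
  ) %>%
  collect() %>%
  mutate(
    orderdate = as.Date(orderdate),
    onlineorderflag = as.factor(if_else(onlineorderflag == FALSE, "Sales Rep", "Online"))
  )
```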
### 8\.6\.2 Annual Sales comparison
Start by looking at total sales.
```
ggplot(data = annual_sales_w_channel, aes(x = orderdate, y = total_soh_dollars)) +
geom_col() +
scale_y_continuous(labels = scales::dollar_format()) +
facet_wrap("onlineorderflag") +
labs(
title = "AdventureWorks Monthly Sales",
caption = glue( "Between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
subtitle = "Comparing Online and Sales Rep sales channels",
x = "Year",
y = "Sales $"
)
```
Figure: Sales Channel Breakdown
It looks like there are two businesses represented in the AdventureWorks database that have very different growth profiles.
### 8\.6\.3 Order volume comparison
```
ggplot(data = annual_sales_w_channel, aes(x = orderdate, y = as.numeric(soh_count))) +
geom_col() +
facet_wrap("onlineorderflag") +
labs(
title = "AdventureWorks Monthly orders",
caption = glue( "Between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
subtitle = "Comparing Online and Sales Rep sales channels",
x = "Year",
y = "Total number of orders"
)
```
Figure 8\.5: AdventureWorks Monthly Orders by Channel
Comparing Online and Sales Rep sales, the difference in the number of orders is even more striking than the difference in annual sales dollars.
### 8\.6\.4 Comparing average order size: **Sales Reps** to **Online** orders
```
ggplot(data = annual_sales_w_channel, aes(x = orderdate, y = avg_total_soh_dollars)) +
geom_col() +
facet_wrap("onlineorderflag") +
scale_y_continuous(labels = scales::dollar_format()) +
labs(
title = "AdventureWorks Average Dollars per Sale",
x = glue( "Year - between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
y = "Average sale amount"
)
```
Figure 8\.6: Average dollar per Sale comparison
8\.7 Impact of order type on monthly sales
------------------------------------------
To dig into the difference between **Sales Rep** and **Online** sales we can look at monthly data.
### 8\.7\.1 Retrieve monthly sales with the `onlineorderflag`
This query puts the `collect` statement earlier than the previous queries.
```
monthly_sales_w_channel <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal, onlineorderflag) %>%
collect() %>% # From here on we're in R
mutate(
orderdate = date(orderdate),
orderdate = floor_date(orderdate, unit = "month"),
onlineorderflag = if_else(onlineorderflag == FALSE,
"Sales Rep", "Online")
) %>%
group_by(orderdate, onlineorderflag) %>%
summarize(
min_soh_orderdate = min(orderdate, na.rm = TRUE),
max_soh_orderdate = max(orderdate, na.rm = TRUE),
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
avg_total_soh_dollars = round(mean(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
ungroup()
```
```
monthly_sales_w_channel %>%
rename(`Sales Channel` = onlineorderflag) %>%
group_by(`Sales Channel`) %>%
summarize(
unique_dates = n(),
start_date = min(min_soh_orderdate),
end_date = max(max_soh_orderdate),
total_sales = round(sum(total_soh_dollars)),
days_span = end_date - start_date
) %>%
gt()
```
| Sales Channel | unique\_dates | start\_date | end\_date | total\_sales | days\_span |
| --- | --- | --- | --- | --- | --- |
| Online | 38 | 2011\-05\-01 | 2014\-06\-01 | 29358677 | 1127 days |
| Sales Rep | 34 | 2011\-05\-01 | 2014\-05\-01 | 80487704 | 1096 days |
As this table shows, the **Sales Rep** dates don’t match the **Online** dates. They start on the same date but end differently. The **Online** dates include several months that are not covered by the Sales Rep sales (which are the main sales channel by dollar volume).
### 8\.7\.2 Monthly variation compared to a trend line
Jumping to the trend line comparison, we see that the big source of variation is on the Sales Rep side.
```
ggplot(
data = monthly_sales_w_channel,
aes(
x = orderdate, y = total_soh_dollars
)
) +
geom_line() +
geom_smooth(se = FALSE) +
facet_grid("onlineorderflag", scales = "free") +
scale_y_continuous(labels = dollar) +
scale_x_date(date_breaks = "year", date_labels = "%Y", date_minor_breaks = "3 months") +
theme(plot.title = element_text(hjust = .5)) + # Center ggplot title
labs(
title = glue(
"AdventureWorks Monthly Sales Trend"
),
x = glue( "Month - between ", {format(min_soh_dt, "%B %d, %Y")} , " - ",
{format(max_soh_dt, "%B %d, %Y")}),
y = "Sales Dollars"
)
```
Figure 8\.7: Monthly Sales Trend
The **monthly** gyrations are much larger on the Sales Rep side, amounting to differences of around a million dollars, compared to small monthly variations of around $25,000 for the Online orders.
### 8\.7\.3 Compare monthly lagged data by Sales Channel
First consider month\-to\-month change.
```
monthly_sales_w_channel_lagged_by_month <- monthly_sales_w_channel %>%
group_by(onlineorderflag) %>%
mutate(
lag_soh_count = lag(soh_count, 1),
lag_soh_total_dollars = lag(total_soh_dollars, 1),
pct_monthly_soh_dollar_change =
(total_soh_dollars - lag_soh_total_dollars) / lag_soh_total_dollars * 100,
pct_monthly_soh_count_change =
(soh_count - lag_soh_count) / lag_soh_count * 100
)
```
The following table shows some wild changes in dollar amounts and number of sales from one month to the next.
```
monthly_sales_w_channel_lagged_by_month %>%
filter(abs(pct_monthly_soh_count_change) > 150 |
abs(pct_monthly_soh_dollar_change) > 150 ) %>%
ungroup() %>%
arrange(onlineorderflag, orderdate) %>%
mutate(
total_soh_dollars = round(total_soh_dollars),
lag_soh_total_dollars = round(lag_soh_total_dollars),
pct_monthly_soh_dollar_change = round(pct_monthly_soh_dollar_change),
pct_monthly_soh_count_change = round(pct_monthly_soh_count_change)) %>%
select(orderdate, onlineorderflag, total_soh_dollars, lag_soh_total_dollars,
soh_count, lag_soh_count, pct_monthly_soh_dollar_change, pct_monthly_soh_count_change) %>%
# names()
gt() %>%
fmt_number(
columns = c(3:4), decimals = 0) %>%
fmt_percent(
columns = c(7:8), decimals = 0) %>%
cols_label(
onlineorderflag = "Channel",
total_soh_dollars = "$ this Month",
lag_soh_total_dollars = "$ last Month",
soh_count = "# this Month",
lag_soh_count = "# last Month",
pct_monthly_soh_dollar_change = "$ change",
pct_monthly_soh_count_change = "# change"
)
```
| orderdate | Channel | $ this Month | $ last Month | \# this Month | \# last Month | $ change | \# change |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2011\-06\-01 | Online | 458,911 | 14,477 | 141 | 5 | 307,000% | 272,000% |
| 2013\-07\-01 | Online | 847,139 | 860,141 | 1564 | 533 | −200% | 19,300% |
| 2011\-07\-01 | Sales Rep | 1,538,408 | 489,329 | 75 | 38 | 21,400% | 9,700% |
| 2012\-01\-01 | Sales Rep | 3,356,069 | 713,117 | 143 | 40 | 37,100% | 25,800% |
| 2012\-03\-01 | Sales Rep | 2,269,117 | 882,900 | 85 | 37 | 15,700% | 13,000% |
| 2014\-03\-01 | Sales Rep | 5,526,352 | 3,231 | 271 | 3 | 17,096,000% | 893,300% |
| 2014\-05\-01 | Sales Rep | 3,415,479 | 1,285 | 179 | 2 | 26,573,900% | 885,000% |
We suspect that the business has changed a lot with the advent of **Online** orders.
8\.8 Detect and diagnose the day of the month problem
-----------------------------------------------------
There have been several indications that Sales Rep sales are recorded once a month while Online sales are recorded on a daily basis.
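One quick way to check that hunch, sketched here under the assumption that the same connection `con` is open, is to count the distinct order days per month for each channel, mostly on the database side:
```
# Sketch: how many distinct order days does each channel have per month?
# year(), month() and day() are translated to PostgreSQL date functions by dbplyr.
tbl(con, in_schema("sales", "salesorderheader")) %>%
  mutate(year = year(orderdate), month = month(orderdate), day = day(orderdate)) %>%
  distinct(onlineorderflag, year, month, day) %>%
  count(onlineorderflag, year, month, name = "order_days_in_month") %>%
  collect() %>%
  group_by(onlineorderflag) %>%
  summarize(median_order_days_per_month = median(order_days_in_month))
```
If the hunch is right, the Sales Rep channel (`onlineorderflag == FALSE`) should come out close to one order day per month, while the Online channel should be close to the number of days in a month.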
### 8\.8\.1 Sales Rep Orderdate Distribution
Look at the dates when sales are entered for sales by **Sales Reps**. The following query / plot combination shows this pattern and the exception for transactions entered on the first day of the month.
```
tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>% # Drop online orders
mutate(orderday = day(orderdate)) %>%
count(orderday, name = "Orders") %>%
collect() %>%
full_join(tibble(orderday = seq(1:31))) %>%
mutate(orderday = as.factor(orderday)) %>%
ggplot(aes(orderday, Orders)) +
geom_col() +
coord_flip() +
labs(title = "The first day of the month looks odd",
x = "Day Number")
```
```
## Joining, by = "orderday"
```
```
## Warning: Removed 26 rows containing missing values (position_stack).
```
Figure 8\.8: Days of the month with Sales Rep activity recorded
We can check on which months have orders entered on the first of the month.
```
sales_rep_day_of_month_sales <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>% # Drop online orders
select(orderdate, subtotal) %>%
mutate(
year = year(orderdate),
month = month(orderdate),
day = day(orderdate)
) %>%
count(year, month, day) %>%
collect() %>%
pivot_wider(names_from = day, values_from = n, names_prefix = "day_", values_fill = list(day_1 = 0, day_28 = 0, day_29 = 0, day_30 = 0, day_31 = 0) ) %>%
as.data.frame() %>%
select(year, month, day_1, day_28, day_29, day_30, day_31) %>%
filter(day_1 > 0) %>%
arrange(year, month)
sales_rep_day_of_month_sales
```
```
## year month day_1 day_28 day_29 day_30 day_31
## 1 2011 7 75 NA NA NA NA
## 2 2011 8 60 NA NA NA 40
## 3 2011 10 90 NA NA NA 63
## 4 2011 12 40 NA NA NA NA
## 5 2012 1 79 NA 64 NA NA
## 6 2014 3 91 NA NA 2 178
## 7 2014 5 179 NA NA NA NA
```
There are two months with multiple Sales Rep order days in 2011 (2011\-08 and 2011\-10), one in 2012 (2012\-01), and two in 2014 (2014\-01 and 2014\-03). March 2014 is the only month with three Sales Rep order days.
Are there months where there were no sales recorded for the sales reps?
There are two approaches. The first is to generate a list of months between the beginning and end of the sales history and compare that to the Sales Rep records.
```
monthly_sales_rep_sales <- monthly_sales_w_channel %>%
filter(onlineorderflag == "Sales Rep") %>%
mutate(orderdate = as.Date(floor_date(orderdate, "month"))) %>%
count(orderdate)
str(monthly_sales_rep_sales)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 34 obs. of 2 variables:
## $ orderdate: Date, format: "2011-05-01" "2011-07-01" ...
## $ n : int 1 1 1 1 1 1 1 1 1 1 ...
```
```
date_list <- tibble(month_date = seq.Date(floor_date(as.Date(min_soh_dt), "month"),
floor_date(as.Date(max_soh_dt), "month"),
by = "month"),
date_exists = FALSE)
date_list %>%
anti_join(monthly_sales_rep_sales,
by = c("month_date" = "orderdate") )
```
```
## # A tibble: 4 x 2
## month_date date_exists
## <date> <lgl>
## 1 2011-06-01 FALSE
## 2 2011-09-01 FALSE
## 3 2011-11-01 FALSE
## 4 2014-06-01 FALSE
```
* June, September, and November are missing for 2011.
* June is missing for 2014.
The second approach is to use the dates found in the database for online orders. Defining “complete” may not always be as simple as generating a complete list of months.
```
sales_order_header_online <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == TRUE) %>%
mutate(
orderdate = date_trunc('month', orderdate)
) %>%
count(orderdate, name = "online_count")
sales_order_header_sales_rep <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>%
mutate(
orderdate = date_trunc('month', orderdate)
) %>%
count(orderdate, name = "sales_rep_count")
missing_dates <- sales_order_header_sales_rep %>%
full_join(sales_order_header_online) %>%
show_query() %>%
collect()
```
```
## Joining, by = "orderdate"
```
```
## <SQL>
## SELECT COALESCE("LHS"."orderdate", "RHS"."orderdate") AS "orderdate", "LHS"."sales_rep_count" AS "sales_rep_count", "RHS"."online_count" AS "online_count"
## FROM (SELECT "orderdate", COUNT(*) AS "sales_rep_count"
## FROM (SELECT "salesorderid", "revisionnumber", date_trunc('month', "orderdate") AS "orderdate", "duedate", "shipdate", "status", "onlineorderflag", "purchaseordernumber", "accountnumber", "customerid", "salespersonid", "territoryid", "billtoaddressid", "shiptoaddressid", "shipmethodid", "creditcardid", "creditcardapprovalcode", "currencyrateid", "subtotal", "taxamt", "freight", "totaldue", "comment", "rowguid", "modifieddate"
## FROM (SELECT *
## FROM sales.salesorderheader
## WHERE ("onlineorderflag" = FALSE)) "dbplyr_010") "dbplyr_011"
## GROUP BY "orderdate") "LHS"
## FULL JOIN (SELECT "orderdate", COUNT(*) AS "online_count"
## FROM (SELECT "salesorderid", "revisionnumber", date_trunc('month', "orderdate") AS "orderdate", "duedate", "shipdate", "status", "onlineorderflag", "purchaseordernumber", "accountnumber", "customerid", "salespersonid", "territoryid", "billtoaddressid", "shiptoaddressid", "shipmethodid", "creditcardid", "creditcardapprovalcode", "currencyrateid", "subtotal", "taxamt", "freight", "totaldue", "comment", "rowguid", "modifieddate"
## FROM (SELECT *
## FROM sales.salesorderheader
## WHERE ("onlineorderflag" = TRUE)) "dbplyr_012") "dbplyr_013"
## GROUP BY "orderdate") "RHS"
## ON ("LHS"."orderdate" = "RHS"."orderdate")
```
```
missing_dates <- sales_order_header_online %>%
anti_join(sales_order_header_sales_rep) %>%
arrange(orderdate) %>%
collect()
```
```
## Joining, by = "orderdate"
```
```
missing_dates
```
```
## # A tibble: 4 x 2
## orderdate online_count
## <dttm> <int>
## 1 2011-06-01 00:00:00 141
## 2 2011-09-01 00:00:00 157
## 3 2011-11-01 00:00:00 230
## 4 2014-06-01 00:00:00 939
```
```
str(missing_dates)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 4 obs. of 2 variables:
## $ orderdate : POSIXct, format: "2011-06-01" "2011-09-01" ...
## $ online_count: int 141 157 230 939
```
And in this case they agree!
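A small sketch (assuming `date_list`, `monthly_sales_rep_sales`, and `missing_dates` are still in memory) can confirm the agreement programmatically rather than by eyeballing the two printouts:
```
# Sketch: do both approaches flag exactly the same year-months?
from_date_list <- date_list %>%
  anti_join(monthly_sales_rep_sales, by = c("month_date" = "orderdate")) %>%
  pull(month_date)

from_online_orders <- missing_dates %>% pull(orderdate)

identical(
  sort(format(from_date_list, "%Y-%m")),
  sort(format(from_online_orders, "%Y-%m"))
)
# TRUE when the two approaches agree
```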
Some loose ends remain to explore:
* Discuss the February issues.
* Look at each year separately as a diagnostic.
* Use the same pivot strategy on the corrected data.
* Note the difference between detective work with a graph and simply printing the data out: “now I see what’s driving the hint.”
We have xx months when we add the month before and the month after the **suspicious months**. We don’t know whether the problem postings have been carried forward or backward. We check for and eliminate duplicates as well.
* Most of the **Sales Reps**’ orders are entered on a single day of the month (unique days \= 1). It is possible that these are monthly recurring orders that get released on a given day of the month. If that is the case, what are the **Sales Reps** doing the rest of the month?
* (To verify) The lines with multiple days (unique\_days \> 1) have a noticeably higher number of orders (so\_cnt) and associated sales order dollars.
8\.9 Correcting the order date for **Sales Reps**
-------------------------------------------------
### 8\.9\.1 Define a date correction function in R
This code does the date\-correction work on the R side:
```
monthly_sales_rep_adjusted <- tbl(con, in_schema("sales", "salesorderheader")) %>%
filter(onlineorderflag == FALSE) %>%
select(orderdate, subtotal, onlineorderflag) %>%
group_by(orderdate) %>%
summarize(
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
mutate(
orderdate = as.Date(orderdate),
day = day(orderdate)
) %>%
collect() %>%
ungroup() %>%
mutate(
adjusted_orderdate = case_when(
day == 1L ~ orderdate -1,
TRUE ~ orderdate
),
year_month = floor_date(adjusted_orderdate, "month")
) %>%
group_by(year_month) %>%
summarize(
total_soh_dollars = round(sum(total_soh_dollars, na.rm = TRUE), 2),
soh_count = sum(soh_count)
) %>%
ungroup()
```
Inspect:
```
str(monthly_sales_rep_adjusted)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 36 obs. of 3 variables:
## $ year_month : Date, format: "2011-05-01" "2011-06-01" ...
## $ total_soh_dollars: num 489329 1538408 1165897 844721 2324136 ...
## $ soh_count : int 38 75 60 40 90 63 40 79 64 37 ...
```
```
monthly_sales_rep_adjusted %>% filter(year(year_month) %in% c(2011,2014))
```
```
## # A tibble: 12 x 3
## year_month total_soh_dollars soh_count
## <date> <dbl> <int>
## 1 2011-05-01 489329. 38
## 2 2011-06-01 1538408. 75
## 3 2011-07-01 1165897. 60
## 4 2011-08-01 844721 40
## 5 2011-09-01 2324136. 90
## 6 2011-10-01 1702945. 63
## 7 2011-11-01 713117. 40
## 8 2011-12-01 1900789. 79
## 9 2014-01-01 2738752. 175
## 10 2014-02-01 2207772. 94
## 11 2014-03-01 3321810. 180
## 12 2014-04-01 3416764. 181
```
### 8\.9\.2 Define and store a PostgreSQL function to correct the date
The following code defines a function on the server side to correct the date:
```
dbExecute(
con,
"CREATE OR REPLACE FUNCTION so_adj_date(so_date timestamp, ONLINE_ORDER boolean) RETURNS timestamp AS $$
BEGIN
IF (ONLINE_ORDER) THEN
RETURN (SELECT so_date);
ELSE
RETURN(SELECT CASE WHEN EXTRACT(DAY FROM so_date) = 1
THEN so_date - '1 day'::interval
ELSE so_date
END
);
END IF;
END; $$
LANGUAGE PLPGSQL;
"
)
```
```
## [1] 0
```
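Before using the function in a `dplyr` pipeline, a quick sanity check run directly in SQL (a sketch, assuming the `CREATE FUNCTION` statement above succeeded) shows the intended behavior:
```
# Sketch: a first-of-month Sales Rep date should move back one day; an Online date should not.
dbGetQuery(
  con,
  "SELECT so_adj_date('2011-07-01'::timestamp, FALSE) AS sales_rep_adjusted,
          so_adj_date('2011-07-01'::timestamp, TRUE)  AS online_unchanged;"
)
```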
### 8\.9\.3 Use the PostgreSQL function
If you can do the heavy lifting on the database side, that’s usually preferable. R can do the same work, but it is best suited to finding the issues in the first place.
```
monthly_sales_rep_adjusted_with_psql_function <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, subtotal, onlineorderflag) %>%
mutate(
orderdate = as.Date(orderdate)) %>%
mutate(adjusted_orderdate = as.Date(so_adj_date(orderdate, onlineorderflag))) %>%
filter(onlineorderflag == FALSE) %>%
group_by(adjusted_orderdate) %>%
summarize(
total_soh_dollars = round(sum(subtotal, na.rm = TRUE), 2),
soh_count = n()
) %>%
collect() %>%
mutate( year_month = floor_date(adjusted_orderdate, "month")) %>%
group_by(year_month) %>%
ungroup() %>%
arrange(year_month)
```
```
monthly_sales_rep_adjusted_with_psql_function %>%
filter(year(year_month) %in% c(2011,2014))
```
```
## # A tibble: 14 x 4
## adjusted_orderdate total_soh_dollars soh_count year_month
## <date> <dbl> <int> <date>
## 1 2011-05-31 489329. 38 2011-05-01
## 2 2011-06-30 1538408. 75 2011-06-01
## 3 2011-07-31 1165897. 60 2011-07-01
## 4 2011-08-31 844721 40 2011-08-01
## 5 2011-09-30 2324136. 90 2011-09-01
## 6 2011-10-31 1702945. 63 2011-10-01
## 7 2011-11-30 713117. 40 2011-11-01
## 8 2011-12-31 1900789. 79 2011-12-01
## 9 2014-01-28 1565. 2 2014-01-01
## 10 2014-01-29 2737188. 173 2014-01-01
## 11 2014-02-28 2207772. 94 2014-02-01
## 12 2014-03-30 7291. 2 2014-03-01
## 13 2014-03-31 3314519. 178 2014-03-01
## 14 2014-04-30 3416764. 181 2014-04-01
```
There’s one minor difference between the two:
```
all_equal(monthly_sales_rep_adjusted, monthly_sales_rep_adjusted_with_psql_function)
```
```
## [1] "Cols in y but not x: `adjusted_orderdate`. "
```
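To check that the summaries themselves line up, here is a sketch that rolls the per\-day PostgreSQL\-side results up to months before comparing (small rounding or floating\-point differences are possible):
```
# Sketch: aggregate the per-day results to year_month, then compare again.
monthly_from_psql <- monthly_sales_rep_adjusted_with_psql_function %>%
  group_by(year_month) %>%
  summarize(
    total_soh_dollars = sum(total_soh_dollars),
    soh_count = sum(soh_count)
  )

all_equal(monthly_sales_rep_adjusted, monthly_from_psql)
```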
### 8\.9\.4 Monthly Sales by Order Type with corrected dates – relative to a trend line
```
monthly_sales_rep_as_is <- monthly_sales_w_channel %>%
filter(onlineorderflag == "Sales Rep")
ggplot(
data = monthly_sales_rep_adjusted,
aes(x = year_month, y = soh_count)
) +
geom_line(alpha = .5) +
geom_smooth(se = FALSE) +
geom_smooth(
data = monthly_sales_rep_as_is, aes(
orderdate, soh_count
), color = "red", alpha = .5,
se = FALSE
) +
theme(plot.title = element_text(hjust = .5)) + # Center ggplot title
labs(
title = glue(
"Number of Sales per month using corrected dates\n",
"Counting Sales Order Header records"
),
x = paste0("Monthly - between ", min_soh_dt, " - ", max_soh_dt),
y = "Number of Sales Recorded"
)
```
```
monthly_sales_rep_as_is <- monthly_sales_w_channel %>%
filter(onlineorderflag == "Sales Rep") %>%
mutate(orderdate = as.Date(floor_date(orderdate, unit = "month"))) %>%
group_by(orderdate) %>%
summarize(
total_soh_dollars = round(sum(total_soh_dollars, na.rm = TRUE), 2),
soh_count = sum(soh_count)
)
monthly_sales_rep_as_is %>%
filter(year(orderdate) %in% c(2011,2014))
```
```
## # A tibble: 10 x 3
## orderdate total_soh_dollars soh_count
## <date> <dbl> <int>
## 1 2011-05-01 489329. 38
## 2 2011-07-01 1538408. 75
## 3 2011-08-01 2010618. 100
## 4 2011-10-01 4027080. 153
## 5 2011-12-01 713117. 40
## 6 2014-01-01 2738752. 175
## 7 2014-02-01 3231. 3
## 8 2014-03-01 5526352. 271
## 9 2014-04-01 1285. 2
## 10 2014-05-01 3415479. 179
```
```
ggplot(
data = monthly_sales_rep_adjusted,
aes(x = year_month, y = soh_count)
) +
geom_line(alpha = .5 , color = "green") +
geom_point(alpha = .5 , color = "green") +
geom_point(
data = monthly_sales_rep_as_is, aes(
orderdate, soh_count), color = "red", alpha = .5) +
theme(plot.title = element_text(hjust = .5)) + # Center ggplot title
annotate(geom = "text", y = 250, x = as.Date("2011-06-01"),
label = "Orange dots: original data\nGreen dots: corrected data\nBrown dots: unchanged",
hjust = 0) +
labs(
title = glue(
"Number of Sales per Month"
),
subtitle = "Original and corrected amounts",
x = paste0("Monthly - between ", min_soh_dt, " - ", max_soh_dt),
y = "Number of Sales Recorded"
)
```
Figure 8\.9: Comparing monthly\_sales\_rep\_adjusted and monthly\_sales\_rep\_as\_is
```
mon_sales <- monthly_sales_rep_adjusted %>%
rename(orderdate = year_month)
sales_original_and_adjusted <- bind_rows(mon_sales, monthly_sales_rep_as_is, .id = "date_kind")
```
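The combined data frame isn’t plotted in this chapter, but here is a sketch of how it might be used (with unnamed inputs, `bind_rows()` assigns `date_kind` values of `"1"` for the adjusted data and `"2"` for the as\-is data):
```
# Sketch: compare the two series from the combined data frame.
sales_original_and_adjusted %>%
  mutate(date_kind = if_else(date_kind == "1", "Adjusted dates", "Original dates")) %>%
  ggplot(aes(x = orderdate, y = soh_count, color = date_kind)) +
  geom_line() +
  labs(
    title = "Sales Rep orders per month",
    subtitle = "Original vs corrected order dates",
    x = "Month",
    y = "Number of orders",
    color = NULL
  )
```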
Sales still seem to gyrate! We have found that sales rep sales data is often very strange.
8\.10 Disconnect from the database and stop Docker
--------------------------------------------------
```
dbDisconnect(con)
# when running interactively use:
connection_close(con)
```
```
## Warning in connection_release(conn@ptr): Already disconnected
```
```
sp_docker_stop("adventureworks")
```
Chapter 9 Lazy Evaluation and Lazy Queries
==========================================
> This chapter:
>
>
> * Reviews lazy loading, lazy evaluation and lazy query execution
> * Demonstrates how `dplyr` code gets executed (and how R determines what is translated to SQL and what is processed locally by R)
> * Offers some further resources on lazy loading, evaluation, execution, etc.
9\.1 Setup
----------
The following packages are used in this chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(dbplyr)
require(knitr)
library(bookdown)
library(sqlpetr)
library(connections)
sleep_default <- 3
```
Start your `adventureworks` container:
```
sqlpetr::sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
```
Connect to the database:
```
# con <- connection_open( # use in an interactive session
con <- dbConnect( # use in other settings
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
user = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"),
dbname = "adventureworks",
host = "localhost",
port = 5432)
```
9\.2 R is lazy and comes with guardrails
----------------------------------------
By design, R is both a language and an interactive development environment (IDE). As a language, R tries to be as efficient as possible. As an IDE, R creates some guardrails to make it easy and safe to work with your data. For example, `getOption("max.print")` prevents R from printing more output than you want to handle in an interactive session, with a default of 99999 entries, which may or may not suit you.
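The guardrail can be inspected and adjusted for the current session:
```
getOption("max.print")      # 99999 by default
options(max.print = 1000)   # print at most 1000 entries for the rest of this session
```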
SQL, on the other hand, is a *“standard computer language for relational database management and data manipulation.”*[1](#fn1) SQL has various database\-specific IDEs, such as [pgAdmin](https://www.pgadmin.org/) for PostgreSQL. Roger Peng explains in [R Programming for Data Science](https://bookdown.org/rdpeng/rprogdatascience/history-and-overview-of-r.html#basic-features-of-r) that:
> R has maintained the original S philosophy, which is that it provides a language that is both useful for interactive work, but contains a powerful programming language for developing new tools.
This is complicated when R interacts with SQL. In a [vignette for dbplyr](https://cran.r-project.org/web/packages/dbplyr/vignettes/dbplyr.html) Hadley Wickham explains:
> The most important difference between ordinary data frames and remote database queries is that your R code is translated into SQL and executed in the database on the remote server, not in R on your local machine. When working with databases, dplyr tries to be as lazy as possible:
>
>
> * It never pulls data into R unless you explicitly ask for it.
> * It delays doing any work until the last possible moment: it collects together everything you want to do and then sends it to the database in one step.
Exactly when, which, and how much data is returned from the dbms is the topic of this chapter. Exactly how the data is represented in the dbms and then translated to a data frame is discussed in the [DBI specification](https://cran.r-project.org/web/packages/DBI/vignettes/spec.html#_fetch_records_from_a_previously_executed_query_).
Eventually, if you are interacting with a dbms from R you will need to understand the differences between lazy loading, lazy evaluation, and lazy queries.
### 9\.2\.1 Lazy loading
“*Lazy loading is always used for code in packages but is optional (selected by the package maintainer) for datasets in packages.*”[2](#fn2) Lazy loading means that the code for a particular function doesn’t actually get loaded into memory until the last minute – when it’s actually being used.
### 9\.2\.2 Lazy evaluation
Essentially “Lazy evaluation is a programming strategy that allows a symbol to be evaluated only when needed.”[3](#fn3) That means that lazy evaluation is about **symbols** such as function arguments[4](#fn4) when they are evaluated. Tidy evaluation complicates lazy evaluation.[5](#fn5)
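A minimal base\-R illustration: a function argument is a promise that is only evaluated if the function body actually uses it.
```
f <- function(x, y) x * 2
f(10, stop("this error never fires"))   # returns 20; y is never evaluated
```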
### 9\.2\.3 Lazy Queries
“*When you create a "lazy" query, you’re creating a pointer to a set of conditions on the database, but the query isn’t actually run and the data isn’t actually loaded until you call "next" or some similar method to actually fetch the data and load it into an object.*”[6](#fn6)
9\.3 Lazy evaluation and lazy queries
-------------------------------------
When does a lazy query trigger data retrieval? It depends on a lot of factors, as we explore below:
### 9\.3\.1 Create a black box query for experimentation
Define the three tables discussed in the previous chapter to build a *black box* query:
```
sales_person_table <- tbl(con, in_schema("sales", "salesperson")) %>%
select(-rowguid) %>%
rename(sale_info_updated = modifieddate)
employee_table <- tbl(con, in_schema("humanresources", "employee")) %>%
select(-modifieddate, -rowguid)
person_table <- tbl(con, in_schema("person", "person")) %>%
select(-modifieddate, -rowguid)
```
Here is a typical chain of `dplyr` verbs connected with the magrittr `%>%` pipe that will be used to tease out the different behaviors a lazy query exhibits when passed to different R functions. This query joins the three connection objects into a query we’ll call `Q`:
```
Q <- sales_person_table %>%
dplyr::left_join(employee_table, by = c("businessentityid" = "businessentityid")) %>%
dplyr::left_join(person_table , by = c("businessentityid" = "businessentityid")) %>%
dplyr::select(firstname, lastname, salesytd, birthdate)
```
The `str` function gives us a hint at how R is collecting information that can be used to construct and execute a query later on:
```
str(Q, max.level = 2)
```
```
## List of 2
## $ src:List of 2
## ..$ con :Formal class 'PqConnection' [package "RPostgres"] with 3 slots
## ..$ disco: NULL
## ..- attr(*, "class")= chr [1:4] "src_PqConnection" "src_dbi" "src_sql" "src"
## $ ops:List of 4
## ..$ name: chr "select"
## ..$ x :List of 4
## .. ..- attr(*, "class")= chr [1:3] "op_join" "op_double" "op"
## ..$ dots: list()
## ..$ args:List of 1
## ..- attr(*, "class")= chr [1:3] "op_select" "op_single" "op"
## - attr(*, "class")= chr [1:5] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy" ...
```
### 9\.3\.2 Experiment overview
Think of `Q` as a black box for the moment. The following examples will show how `Q` is interpreted differently by different functions. It’s important to remember in the following discussion that the “**and then**” operator (`%>%`) actually wraps the subsequent code inside the preceding code so that `Q %>% print()` is equivalent to `print(Q)`.
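In other words, the two forms below are the same call once the pipe is expanded:

```
Q %>% print()
print(Q)
```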
**Notation**
> A single green check indicates that some rows are returned.
>
> Two green checks indicate that all the rows are returned.
>
> The red X indicates that no rows are returned.
> | R code | Result |
> | --- | --- |
> | [`Q %>% print()`](chapter-lazy-evaluation-queries.html#lazy_q_print) | Prints 10 rows; same as just entering `Q` |
> | [`Q %>% dplyr::as_tibble()`](#Q-as-tibble) | Forces `Q` to be a tibble |
> | [`Q %>% head()`](chapter-lazy-evaluation-queries.html#lazy_q_head) | Prints the first 6 rows |
> | [`Q %>% tail()`](chapter-lazy-evaluation-queries.html#lazy_q_tail) | Error: tail() is not supported by sql sources |
> | [`Q %>% length()`](chapter-lazy-evaluation-queries.html#lazy_q_length) | Counts the rows in `Q` |
> | [`Q %>% str()`](chapter-lazy-evaluation-queries.html#lazy_q_str) | Shows the top 3 levels of the **object** `Q` |
> | [`Q %>% nrow()`](chapter-lazy-evaluation-queries.html#lazy_q_nrow) | **Attempts** to determine the number of rows |
> | [`Q %>% dplyr::tally()`](chapter-lazy-evaluation-queries.html#lazy_q_tally) | Counts all the rows – on the dbms side |
> | [`Q %>% dplyr::collect(n = 20)`](chapter-lazy-evaluation-queries.html#lazy_q_collect) | Prints 20 rows |
> | [`Q %>% dplyr::collect(n = 20) %>% head()`](chapter-lazy-evaluation-queries.html#lazy_q_collect) | Prints 6 rows |
> | [`Q %>% ggplot`](chapter-lazy-evaluation-queries.html#lazy_q_plot-categories) | Plots the query result (a scatterplot here) |
> | [`Q %>% dplyr::show_query()`](#lazy-q-show-query) | **Translates** the lazy query object into SQL |
The next chapter will discuss how to build queries and how to explore intermediate steps. But first, the following subsections provide a more detailed discussion of each row in the preceding table.
### 9\.3\.3 Q %\>% print()
Remember that `Q %>% print()` is equivalent to `print(Q)` and the same as just entering `Q` on the command line. We use the magrittr pipe operator here, because chaining functions highlights how the same object behaves differently in each use.
```
Q %>% print()
```
```
## # Source: lazy query [?? x 4]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## firstname lastname salesytd birthdate
## <chr> <chr> <dbl> <date>
## 1 Stephen Jiang 559698. 1951-10-17
## 2 Michael Blythe 3763178. 1968-12-25
## 3 Linda Mitchell 4251369. 1980-02-27
## 4 Jillian Carson 3189418. 1962-08-29
## 5 Garrett Vargas 1453719. 1975-02-04
## 6 Tsvi Reiter 2315186. 1974-01-18
## 7 Pamela Ansman-Wolfe 1352577. 1974-12-06
## 8 Shu Ito 2458536. 1968-03-09
## 9 José Saraiva 2604541. 1963-12-11
## 10 David Campbell 1573013. 1974-02-11
## # … with more rows
```
R retrieves 10 observations and 4 columns. In its role as IDE, R has provided nicely formatted output that is similar to what it prints for a tibble, with descriptive information about the dataset and each column:

> Source: lazy query \[?? x 4]
>
> Database: postgres \[postgres@localhost:5432/adventureworks]
>
> firstname lastname salesytd birthdate

R has not determined how many rows are left to retrieve, as shown by the `[?? x 4]` dimensions and the `… with more rows` note in the data summary.
### 9\.3\.4 Q %\>% dplyr::as\_tibble()
The `as_tibble()` function causes R to download the whole result set into a local tibble; because this result has only 17 rows, all of them are displayed.
```
Q %>% dplyr::as_tibble()
```
```
## # A tibble: 17 x 4
## firstname lastname salesytd birthdate
## <chr> <chr> <dbl> <date>
## 1 Stephen Jiang 559698. 1951-10-17
## 2 Michael Blythe 3763178. 1968-12-25
## 3 Linda Mitchell 4251369. 1980-02-27
## 4 Jillian Carson 3189418. 1962-08-29
## 5 Garrett Vargas 1453719. 1975-02-04
## 6 Tsvi Reiter 2315186. 1974-01-18
## 7 Pamela Ansman-Wolfe 1352577. 1974-12-06
## 8 Shu Ito 2458536. 1968-03-09
## 9 José Saraiva 2604541. 1963-12-11
## 10 David Campbell 1573013. 1974-02-11
## 11 Tete Mensa-Annan 1576562. 1978-01-05
## 12 Syed Abbas 172524. 1975-01-11
## 13 Lynn Tsoflias 1421811. 1977-02-14
## 14 Amy Alberts 519906. 1957-09-20
## 15 Rachel Valdez 1827067. 1975-07-09
## 16 Jae Pak 4116871. 1968-03-17
## 17 Ranjit Varkey Chudukatil 3121616. 1975-09-30
```
### 9\.3\.5 Q %\>% head()
The `head()` function is very similar to `print()`, but it asks the dbms for only the first 6 rows (the default value of its `n` argument).
```
Q %>% head()
```
```
## # Source: lazy query [?? x 4]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## firstname lastname salesytd birthdate
## <chr> <chr> <dbl> <date>
## 1 Stephen Jiang 559698. 1951-10-17
## 2 Michael Blythe 3763178. 1968-12-25
## 3 Linda Mitchell 4251369. 1980-02-27
## 4 Jillian Carson 3189418. 1962-08-29
## 5 Garrett Vargas 1453719. 1975-02-04
## 6 Tsvi Reiter 2315186. 1974-01-18
```
### 9\.3\.6 Q %\>% tail()
Produces an error, because `Q` does not hold all of the data, so it is not possible to list the last few items from the table:
```
try(
Q %>% tail(),
silent = FALSE,
outFile = stdout()
)
```
```
## Error : tail() is not supported by sql sources
```
### 9\.3\.7 Q %\>% length()
Because `Q` is stored as a two\-element list (`src` and `ops`), `length()` reports the length of that list rather than the number of rows or columns:
```
Q %>% length()
```
```
## [1] 2
```
### 9\.3\.8 Q %\>% str()
The `str()` function shows that `Q` contains no data at all, just a nested description of the operations to be performed. Here we look three levels deep:
```
Q %>% str(max.level = 3)
```
```
## List of 2
## $ src:List of 2
## ..$ con :Formal class 'PqConnection' [package "RPostgres"] with 3 slots
## ..$ disco: NULL
## ..- attr(*, "class")= chr [1:4] "src_PqConnection" "src_dbi" "src_sql" "src"
## $ ops:List of 4
## ..$ name: chr "select"
## ..$ x :List of 4
## .. ..$ name: chr "join"
## .. ..$ x :List of 2
## .. .. ..- attr(*, "class")= chr [1:5] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy" ...
## .. ..$ y :List of 2
## .. .. ..- attr(*, "class")= chr [1:5] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy" ...
## .. ..$ args:List of 4
## .. ..- attr(*, "class")= chr [1:3] "op_join" "op_double" "op"
## ..$ dots: list()
## ..$ args:List of 1
## .. ..$ vars:List of 4
## ..- attr(*, "class")= chr [1:3] "op_select" "op_single" "op"
## - attr(*, "class")= chr [1:5] "tbl_PqConnection" "tbl_dbi" "tbl_sql" "tbl_lazy" ...
```
### 9\.3\.9 Q %\>% nrow()
The `nrow()` function returns `NA` and does not execute a query:
```
Q %>% nrow()
```
```
## [1] NA
```
### 9\.3\.10 Q %\>% dplyr::tally()
The `tally` function actually counts all the rows.
```
Q %>% dplyr::tally()
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## n
## <int>
## 1 17
```
The `nrow()` function only knows that `Q` is a list, so it cannot report a row count. The `tally()` function, on the other hand, tells the dbms to count all the rows. Notice that `Q` contains 17 rows – one for each salesperson.
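If you want that count as a plain value in your R session rather than a one\-row lazy result, you can collect it; a minimal sketch:

```
Q %>% dplyr::tally() %>% dplyr::pull(n)   # runs the count on the dbms and returns 17 as an integer
```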
### 9\.3\.11 Q %\>% dplyr::collect()
The `collect(n = ...)` function forces R to run the query and download up to the specified number of rows:
```
Q %>% dplyr::collect(n = 20)
```
```
## # A tibble: 17 x 4
## firstname lastname salesytd birthdate
## <chr> <chr> <dbl> <date>
## 1 Stephen Jiang 559698. 1951-10-17
## 2 Michael Blythe 3763178. 1968-12-25
## 3 Linda Mitchell 4251369. 1980-02-27
## 4 Jillian Carson 3189418. 1962-08-29
## 5 Garrett Vargas 1453719. 1975-02-04
## 6 Tsvi Reiter 2315186. 1974-01-18
## 7 Pamela Ansman-Wolfe 1352577. 1974-12-06
## 8 Shu Ito 2458536. 1968-03-09
## 9 José Saraiva 2604541. 1963-12-11
## 10 David Campbell 1573013. 1974-02-11
## 11 Tete Mensa-Annan 1576562. 1978-01-05
## 12 Syed Abbas 172524. 1975-01-11
## 13 Lynn Tsoflias 1421811. 1977-02-14
## 14 Amy Alberts 519906. 1957-09-20
## 15 Rachel Valdez 1827067. 1975-07-09
## 16 Jae Pak 4116871. 1968-03-17
## 17 Ranjit Varkey Chudukatil 3121616. 1975-09-30
```
```
Q %>% dplyr::collect(n = 20) %>% head()
```
```
## # A tibble: 6 x 4
## firstname lastname salesytd birthdate
## <chr> <chr> <dbl> <date>
## 1 Stephen Jiang 559698. 1951-10-17
## 2 Michael Blythe 3763178. 1968-12-25
## 3 Linda Mitchell 4251369. 1980-02-27
## 4 Jillian Carson 3189418. 1962-08-29
## 5 Garrett Vargas 1453719. 1975-02-04
## 6 Tsvi Reiter 2315186. 1974-01-18
```
The `dplyr::collect` function triggers the creation of a tibble and caps the number of rows that the DBMS sends to R. Notice that `head` prints only 6 of the rows that R has retrieved.
If you do not provide a value for the `n` argument, *all* of the rows will be retrieved into your R workspace.
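For example, a full `collect()` turns `Q` into an ordinary local tibble, after which functions such as `nrow()` behave normally; a small sketch using the objects defined above:

```
Q_local <- Q %>% dplyr::collect()   # no n: every row (all 17 here) comes into R
nrow(Q_local)                       # returns 17, because Q_local is now a local tibble
```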
### 9\.3\.12 Q %\>% ggplot
Passing the `Q` object to `ggplot` executes the query and plots the result.
```
Q %>% ggplot2::ggplot(aes(birthdate, salesytd)) + geom_point()
```
The result is a scatterplot of year\-to\-date sales (`salesytd`) against each salesperson’s birth date for the 17 rows returned by `Q`.
### 9\.3\.13 Q %\>% dplyr::show\_query()
```
Q %>% dplyr::show_query()
```
```
## <SQL>
## SELECT "firstname", "lastname", "salesytd", "birthdate"
## FROM (SELECT "LHS"."businessentityid" AS "businessentityid", "LHS"."territoryid" AS "territoryid", "LHS"."salesquota" AS "salesquota", "LHS"."bonus" AS "bonus", "LHS"."commissionpct" AS "commissionpct", "LHS"."salesytd" AS "salesytd", "LHS"."saleslastyear" AS "saleslastyear", "LHS"."sale_info_updated" AS "sale_info_updated", "LHS"."nationalidnumber" AS "nationalidnumber", "LHS"."loginid" AS "loginid", "LHS"."jobtitle" AS "jobtitle", "LHS"."birthdate" AS "birthdate", "LHS"."maritalstatus" AS "maritalstatus", "LHS"."gender" AS "gender", "LHS"."hiredate" AS "hiredate", "LHS"."salariedflag" AS "salariedflag", "LHS"."vacationhours" AS "vacationhours", "LHS"."sickleavehours" AS "sickleavehours", "LHS"."currentflag" AS "currentflag", "LHS"."organizationnode" AS "organizationnode", "RHS"."persontype" AS "persontype", "RHS"."namestyle" AS "namestyle", "RHS"."title" AS "title", "RHS"."firstname" AS "firstname", "RHS"."middlename" AS "middlename", "RHS"."lastname" AS "lastname", "RHS"."suffix" AS "suffix", "RHS"."emailpromotion" AS "emailpromotion", "RHS"."additionalcontactinfo" AS "additionalcontactinfo", "RHS"."demographics" AS "demographics"
## FROM (SELECT "LHS"."businessentityid" AS "businessentityid", "LHS"."territoryid" AS "territoryid", "LHS"."salesquota" AS "salesquota", "LHS"."bonus" AS "bonus", "LHS"."commissionpct" AS "commissionpct", "LHS"."salesytd" AS "salesytd", "LHS"."saleslastyear" AS "saleslastyear", "LHS"."sale_info_updated" AS "sale_info_updated", "RHS"."nationalidnumber" AS "nationalidnumber", "RHS"."loginid" AS "loginid", "RHS"."jobtitle" AS "jobtitle", "RHS"."birthdate" AS "birthdate", "RHS"."maritalstatus" AS "maritalstatus", "RHS"."gender" AS "gender", "RHS"."hiredate" AS "hiredate", "RHS"."salariedflag" AS "salariedflag", "RHS"."vacationhours" AS "vacationhours", "RHS"."sickleavehours" AS "sickleavehours", "RHS"."currentflag" AS "currentflag", "RHS"."organizationnode" AS "organizationnode"
## FROM (SELECT "businessentityid", "territoryid", "salesquota", "bonus", "commissionpct", "salesytd", "saleslastyear", "modifieddate" AS "sale_info_updated"
## FROM sales.salesperson) "LHS"
## LEFT JOIN (SELECT "businessentityid", "nationalidnumber", "loginid", "jobtitle", "birthdate", "maritalstatus", "gender", "hiredate", "salariedflag", "vacationhours", "sickleavehours", "currentflag", "organizationnode"
## FROM humanresources.employee) "RHS"
## ON ("LHS"."businessentityid" = "RHS"."businessentityid")
## ) "LHS"
## LEFT JOIN (SELECT "businessentityid", "persontype", "namestyle", "title", "firstname", "middlename", "lastname", "suffix", "emailpromotion", "additionalcontactinfo", "demographics"
## FROM person.person) "RHS"
## ON ("LHS"."businessentityid" = "RHS"."businessentityid")
## ) "dbplyr_009"
```
Hand\-written SQL code to do the same job will probably look a lot nicer and could be more efficient, but functionally `dplyr` does the job.
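For comparison, here is a sketch of what a hand\-written equivalent might look like when sent directly through DBI; the table and column names are taken from the generated SQL above:

```
hand_written <- DBI::dbGetQuery(con, "
  SELECT p.firstname, p.lastname, sp.salesytd, e.birthdate
  FROM sales.salesperson sp
  LEFT JOIN humanresources.employee e ON sp.businessentityid = e.businessentityid
  LEFT JOIN person.person p ON sp.businessentityid = p.businessentityid
")
```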
### Disconnect from the database and stop Docker
```
dbDisconnect(con)
# or if using the connections package, use:
# connection_close(con)
sp_docker_stop("adventureworks")
```
9\.4 Other resources
--------------------
* Benjamin S. Baumer. 2017\. A Grammar for Reproducible and Painless Extract\-Transform\-Load Operations on Medium Data. [https://arxiv.org/abs/1708\.07073](https://arxiv.org/abs/1708.07073)
* dplyr Reference documentation: Remote tables. [https://dplyr.tidyverse.org/reference/index.html\#section\-remote\-tables](https://dplyr.tidyverse.org/reference/index.html#section-remote-tables)
* Data Carpentry. SQL Databases and R. [https://datacarpentry.org/R\-ecology\-lesson/05\-r\-and\-databases.html](https://datacarpentry.org/R-ecology-lesson/05-r-and-databases.html)
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-lazy-evaluation-and-timing.html |
Chapter 10 Lazy Evaluation and Execution Environment
====================================================
> This chapter:
>
>
> * Builds on the lazy loading discussion in the previous chapter
> * Demonstrates how using `dplyr::collect()` creates a boundary between code that is sent to the dbms and code that is executed locally in R
10\.1 Setup
-----------
The following packages are used in this chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(dbplyr)
require(knitr)
library(bookdown)
library(sqlpetr)
sleep_default <- 3
```
If you have not yet set up the Docker container with PostgreSQL and the adventureworks database, go back to those instructions (*Build the pet\-sql Docker Image*) to configure your environment. Otherwise, start your `adventureworks` container:
```
sqlpetr::sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
```
Connect to the database:
```
con <- dbConnect(
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
user = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"),
dbname = "adventureworks",
port = 5432)
```
Here is a simple chain of `dplyr` verbs similar to the query used to illustrate issues in the last chapter. Note that the previous chapter followed this book’s convention of creating a connection object for each table and fully qualifying function names (e.g., specifying the package). In practice, it’s possible and convenient to use the more abbreviated notation shown here:
```
Q <- tbl(con, in_schema("sales", "salesperson")) %>%
left_join(tbl(con, in_schema("humanresources", "employee")), by = c("businessentityid" = "businessentityid")) %>%
select(birthdate, saleslastyear)
Q
```
```
## # Source: lazy query [?? x 2]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## birthdate saleslastyear
## <date> <dbl>
## 1 1951-10-17 0
## 2 1968-12-25 1750406.
## 3 1980-02-27 1439156.
## 4 1962-08-29 1997186.
## 5 1975-02-04 1620277.
## 6 1974-01-18 1849641.
## 7 1974-12-06 1927059.
## 8 1968-03-09 2073506.
## 9 1963-12-11 2038235.
## 10 1974-02-11 1371635.
## # … with more rows
```
### 10\.1\.1 Experiment overview
Think of `Q` as a black box for the moment. The following examples will show how `Q` is interpreted differently by different functions. It’s important to remember in the following discussion that the “**and then**” operator (`%>%`) actually wraps the subsequent code inside the preceding code so that `Q %>% print()` is equivalent to `print(Q)`.
**Notation**
> A single green check indicates that some rows are returned.
>
> Two green checks indicate that all the rows are returned.
>
> The red X indicates that no rows are returned.
> | R code | Result |
> | --- | --- |
> | **Time\-based, execution environment issues** | |
> | [`Qc <- Q %>% count(saleslastyear, sort = TRUE)`](chapter-lazy-evaluation-and-timing.html#lazy_q_build) | **Extends** the lazy query object |
The next chapter will discuss how to build queries and how to explore intermediate steps. But first, the following subsections provide a more detailed discussion of each row in the preceding table.
### 10\.1\.2 Time\-based, execution environment issues
Remember that if the expression is assigned to an object, it is not executed. If an expression is entered on the command line or appears in your script by itself, a `print()` function is implied.
> *These two are different:*
>
> `Q %>% sum(saleslastyear)`
>
> `Q_query <- Q %>% sum(saleslastyear)`
This behavior is the basis of a useful debugging and development process where queries are built up incrementally.
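A concrete sketch using the `Qc` expression from the table above: assignment stores the extended recipe, and printing finally runs it.

```
Qc <- Q %>% count(saleslastyear, sort = TRUE)   # nothing is sent to the dbms yet
Qc                                              # implied print() generates the SQL and fetches rows
```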
### 10\.1\.3 Q %\>% `more dplyr`
Because `Q` does not end with a `collect()` function, we can run it repeatedly, adding `dplyr` expressions, and only get 10 rows back. Every time we add a `dplyr` expression to the chain, R rewrites the SQL code. For example:
As we understand more about the data, we simply add dplyr expressions to pinpoint what we are looking for:
```
Q %>% filter(saleslastyear > 40) %>%
arrange(desc(saleslastyear))
```
```
## # Source: lazy query [?? x 2]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## # Ordered by: desc(saleslastyear)
## birthdate saleslastyear
## <date> <dbl>
## 1 1975-09-30 2396540.
## 2 1977-02-14 2278549.
## 3 1968-03-09 2073506.
## 4 1963-12-11 2038235.
## 5 1962-08-29 1997186.
## 6 1974-12-06 1927059.
## 7 1974-01-18 1849641.
## 8 1968-12-25 1750406.
## 9 1968-03-17 1635823.
## 10 1975-02-04 1620277.
## # … with more rows
```
```
Q %>% summarize(total_sales = sum(saleslastyear, na.rm = TRUE), sales_persons_count = n())
```
```
## # Source: lazy query [?? x 2]
## # Database: postgres [postgres@localhost:5432/adventureworks]
## total_sales sales_persons_count
## <dbl> <int>
## 1 23685964. 17
```
When all the accumulated `dplyr` verbs are executed, they are submitted to the dbms, and the number of rows returned follows the same rules as discussed above.
### Interspersing SQL and dplyr
```
Q %>%
# mutate(birthdate = date(birthdate)) %>%
show_query()
```
```
## <SQL>
## SELECT "birthdate", "saleslastyear"
## FROM (SELECT "LHS"."businessentityid" AS "businessentityid", "LHS"."territoryid" AS "territoryid", "LHS"."salesquota" AS "salesquota", "LHS"."bonus" AS "bonus", "LHS"."commissionpct" AS "commissionpct", "LHS"."salesytd" AS "salesytd", "LHS"."saleslastyear" AS "saleslastyear", "LHS"."rowguid" AS "rowguid.x", "LHS"."modifieddate" AS "modifieddate.x", "RHS"."nationalidnumber" AS "nationalidnumber", "RHS"."loginid" AS "loginid", "RHS"."jobtitle" AS "jobtitle", "RHS"."birthdate" AS "birthdate", "RHS"."maritalstatus" AS "maritalstatus", "RHS"."gender" AS "gender", "RHS"."hiredate" AS "hiredate", "RHS"."salariedflag" AS "salariedflag", "RHS"."vacationhours" AS "vacationhours", "RHS"."sickleavehours" AS "sickleavehours", "RHS"."currentflag" AS "currentflag", "RHS"."rowguid" AS "rowguid.y", "RHS"."modifieddate" AS "modifieddate.y", "RHS"."organizationnode" AS "organizationnode"
## FROM sales.salesperson AS "LHS"
## LEFT JOIN humanresources.employee AS "RHS"
## ON ("LHS"."businessentityid" = "RHS"."businessentityid")
## ) "dbplyr_006"
```
```
# Need to come up with a different example illustrating where
# the `collect` statement goes.
# sales_person_table %>%
# mutate(birthdate = date(birthdate))
#
# try(sales_person_table %>%
# mutate(birthdate = lubridate::date(birthdate))
# )
#
# sales_person_table %>% collect() %>%
# mutate(birthdate = lubridate::date(birthdate))
```
This may not be relevant here, since dates in adventureworks already come through as `date` values. The idea is to show how functions are interpreted *before* being sent to the SQL translator; a runnable sketch follows the placeholder block below.
```
to_char <- function(date, fmt) {return(fmt)}
# sales_person_table %>%
# mutate(birthdate = to_char(birthdate, "YYYY-MM")) %>%
# show_query()
#
# sales_person_table %>%
# mutate(birthdate = to_char(birthdate, "YYYY-MM"))
```
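The sketch below assumes the `sales.salesperson` table and its `modifieddate` column used earlier in this chapter. Because `dbplyr` passes functions it does not recognize through to the dbms untranslated, PostgreSQL’s own `to_char()` (not the R placeholder defined above) ends up doing the work:

```
tbl(con, in_schema("sales", "salesperson")) %>%
  mutate(modified_month = to_char(modifieddate, "YYYY-MM")) %>%
  show_query()   # the to_char() call appears verbatim in the generated SQL
```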
### 10\.1\.4 Many handy R functions can’t be translated to SQL
It just so happens that PostgreSQL has a `date` function that does the same thing as the `date` function in the `lubridate` package. In the following code the `date` function is executed by PostgreSQL.
```
# sales_person_table %>% mutate(birthdate = date(birthdate))
```
Functions that `dplyr` cannot translate are passed to the dbms as\-is, where they may fail, unless we explicitly tell `dplyr` to stop translating and bring the results back to the R environment for local processing:
```
try(sales_person_table %>% collect() %>%
mutate(birthdate = lubridate::date(birthdate)))
```
```
## Error in eval(lhs, parent, parent) :
## object 'sales_person_table' not found
```
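The call above fails only because `sales_person_table` is not defined in this chapter. Here is a sketch of the intended `collect()` boundary, assuming the salesperson connection object from the previous chapter; everything before `collect()` becomes SQL, everything after it runs locally in R:

```
sales_person_table <- tbl(con, in_schema("sales", "salesperson"))
sales_person_table %>%
  collect() %>%                                          # rows arrive in R here
  mutate(modified_year = lubridate::year(modifieddate))  # local processing with lubridate
```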
### 10\.1\.5 Further lazy execution examples
See more examples of lazy execution [here](https://datacarpentry.org/R-ecology-lesson/05-r-and-databases.html).
10\.2 Disconnect from the database and stop Docker
--------------------------------------------------
```
dbDisconnect(con)
sp_docker_stop("adventureworks")
```
10\.3 Other resources
---------------------
* Benjamin S. Baumer. 2017\. A Grammar for Reproducible and Painless Extract\-Transform\-Load Operations on Medium Data. [https://arxiv.org/abs/1708\.07073](https://arxiv.org/abs/1708.07073)
* dplyr Reference documentation: Remote tables. [https://dplyr.tidyverse.org/reference/index.html\#section\-remote\-tables](https://dplyr.tidyverse.org/reference/index.html#section-remote-tables)
* Data Carpentry. SQL Databases and R. [https://datacarpentry.org/R\-ecology\-lesson/05\-r\-and\-databases.html](https://datacarpentry.org/R-ecology-lesson/05-r-and-databases.html)
Chapter 11 Leveraging Database Views
====================================
> This chapter demonstrates how to:
>
>
> * Understand database views and their uses
> * Unpack a database view to see what it’s doing
> * Reproduce a database view with dplyr code
> * Write an alternative to a view that provides more details
> * Create a database view either for personal use or for submittal to your enterprise DBA
11\.1 Setup our standard working environment
--------------------------------------------
Use these libraries:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(connections)
library(glue)
require(knitr)
library(dbplyr)
library(sqlpetr)
library(bookdown)
library(lubridate)
library(gt)
sleep_default <- 3 # seconds to wait for the Docker container to finish starting
```
Connect to `adventureworks`:
```
sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
```
```
# con <- connection_open( # use in an interactive session
con <- dbConnect( # use in other settings
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
port = 5432,
user = "postgres",
password = "postgres",
dbname = "adventureworks"
)
dbExecute(con, "set search_path to sales;") # so that `dbListFields()` works
```
```
## [1] 0
```
11\.2 The role of database `views`
----------------------------------
A database `view` is an SQL query that is stored in the database. Most `views` are used for data retrieval, since they usually denormalize the tables involved. Because they are standardized and well\-understood, they can save you a lot of work and document a query that can serve as a model to build on.
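Since a view is nothing more than a stored `SELECT`, creating one for your own use is a one\-liner from R. The sketch below is not from the book: the view name `sales.v_salesperson_totals` is hypothetical, while the table and columns (`sales.salesorderheader`, `salespersonid`, `subtotal`) are the ones used later in this chapter.

```
# Sketch: store a frequently used aggregation as a personal view
dbExecute(con, "
  CREATE OR REPLACE VIEW sales.v_salesperson_totals AS
  SELECT salespersonid, sum(subtotal) AS total_sales
  FROM sales.salesorderheader
  GROUP BY salespersonid;
")
```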
### 11\.2\.1 Why database `views` are useful
Database `views` are useful for many reasons.
* **Authoritative**: database `views` are typically written by the business application vendor or DBA, so they contain authoritative knowledge about the structure and intended use of the database.
* **Performance**: `views` are designed to gather data in an efficient way, using all the indexes in an efficient sequence and doing as much work on the database server as possible.
* **Abstraction**: `views` are abstractions or simplifications of complex queries that provide customary (useful) aggregations. Common examples would be monthly totals or aggregation of activity tied to one individual.
* **Reuse**: a `view` puts commonly used code in one place where it can be used for many purposes by many people. If there is a change or a problem found in a `view`, it only needs to be fixed in one place, rather than having to change many places downstream.
* **Security**: a view can give selective access to someone who does not have access to underlying tables or columns.
* **Provenance**: `views` standardize data provenance. In the `AdventureWorks` database, for example, all of the views are named in a consistent way that suggests the underlying tables they query, and they all start with a **v**.
The bottom line is that `views` can save you a lot of work.
### 11\.2\.2 Rely on – **and** be critical of – `views`
Because they represent a commonly used view of the database, it might seem like a `view` is always right. Even though they are conventional and authorized, they may still need verification or auditing, especially when used for a purpose other than the original intent. They can guide you toward what you need from the database, but they could also mislead simply because they are easy to use and readily available. People may forget why a specific view exists and who is using it, so any given view might be a forgotten vestige, part of a production data pipeline, or a priceless nugget of insight. Who knows? Consider the `view`’s owner, its schema, whether it is a materialized view or not, whether it has a trigger, and what the likely intention behind it was.
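If you want to check some of those attributes programmatically, PostgreSQL’s catalog views can help. A small sketch (the `pg_views` catalog is standard PostgreSQL; the filter value is this chapter’s view):

```
# Who owns the view, and in which schema does it live?
dbGetQuery(con, "
  SELECT schemaname, viewname, viewowner
  FROM pg_views
  WHERE viewname = 'vsalespersonsalesbyfiscalyearsdata';
")
```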
11\.3 Unpacking the elements of a `view` in the Tidyverse
---------------------------------------------------------
Since a view is in some ways just like an ordinary table, we can use familiar tools in the same way as they are used on a database table. For example, the simplest way of getting a list of columns in a `view` is the same as it is for a regular table:
```
dbListFields(con, "vsalespersonsalesbyfiscalyearsdata")
```
```
## [1] "salespersonid" "fullname" "jobtitle" "salesterritory"
## [5] "salestotal" "fiscalyear"
```
### 11\.3\.1 Use a `view` just like any other table
From a retrieval perspective a database `view` is just like any other table. Using a view to retrieve data from the database will be completely standard across all flavors of SQL.
```
v_salesperson_sales_by_fiscal_years_data <-
tbl(con, in_schema("sales","vsalespersonsalesbyfiscalyearsdata")) %>%
collect()
str(v_salesperson_sales_by_fiscal_years_data)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 48 obs. of 6 variables:
## $ salespersonid : int 275 275 275 275 276 276 276 276 277 277 ...
## $ fullname : chr "Michael G Blythe" "Michael G Blythe" "Michael G Blythe" "Michael G Blythe" ...
## $ jobtitle : chr "Sales Representative" "Sales Representative" "Sales Representative" "Sales Representative" ...
## $ salesterritory: chr "Northeast" "Northeast" "Northeast" "Northeast" ...
## $ salestotal : num 63763 2399593 3765459 3065088 5476 ...
## $ fiscalyear : num 2011 2012 2013 2014 2011 ...
```
As we will see, our sample `view`, `vsalespersonsalesbyfiscalyearsdata`, joins five different tables. We can assume that subsetting or calculation on any of the columns in the component tables will happen behind the scenes, on the database side, and will be done correctly. For example, the following query filters on a column that exists in only one of the `view`’s component tables.
```
tbl(con, in_schema("sales","vsalespersonsalesbyfiscalyearsdata")) %>%
count(salesterritory, fiscalyear) %>%
collect() %>% # ---- pull data here ---- #
pivot_wider(names_from = fiscalyear, values_from = n, names_prefix = "FY_")
```
```
## # A tibble: 10 x 5
## # Groups: salesterritory [10]
## salesterritory FY_2014 FY_2011 FY_2013 FY_2012
## <chr> <int> <int> <int> <int>
## 1 Southwest 2 2 2 2
## 2 Northeast 1 1 1 1
## 3 Southeast 1 1 1 1
## 4 France 1 NA 1 1
## 5 Canada 2 2 2 2
## 6 United Kingdom 1 NA 1 1
## 7 Northwest 3 2 3 2
## 8 Central 1 1 1 1
## 9 Australia 1 NA 1 NA
## 10 Germany 1 NA 1 NA
```
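To confirm that the counting really happens on the server rather than in R, we can ask dbplyr for the translated SQL before collecting (a sketch; the generated SQL is not reproduced here):

```
# Sketch: inspect the SQL that the count() above generates against the view
tbl(con, in_schema("sales", "vsalespersonsalesbyfiscalyearsdata")) %>%
  count(salesterritory, fiscalyear) %>%
  show_query()
```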
Although finding out what a view does behind the scenes requires that you use functions that are **not** standard, doing so has several general purposes:
* It is satisfying to know what’s going on behind the scenes.
* Specific elements or components of a `view` might be worth plagiarizing or incorporating in our queries.
* It is necessary to understand the mechanics of a `view` if we are going to build on what it does or intend to extend or modify it.
### 11\.3\.2 SQL source code
Functions for inspecting a view itself are not part of the ANSI standard, so they will be [database\-specific](https://www.postgresql.org/docs/9.5/functions-info.html). Here is the code to retrieve the definition of a PostgreSQL view (using the `pg_get_viewdef` function):
```
view_definition <- dbGetQuery(con, "select
pg_get_viewdef('sales.vsalespersonsalesbyfiscalyearsdata',
true)")
```
The PostgreSQL `pg_get_viewdef` function returns a data frame with one column named `pg_get_viewdef` and one row. To properly view its contents, the `\n` character strings need to be turned into new\-lines.
```
cat(unlist(view_definition$pg_get_viewdef))
```
```
## SELECT granular.salespersonid,
## granular.fullname,
## granular.jobtitle,
## granular.salesterritory,
## sum(granular.subtotal) AS salestotal,
## granular.fiscalyear
## FROM ( SELECT soh.salespersonid,
## ((p.firstname::text || ' '::text) || COALESCE(p.middlename::text || ' '::text, ''::text)) || p.lastname::text AS fullname,
## e.jobtitle,
## st.name AS salesterritory,
## soh.subtotal,
## date_part('year'::text, soh.orderdate + '6 mons'::interval) AS fiscalyear
## FROM salesperson sp
## JOIN salesorderheader soh ON sp.businessentityid = soh.salespersonid
## JOIN salesterritory st ON sp.territoryid = st.territoryid
## JOIN humanresources.employee e ON soh.salespersonid = e.businessentityid
## JOIN person.person p ON p.businessentityid = sp.businessentityid) granular
## GROUP BY granular.salespersonid, granular.fullname, granular.jobtitle, granular.salesterritory, granular.fiscalyear;
```
Even if you don’t intend to become completely fluent in SQL, it’s useful to study as much of it as possible. Studying the SQL in a view is particularly useful to:
* Test your understanding of the database structure, elements, and usage
* Extend what’s already been done to extract useful data from the database
### 11\.3\.3 The ERD as context for SQL code
A database Entity Relationship Diagram (ERD) is very helpful in making sense of the SQL in a `view`. The ERD for `AdventureWorks` is [here](https://i.stack.imgur.com/LMu4W.gif). If a published ERD is not available, a tool like the PostgreSQL *pg\_modeler* is capable of generating an ERD (or at least describing the portion of the database that is visible to you).
### 11\.3\.4 Selecting relevant tables and columns
Before beginning to write code, it can be helpful to actually mark up the ERD to identify the specific tables that are involved in the view you are going to reproduce.
Define each table that is involved and identify the columns that will be needed from that table. The `sales.vsalespersonsalesbyfiscalyearsdata` view joins data from five different tables:
1. sales\_order\_header
2. sales\_territory
3. sales\_person
4. employee
5. person
For each of the tables in the `view`, we select the columns that appear in `sales.vsalespersonsalesbyfiscalyearsdata`. Selecting columns in this way prevents joins that `dbplyr` would otherwise make automatically based on common column names, such as the `rowguid` and `modifieddate` columns that appear in almost all `AdventureWorks` tables. In the following code we follow the convention that any column we change or create on the fly uses a snake\_case naming convention.
```
sales_order_header <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, salespersonid, subtotal)
sales_territory <- tbl(con, in_schema("sales", "salesterritory")) %>%
select(territoryid, territory_name = name)
sales_person <- tbl(con, in_schema("sales", "salesperson")) %>%
select(businessentityid, territoryid)
employee <- tbl(con, in_schema("humanresources", "employee")) %>%
select(businessentityid, jobtitle)
```
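To see why those `select()` calls matter, here is a hedged illustration (not from the book) of what happens without them: with no `by` argument and no column selection, `dbplyr` matches every shared column name, so the join condition would include `rowguid` and `modifieddate` as well as `businessentityid`.

```
# Sketch: an un-selected join lets dbplyr match all common column names
tbl(con, in_schema("sales", "salesperson")) %>%
  left_join(tbl(con, in_schema("humanresources", "employee"))) %>%
  show_query()
```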
In addition to selecting columns as shown in the statements above, `mutate` and other functions help us replicate code in the `view` such as:
```
((p.firstname::text || ' '::text) ||
COALESCE(p.middlename::text || ' '::text,
''::text)) || p.lastname::text AS fullname
```
The following dplyr code pastes the first, middle and last names together to make `full_name`:
```
person <- tbl(con, in_schema("person", "person")) %>%
mutate(full_name = paste(firstname, middlename, lastname)) %>%
select(businessentityid, full_name)
```
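If you are curious how `paste()` is rendered in PostgreSQL relative to the `COALESCE` construct in the view, you can peek at the translation before collecting (a sketch; the generated SQL is not reproduced here):

```
# Sketch: inspect how dbplyr translates the full_name calculation
person %>% show_query()
```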
Double\-check the names that are defined in each `tbl` object. The following function will show the names of columns in the tables we’ve defined:
```
getnames <- function(table) {
{table} %>%
collect(n = 5) %>% # ---- pull data here ---- #
names()
}
```
Verify the names selected:
```
getnames(sales_order_header)
```
```
## [1] "orderdate" "salespersonid" "subtotal"
```
```
getnames(sales_territory)
```
```
## [1] "territoryid" "territory_name"
```
```
getnames(sales_person)
```
```
## [1] "businessentityid" "territoryid"
```
```
getnames(employee)
```
```
## [1] "businessentityid" "jobtitle"
```
```
getnames(person)
```
```
## [1] "businessentityid" "full_name"
```
### 11\.3\.5 Join the tables together
First, join and download all of the data pertaining to a person. Notice that `employee` and `person` share the `businessentityid` column with `sales_person`, while `sales_territory` shares `territoryid`, so dplyr will join each pair on its common column automatically. And since we know that all of these tables are small, we don’t mind a query that joins and downloads all the data.
```
salesperson_info <- sales_person %>%
left_join(employee) %>%
left_join(person) %>%
left_join(sales_territory) %>%
collect()
```
```
## Joining, by = "businessentityid"
## Joining, by = "businessentityid"
```
```
## Joining, by = "territoryid"
```
```
str(salesperson_info)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 17 obs. of 5 variables:
## $ businessentityid: int 274 275 276 277 278 279 280 281 282 283 ...
## $ territoryid : int NA 2 4 3 6 5 1 4 6 1 ...
## $ jobtitle : chr "North American Sales Manager" "Sales Representative" "Sales Representative" "Sales Representative" ...
## $ full_name : chr "Stephen Y Jiang" "Michael G Blythe" "Linda C Mitchell" "Jillian Carson" ...
## $ territory_name : chr NA "Northeast" "Southwest" "Central" ...
```
The one part of the view that we haven’t replicated is:
`date_part('year'::text, soh.orderdate`
`+ '6 mons'::interval) AS fiscalyear`
The `lubridate` package makes it very easy to convert `orderdate` to `fiscal_year`. Doing that same conversion without lubridate (i.e., with only dplyr and **ANSI\-STANDARD** SQL) is harder. Therefore we just pull the data from the server after the `left_join` and do the rest of the job on the R side. Note that this query doesn’t correct the problematic entry dates that we explored in the chapter on [Asking Business Questions From a Single Table](chapter-exploring-a-single-table.html#chapter_exploring-a-single-table). Grouping by `businessentityid` and `orderdate` collapses many rows into a much smaller table, and we know from our previous investigation that Sales Rep sales are entered more or less once a month. Therefore most of the crunching in this query still happens on the database server side.
```
sales_data_fiscal_year <- sales_person %>%
left_join(sales_order_header, by = c("businessentityid" = "salespersonid")) %>%
group_by(businessentityid, orderdate) %>%
summarize(sales_total = sum(subtotal, na.rm = TRUE)) %>%
mutate(
orderdate = as.Date(orderdate),
day = day(orderdate)
) %>%
collect() %>% # ---- pull data here ---- #
mutate(
fiscal_year = year(orderdate %m+% months(6))
) %>%
ungroup() %>%
group_by(businessentityid, fiscal_year) %>%
summarize(sales_total = sum(sales_total, na.rm = FALSE)) %>%
ungroup()
```
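The fiscal\-year trick deserves a quick sanity check. The view’s `+ '6 mons'` logic implies a fiscal year that starts on July 1: shifting a date forward six months and taking the calendar year of the result gives the fiscal year. A small sketch with made\-up dates (an illustration, not data from the database):

```
# Sketch: June 30 falls in FY2013, July 1 falls in FY2014
dates <- as.Date(c("2013-06-30", "2013-07-01"))
year(dates %m+% months(6)) # 2013 2014
```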
Put the two parts, `sales_data_fiscal_year` and `salesperson_info`, together to yield the final query.
```
salesperson_sales_by_fiscal_years_dplyr <- sales_data_fiscal_year %>%
left_join(salesperson_info) %>%
filter(!is.na(territoryid))
```
```
## Joining, by = "businessentityid"
```
Notice that we’re dropping the Sales Managers who appear in the `salesperson_info` data frame because they don’t have a `territoryid`.
11\.4 Compare the official view and the dplyr output
----------------------------------------------------
Use `pivot_wider` to make it easier to compare the native `view` to our dplyr replicate.
```
salesperson_sales_by_fiscal_years_dplyr %>%
select(-jobtitle, -businessentityid, -territoryid) %>%
pivot_wider(names_from = fiscal_year, values_from = sales_total,
values_fill = list(sales_total = 0)) %>%
arrange(territory_name, full_name) %>%
filter(territory_name == "Canada")
```
```
## # A tibble: 2 x 6
## full_name territory_name `2011` `2012` `2013` `2014`
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Garrett R Vargas Canada 9109. 1254087. 1179531. 1166720.
## 2 José Edvaldo Saraiva Canada 106252. 2171995. 1388793. 2259378.
```
```
v_salesperson_sales_by_fiscal_years_data %>%
select(-jobtitle, -salespersonid) %>%
pivot_wider(names_from = fiscalyear, values_from = salestotal,
values_fill = list(salestotal = 0)) %>%
arrange(salesterritory, fullname) %>%
filter(salesterritory == "Canada")
```
```
## # A tibble: 2 x 6
## fullname salesterritory `2011` `2012` `2013` `2014`
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Garrett R Vargas Canada 9109. 1254087. 1179531. 1166720.
## 2 José Edvaldo Saraiva Canada 106252. 2171995. 1388793. 2259378.
```
The yearly totals match exactly. The column names don’t line up because we are using the snake\_case convention for derived columns.
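For a check that goes beyond eyeballing the two printed tibbles, the sketch below compares the Canada totals programmatically; up to floating\-point tolerance it should return `TRUE` (an assumption based on the matching output above).

```
# Sketch: compare the dplyr replica with the native view for Canada
dplyr_canada <- salesperson_sales_by_fiscal_years_dplyr %>%
  filter(territory_name == "Canada") %>%
  arrange(full_name, fiscal_year) %>%
  pull(sales_total)
view_canada <- v_salesperson_sales_by_fiscal_years_data %>%
  filter(salesterritory == "Canada") %>%
  arrange(fullname, fiscalyear) %>%
  pull(salestotal)
all.equal(dplyr_canada, view_canada)
```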
11\.5 Revise the view to summarize by quarter not fiscal year
-------------------------------------------------------------
Summarizing sales data by Sales Rep and quarter requires the `%m+%` infix operator from lubridate. The interleaved comments in the query below explain the steps. The totals in this revised query differ from the fiscal\-year summaries shown above only by a rounding error.
```
tbl(con, in_schema("sales", "salesorderheader")) %>%
group_by(salespersonid, orderdate) %>%
summarize(subtotal = sum(subtotal, na.rm = TRUE)) %>%
collect() %>% # ---- pull data here ---- #
# Adding 6 months to orderdate requires a lubridate function
mutate(orderdate = as.Date(orderdate) %m+% months(6),
year = year(orderdate),
quarter = quarter(orderdate)) %>%
ungroup() %>%
group_by(salespersonid, year, quarter) %>%
summarize(subtotal = round(sum(subtotal, na.rm = TRUE), digits = 0)) %>%
ungroup() %>%
# Join with the person information previously gathered
left_join(salesperson_info, by = c("salespersonid" = "businessentityid")) %>%
filter(territory_name == "Canada") %>%
# Pivot to make it easier to see what's going on
pivot_wider(names_from = quarter, values_from = subtotal,
values_fill = list(Q1 = 0, Q2 = 0, Q3 = 0, Q4 = 0), names_prefix = "Q", id_cols = full_name:year) %>%
select(`Name` = full_name, year, Q1, Q2, Q3, Q4) %>%
mutate(`Year Total` = Q1 + Q2 + Q3 + Q4) %>%
head(., n = 10) %>%
gt() %>%
fmt_number(use_seps = TRUE, decimals = 0, columns = vars(Q1,Q2, Q3, Q4, `Year Total`))
```
| Name | year | Q1 | Q2 | Q3 | Q4 | Year Total |
| --- | --- | --- | --- | --- | --- | --- |
| Garrett R Vargas | 2011 | NA | NA | NA | 9,109 | NA |
| Garrett R Vargas | 2012 | 233,696 | 257,287 | 410,518 | 352,587 | 1,254,088 |
| Garrett R Vargas | 2013 | 316,818 | 203,647 | 291,333 | 367,732 | 1,179,530 |
| Garrett R Vargas | 2014 | 393,788 | 336,984 | 290,536 | 145,413 | 1,166,721 |
| José Edvaldo Saraiva | 2011 | NA | NA | NA | 106,252 | NA |
| José Edvaldo Saraiva | 2012 | 521,794 | 546,962 | 795,861 | 307,379 | 2,171,996 |
| José Edvaldo Saraiva | 2013 | 408,415 | 324,062 | 231,991 | 424,326 | 1,388,794 |
| José Edvaldo Saraiva | 2014 | 748,430 | 466,137 | 618,832 | 425,979 | 2,259,378 |
11\.6 Clean up and close down
-----------------------------
```
# connection_close(con) # use this if the connection was opened with connection_open()
dbDisconnect(con) # our connection came from dbConnect(), so close it with DBI
```
11\.1 Setup our standard working environment
--------------------------------------------
Use these libraries:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(connections)
library(glue)
require(knitr)
library(dbplyr)
library(sqlpetr)
library(bookdown)
library(lubridate)
library(gt)
```
Connect to `adventureworks`:
```
sp_docker_start("adventureworks")
Sys.sleep(sleep_default)
```
```
# con <- connection_open( # use in an interactive session
con <- dbConnect( # use in other settings
RPostgres::Postgres(),
# without the previous and next lines, some functions fail with bigint data
# so change int64 to integer
bigint = "integer",
host = "localhost",
port = 5432,
user = "postgres",
password = "postgres",
dbname = "adventureworks"
)
dbExecute(con, "set search_path to sales;") # so that `dbListFields()` works
```
```
## [1] 0
```
11\.2 The role of database `views`
----------------------------------
A database `view` is an SQL query that is stored in the database. Most `views` are used for data retrieval, since they usually denormalize the tables involved. Because they are standardized and well\-understood, they can save you a lot of work and document a query that can serve as a model to build on.
### 11\.2\.1 Why database `views` are useful
Database `views` are useful for many reasons.
* **Authoritative**: database `views` are typically written by the business application vendor or DBA, so they contain authoritative knowledge about the structure and intended use of the database.
* **Performance**: `views` are designed to gather data in an efficient way, using all the indexes in an efficient sequence and doing as much work on the database server as possible.
* **Abstraction**: `views` are abstractions or simplifications of complex queries that provide customary (useful) aggregations. Common examples would be monthly totals or aggregation of activity tied to one individual.
* **Reuse**: a `view` puts commonly used code in one place where it can be used for many purposes by many people. If there is a change or a problem found in a `view`, it only needs to be fixed in one place, rather than having to change many places downstream.
* **Security**: a view can give selective access to someone who does not have access to underlying tables or columns.
* **Provenance**: `views` standardize data provenance. For example, the `AdventureWorks` database all of them are named in a consistent way that suggests the underlying tables that they query. And they all start with a **v**.
The bottom line is that `views` can save you a lot of work.
### 11\.2\.2 Rely on – **and** be critical of – `views`
Because they represent a commonly used view of the database, it might seem like a `view` is always right. Even though they are conventional and authorized, they may still need verification or auditing, especially when used for a purpose other than the original intent. They can guide you toward what you need from the database but they could also mislead because they are easy to use and available. People may forget why a specific view exists and who is using it. Therefore any given view might be a forgotten vestige. part of a production data pipeline or a priceless nugget of insight. Who knows? Consider the `view`’s owner, schema, whether it’s a materialized index view or not, if it has a trigger and what the likely intention was behind the view.
### 11\.2\.1 Why database `views` are useful
Database `views` are useful for many reasons.
* **Authoritative**: database `views` are typically written by the business application vendor or DBA, so they contain authoritative knowledge about the structure and intended use of the database.
* **Performance**: `views` are designed to gather data in an efficient way, using all the indexes in an efficient sequence and doing as much work on the database server as possible.
* **Abstraction**: `views` are abstractions or simplifications of complex queries that provide customary (useful) aggregations. Common examples would be monthly totals or aggregation of activity tied to one individual.
* **Reuse**: a `view` puts commonly used code in one place where it can be used for many purposes by many people. If there is a change or a problem found in a `view`, it only needs to be fixed in one place, rather than having to change many places downstream.
* **Security**: a view can give selective access to someone who does not have access to underlying tables or columns.
* **Provenance**: `views` standardize data provenance. For example, the `AdventureWorks` database all of them are named in a consistent way that suggests the underlying tables that they query. And they all start with a **v**.
The bottom line is that `views` can save you a lot of work.
### 11\.2\.2 Rely on – **and** be critical of – `views`
Because they represent a commonly used view of the database, it might seem like a `view` is always right. Even though they are conventional and authorized, they may still need verification or auditing, especially when used for a purpose other than the original intent. They can guide you toward what you need from the database but they could also mislead because they are easy to use and available. People may forget why a specific view exists and who is using it. Therefore any given view might be a forgotten vestige. part of a production data pipeline or a priceless nugget of insight. Who knows? Consider the `view`’s owner, schema, whether it’s a materialized index view or not, if it has a trigger and what the likely intention was behind the view.
11\.3 Unpacking the elements of a `view` in the Tidyverse
---------------------------------------------------------
Since a view is in some ways just like an ordinary table, we can use familiar tools in the same way as they are used on a database table. For example, the simplest way of getting a list of columns in a `view` is the same as it is for a regular table:
```
dbListFields(con, "vsalespersonsalesbyfiscalyearsdata")
```
```
## [1] "salespersonid" "fullname" "jobtitle" "salesterritory"
## [5] "salestotal" "fiscalyear"
```
### 11\.3\.1 Use a `view` just like any other table
From a retrieval perspective a database `view` is just like any other table. Using a view to retrieve data from the database will be completely standard across all flavors of SQL.
```
v_salesperson_sales_by_fiscal_years_data <-
tbl(con, in_schema("sales","vsalespersonsalesbyfiscalyearsdata")) %>%
collect()
str(v_salesperson_sales_by_fiscal_years_data)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 48 obs. of 6 variables:
## $ salespersonid : int 275 275 275 275 276 276 276 276 277 277 ...
## $ fullname : chr "Michael G Blythe" "Michael G Blythe" "Michael G Blythe" "Michael G Blythe" ...
## $ jobtitle : chr "Sales Representative" "Sales Representative" "Sales Representative" "Sales Representative" ...
## $ salesterritory: chr "Northeast" "Northeast" "Northeast" "Northeast" ...
## $ salestotal : num 63763 2399593 3765459 3065088 5476 ...
## $ fiscalyear : num 2011 2012 2013 2014 2011 ...
```
As we will see, our sample `view`, `vsalespersonsalesbyfiscalyearsdata` joins 5 different tables. We can assume that subsetting or calculation on any of the columns in the component tables will happen behind the scenes, on the database side, and done correctly. For example, the following query filters on a column that exists in only one of the `view`’s component tables.
```
tbl(con, in_schema("sales","vsalespersonsalesbyfiscalyearsdata")) %>%
count(salesterritory, fiscalyear) %>%
collect() %>% # ---- pull data here ---- #
pivot_wider(names_from = fiscalyear, values_from = n, names_prefix = "FY_")
```
```
## # A tibble: 10 x 5
## # Groups: salesterritory [10]
## salesterritory FY_2014 FY_2011 FY_2013 FY_2012
## <chr> <int> <int> <int> <int>
## 1 Southwest 2 2 2 2
## 2 Northeast 1 1 1 1
## 3 Southeast 1 1 1 1
## 4 France 1 NA 1 1
## 5 Canada 2 2 2 2
## 6 United Kingdom 1 NA 1 1
## 7 Northwest 3 2 3 2
## 8 Central 1 1 1 1
## 9 Australia 1 NA 1 NA
## 10 Germany 1 NA 1 NA
```
Although finding out what a view does behind the scenes requires that you use functions that are **not** standard, doing so has several general purposes:
* It is satisfying to know what’s going on behind the scenes.
* Specific elements or components of a `view` might be worth plagiarizing or incorporating in our queries.
* It is necessary to understand the mechanics of a `view` if we are going to build on what it does or intend to extend or modify it.
### 11\.3\.2 SQL source code
Functions for inspecting a view itself are not part of the ANSI standard, so they will be [database\-specific](https://www.postgresql.org/docs/9.5/functions-info.html). Here is the code to retrieve a PostgreSQL view (using the `pg_get_viewdef` function):
```
view_definition <- dbGetQuery(con, "select
pg_get_viewdef('sales.vsalespersonsalesbyfiscalyearsdata',
true)")
```
The PostgreSQL `pg_get_viewdef` function returns a data frame with one column named `pg_get_viewdef` and one row. To properly view its contents, the `\n` character strings need to be turned into new\-lines.
```
cat(unlist(view_definition$pg_get_viewdef))
```
```
## SELECT granular.salespersonid,
## granular.fullname,
## granular.jobtitle,
## granular.salesterritory,
## sum(granular.subtotal) AS salestotal,
## granular.fiscalyear
## FROM ( SELECT soh.salespersonid,
## ((p.firstname::text || ' '::text) || COALESCE(p.middlename::text || ' '::text, ''::text)) || p.lastname::text AS fullname,
## e.jobtitle,
## st.name AS salesterritory,
## soh.subtotal,
## date_part('year'::text, soh.orderdate + '6 mons'::interval) AS fiscalyear
## FROM salesperson sp
## JOIN salesorderheader soh ON sp.businessentityid = soh.salespersonid
## JOIN salesterritory st ON sp.territoryid = st.territoryid
## JOIN humanresources.employee e ON soh.salespersonid = e.businessentityid
## JOIN person.person p ON p.businessentityid = sp.businessentityid) granular
## GROUP BY granular.salespersonid, granular.fullname, granular.jobtitle, granular.salesterritory, granular.fiscalyear;
```
Even if you don’t intend to become completely fluent in SQL, it’s useful to study as much of it as possible. Studying the SQL in a view is particularly useful to:
* Test your understanding of the database structure, elements, and usage
* Extend what’s already been done to extract useful data from the database
### 11\.3\.3 The ERD as context for SQL code
A database Entity Relationship Diagram (ERD) is very helpful in making sense of the SQL in a `view`. The ERD for `AdventureWorks` is [here](https://i.stack.imgur.com/LMu4W.gif). If a published ERD is not available, a tool like the PostgreSQL *pg\_modeler* is capable of generating an ERD (or at least describing the portion of the database that is visible to you).
### 11\.3\.4 Selecting relevant tables and columns
Before bginning to write code, it can be helpful to actually mark up the ERD to identify the specific tables that are involved in the view you are going to reproduce.
Define each table that is involved and identify the columns that will be needed from that table. The `sales.vsalespersonsalesbyfiscalyearsdata` view joins data from five different tables:
1. sales\_order\_header
2. sales\_territory
3. sales\_person
4. employee
5. person
For each of the tables in the `view`, we select the columns that appear in the `sales.vsalespersonsalesbyfiscalyearsdata`. Selecting columns in this way prevents joins that `dbplyr` would make automatically based on common column names, such as `rowguid` and `ModifiedDate` columns, which appear in almost all `AdventureWorks` tables. In the following code we follow the convention that any column that we change or create on the fly uses a snake case naming convention.
```
sales_order_header <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, salespersonid, subtotal)
sales_territory <- tbl(con, in_schema("sales", "salesterritory")) %>%
select(territoryid, territory_name = name)
sales_person <- tbl(con, in_schema("sales", "salesperson")) %>%
select(businessentityid, territoryid)
employee <- tbl(con, in_schema("humanresources", "employee")) %>%
select(businessentityid, jobtitle)
```
In addition to selecting rows as shown in the previous statements, `mutate` and other functions help us replicate code in the `view` such as:
```
((p.firstname::text || ' '::text) ||
COALESCE(p.middlename::text || ' '::text,
''::text)) || p.lastname::text AS fullname
```
The following dplyr code pastes the first, middle and last names together to make `full_name`:
```
person <- tbl(con, in_schema("person", "person")) %>%
mutate(full_name = paste(firstname, middlename, lastname)) %>%
select(businessentityid, full_name)
```
Double\-check on the names that are defined in each `tbl` object. The following function will show the names of columns in the tables we’ve defined:
```
getnames <- function(table) {
{table} %>%
collect(n = 5) %>% # ---- pull data here ---- #
names()
}
```
Verify the names selected:
```
getnames(sales_order_header)
```
```
## [1] "orderdate" "salespersonid" "subtotal"
```
```
getnames(sales_territory)
```
```
## [1] "territoryid" "territory_name"
```
```
getnames(sales_person)
```
```
## [1] "businessentityid" "territoryid"
```
```
getnames(employee)
```
```
## [1] "businessentityid" "jobtitle"
```
```
getnames(person)
```
```
## [1] "businessentityid" "full_name"
```
### 11\.3\.5 Join the tables together
First, join and download all of the data pertaining to a person. Notice that since each of these 4 tables contain `businessentityid`, dplyr will join them all on that common column automatically. And since we know that all of these tables are small, we don’t mind a query that joins and downloads all the data.
```
salesperson_info <- sales_person %>%
left_join(employee) %>%
left_join(person) %>%
left_join(sales_territory) %>%
collect()
```
```
## Joining, by = "businessentityid"
## Joining, by = "businessentityid"
```
```
## Joining, by = "territoryid"
```
```
str(salesperson_info)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 17 obs. of 5 variables:
## $ businessentityid: int 274 275 276 277 278 279 280 281 282 283 ...
## $ territoryid : int NA 2 4 3 6 5 1 4 6 1 ...
## $ jobtitle : chr "North American Sales Manager" "Sales Representative" "Sales Representative" "Sales Representative" ...
## $ full_name : chr "Stephen Y Jiang" "Michael G Blythe" "Linda C Mitchell" "Jillian Carson" ...
## $ territory_name : chr NA "Northeast" "Southwest" "Central" ...
```
The one part of the view that we haven’t replicated is:
`date_part('year'::text, soh.orderdate`
`+ '6 mons'::interval) AS fiscalyear`
The `lubridate` package makes it very easy to convert `orderdate` to `fiscal_year`. Doing that same conversion without lubridate (e.g., only dplyr and **ANSI\-STANDARD** SQL) is harder. Therefore we just pull the data from the server after the `left_join` and do the rest of the job on the R side. Note that this query doesn’t correct the problematic entry dates that we explored in the chapter on [Asking Business Questions From a Single Table](chapter-exploring-a-single-table.html#chapter_exploring-a-single-table). That will collapse many rows into a much smaller table. We know from our previous investigation that Sales Rep into sales are recorded more or less once a month. Therefore most of the crunching in this query happens on the database server side.
```
sales_data_fiscal_year <- sales_person %>%
left_join(sales_order_header, by = c("businessentityid" = "salespersonid")) %>%
group_by(businessentityid, orderdate) %>%
summarize(sales_total = sum(subtotal, na.rm = TRUE)) %>%
mutate(
orderdate = as.Date(orderdate),
day = day(orderdate)
) %>%
collect() %>% # ---- pull data here ---- #
mutate(
fiscal_year = year(orderdate %m+% months(6))
) %>%
ungroup() %>%
group_by(businessentityid, fiscal_year) %>%
summarize(sales_total = sum(sales_total, na.rm = FALSE)) %>%
ungroup()
```
Put the two parts together: `sales_data_fiscal_year` and `person_info` to yield the final query.
```
salesperson_sales_by_fiscal_years_dplyr <- sales_data_fiscal_year %>%
left_join(salesperson_info) %>%
filter(!is.na(territoryid))
```
```
## Joining, by = "businessentityid"
```
Notice that we’re dropping the Sales Managers who appear in the `salesperson_info` data frame because they don’t have a `territoryid`.
### 11\.3\.1 Use a `view` just like any other table
From a retrieval perspective a database `view` is just like any other table. Using a view to retrieve data from the database will be completely standard across all flavors of SQL.
```
v_salesperson_sales_by_fiscal_years_data <-
tbl(con, in_schema("sales","vsalespersonsalesbyfiscalyearsdata")) %>%
collect()
str(v_salesperson_sales_by_fiscal_years_data)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 48 obs. of 6 variables:
## $ salespersonid : int 275 275 275 275 276 276 276 276 277 277 ...
## $ fullname : chr "Michael G Blythe" "Michael G Blythe" "Michael G Blythe" "Michael G Blythe" ...
## $ jobtitle : chr "Sales Representative" "Sales Representative" "Sales Representative" "Sales Representative" ...
## $ salesterritory: chr "Northeast" "Northeast" "Northeast" "Northeast" ...
## $ salestotal : num 63763 2399593 3765459 3065088 5476 ...
## $ fiscalyear : num 2011 2012 2013 2014 2011 ...
```
As we will see, our sample `view`, `vsalespersonsalesbyfiscalyearsdata` joins 5 different tables. We can assume that subsetting or calculation on any of the columns in the component tables will happen behind the scenes, on the database side, and done correctly. For example, the following query filters on a column that exists in only one of the `view`’s component tables.
```
tbl(con, in_schema("sales","vsalespersonsalesbyfiscalyearsdata")) %>%
count(salesterritory, fiscalyear) %>%
collect() %>% # ---- pull data here ---- #
pivot_wider(names_from = fiscalyear, values_from = n, names_prefix = "FY_")
```
```
## # A tibble: 10 x 5
## # Groups: salesterritory [10]
## salesterritory FY_2014 FY_2011 FY_2013 FY_2012
## <chr> <int> <int> <int> <int>
## 1 Southwest 2 2 2 2
## 2 Northeast 1 1 1 1
## 3 Southeast 1 1 1 1
## 4 France 1 NA 1 1
## 5 Canada 2 2 2 2
## 6 United Kingdom 1 NA 1 1
## 7 Northwest 3 2 3 2
## 8 Central 1 1 1 1
## 9 Australia 1 NA 1 NA
## 10 Germany 1 NA 1 NA
```
Although finding out what a view does behind the scenes requires that you use functions that are **not** standard, doing so has several general purposes:
* It is satisfying to know what’s going on behind the scenes.
* Specific elements or components of a `view` might be worth plagiarizing or incorporating in our queries.
* It is necessary to understand the mechanics of a `view` if we are going to build on what it does or intend to extend or modify it.
### 11\.3\.2 SQL source code
Functions for inspecting a view itself are not part of the ANSI standard, so they will be [database\-specific](https://www.postgresql.org/docs/9.5/functions-info.html). Here is the code to retrieve a PostgreSQL view (using the `pg_get_viewdef` function):
```
view_definition <- dbGetQuery(con, "select
pg_get_viewdef('sales.vsalespersonsalesbyfiscalyearsdata',
true)")
```
The PostgreSQL `pg_get_viewdef` function returns a data frame with one column named `pg_get_viewdef` and one row. To properly view its contents, the `\n` character strings need to be turned into new\-lines.
```
cat(unlist(view_definition$pg_get_viewdef))
```
```
## SELECT granular.salespersonid,
## granular.fullname,
## granular.jobtitle,
## granular.salesterritory,
## sum(granular.subtotal) AS salestotal,
## granular.fiscalyear
## FROM ( SELECT soh.salespersonid,
## ((p.firstname::text || ' '::text) || COALESCE(p.middlename::text || ' '::text, ''::text)) || p.lastname::text AS fullname,
## e.jobtitle,
## st.name AS salesterritory,
## soh.subtotal,
## date_part('year'::text, soh.orderdate + '6 mons'::interval) AS fiscalyear
## FROM salesperson sp
## JOIN salesorderheader soh ON sp.businessentityid = soh.salespersonid
## JOIN salesterritory st ON sp.territoryid = st.territoryid
## JOIN humanresources.employee e ON soh.salespersonid = e.businessentityid
## JOIN person.person p ON p.businessentityid = sp.businessentityid) granular
## GROUP BY granular.salespersonid, granular.fullname, granular.jobtitle, granular.salesterritory, granular.fiscalyear;
```
Even if you don’t intend to become completely fluent in SQL, it’s useful to study as much of it as possible. Studying the SQL in a view is particularly useful to:
* Test your understanding of the database structure, elements, and usage
* Extend what’s already been done to extract useful data from the database
### 11\.3\.3 The ERD as context for SQL code
A database Entity Relationship Diagram (ERD) is very helpful in making sense of the SQL in a `view`. The ERD for `AdventureWorks` is [here](https://i.stack.imgur.com/LMu4W.gif). If a published ERD is not available, a tool like the PostgreSQL *pg\_modeler* is capable of generating an ERD (or at least describing the portion of the database that is visible to you).
### 11\.3\.4 Selecting relevant tables and columns
Before bginning to write code, it can be helpful to actually mark up the ERD to identify the specific tables that are involved in the view you are going to reproduce.
Define each table that is involved and identify the columns that will be needed from that table. The `sales.vsalespersonsalesbyfiscalyearsdata` view joins data from five different tables:
1. sales\_order\_header
2. sales\_territory
3. sales\_person
4. employee
5. person
For each of the tables in the `view`, we select the columns that appear in the `sales.vsalespersonsalesbyfiscalyearsdata`. Selecting columns in this way prevents joins that `dbplyr` would make automatically based on common column names, such as `rowguid` and `ModifiedDate` columns, which appear in almost all `AdventureWorks` tables. In the following code we follow the convention that any column that we change or create on the fly uses a snake case naming convention.
```
sales_order_header <- tbl(con, in_schema("sales", "salesorderheader")) %>%
select(orderdate, salespersonid, subtotal)
sales_territory <- tbl(con, in_schema("sales", "salesterritory")) %>%
select(territoryid, territory_name = name)
sales_person <- tbl(con, in_schema("sales", "salesperson")) %>%
select(businessentityid, territoryid)
employee <- tbl(con, in_schema("humanresources", "employee")) %>%
select(businessentityid, jobtitle)
```
In addition to selecting rows as shown in the previous statements, `mutate` and other functions help us replicate code in the `view` such as:
```
((p.firstname::text || ' '::text) ||
COALESCE(p.middlename::text || ' '::text,
''::text)) || p.lastname::text AS fullname
```
The following dplyr code pastes the first, middle and last names together to make `full_name`:
```
person <- tbl(con, in_schema("person", "person")) %>%
mutate(full_name = paste(firstname, middlename, lastname)) %>%
select(businessentityid, full_name)
```
Double\-check on the names that are defined in each `tbl` object. The following function will show the names of columns in the tables we’ve defined:
```
getnames <- function(table) {
{table} %>%
collect(n = 5) %>% # ---- pull data here ---- #
names()
}
```
Verify the names selected:
```
getnames(sales_order_header)
```
```
## [1] "orderdate" "salespersonid" "subtotal"
```
```
getnames(sales_territory)
```
```
## [1] "territoryid" "territory_name"
```
```
getnames(sales_person)
```
```
## [1] "businessentityid" "territoryid"
```
```
getnames(employee)
```
```
## [1] "businessentityid" "jobtitle"
```
```
getnames(person)
```
```
## [1] "businessentityid" "full_name"
```
### 11\.3\.5 Join the tables together
First, join and download all of the data pertaining to a person. Notice that since each of these 4 tables contain `businessentityid`, dplyr will join them all on that common column automatically. And since we know that all of these tables are small, we don’t mind a query that joins and downloads all the data.
```
salesperson_info <- sales_person %>%
left_join(employee) %>%
left_join(person) %>%
left_join(sales_territory) %>%
collect()
```
```
## Joining, by = "businessentityid"
## Joining, by = "businessentityid"
```
```
## Joining, by = "territoryid"
```
```
str(salesperson_info)
```
```
## Classes 'tbl_df', 'tbl' and 'data.frame': 17 obs. of 5 variables:
## $ businessentityid: int 274 275 276 277 278 279 280 281 282 283 ...
## $ territoryid : int NA 2 4 3 6 5 1 4 6 1 ...
## $ jobtitle : chr "North American Sales Manager" "Sales Representative" "Sales Representative" "Sales Representative" ...
## $ full_name : chr "Stephen Y Jiang" "Michael G Blythe" "Linda C Mitchell" "Jillian Carson" ...
## $ territory_name : chr NA "Northeast" "Southwest" "Central" ...
```
The one part of the view that we haven’t replicated is:
`date_part('year'::text, soh.orderdate`
`+ '6 mons'::interval) AS fiscalyear`
The `lubridate` package makes it very easy to convert `orderdate` to `fiscal_year`. Doing that same conversion without lubridate (e.g., only dplyr and **ANSI\-STANDARD** SQL) is harder. Therefore we just pull the data from the server after the `left_join` and do the rest of the job on the R side. Note that this query doesn’t correct the problematic entry dates that we explored in the chapter on [Asking Business Questions From a Single Table](chapter-exploring-a-single-table.html#chapter_exploring-a-single-table). That will collapse many rows into a much smaller table. We know from our previous investigation that Sales Rep into sales are recorded more or less once a month. Therefore most of the crunching in this query happens on the database server side.
```
sales_data_fiscal_year <- sales_person %>%
left_join(sales_order_header, by = c("businessentityid" = "salespersonid")) %>%
group_by(businessentityid, orderdate) %>%
summarize(sales_total = sum(subtotal, na.rm = TRUE)) %>%
mutate(
orderdate = as.Date(orderdate),
day = day(orderdate)
) %>%
collect() %>% # ---- pull data here ---- #
mutate(
fiscal_year = year(orderdate %m+% months(6))
) %>%
ungroup() %>%
group_by(businessentityid, fiscal_year) %>%
summarize(sales_total = sum(sales_total, na.rm = FALSE)) %>%
ungroup()
```
Put the two parts together, `sales_data_fiscal_year` and `salesperson_info`, to yield the final result.
```
salesperson_sales_by_fiscal_years_dplyr <- sales_data_fiscal_year %>%
left_join(salesperson_info) %>%
filter(!is.na(territoryid))
```
```
## Joining, by = "businessentityid"
```
Notice that we’re dropping the Sales Managers who appear in the `salesperson_info` data frame because they don’t have a `territoryid`.
11\.4 Compare the official view and the dplyr output
----------------------------------------------------
Use `pivot_wider` to make it easier to compare the native `view` to our dplyr replicate.
```
salesperson_sales_by_fiscal_years_dplyr %>%
select(-jobtitle, -businessentityid, -territoryid) %>%
pivot_wider(names_from = fiscal_year, values_from = sales_total,
values_fill = list(sales_total = 0)) %>%
arrange(territory_name, full_name) %>%
filter(territory_name == "Canada")
```
```
## # A tibble: 2 x 6
## full_name territory_name `2011` `2012` `2013` `2014`
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Garrett R Vargas Canada 9109. 1254087. 1179531. 1166720.
## 2 José Edvaldo Saraiva Canada 106252. 2171995. 1388793. 2259378.
```
```
v_salesperson_sales_by_fiscal_years_data %>%
select(-jobtitle, -salespersonid) %>%
pivot_wider(names_from = fiscalyear, values_from = salestotal,
values_fill = list(salestotal = 0)) %>%
arrange(salesterritory, fullname) %>%
filter(salesterritory == "Canada")
```
```
## # A tibble: 2 x 6
## fullname salesterritory `2011` `2012` `2013` `2014`
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 Garrett R Vargas Canada 9109. 1254087. 1179531. 1166720.
## 2 José Edvaldo Saraiva Canada 106252. 2171995. 1388793. 2259378.
```
The yearly totals match exactly. The column names differ because we use the snake case convention for derived columns.
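To check the match programmatically rather than by eye, you could harmonize the column names and join the two results; this is just a sketch, assuming both data frames are still in memory:
```
# Line up the dplyr result with the view's output and compare the totals.
comparison <- salesperson_sales_by_fiscal_years_dplyr %>%
  select(fullname = full_name, salesterritory = territory_name,
         fiscalyear = fiscal_year, dplyr_total = sales_total) %>%
  inner_join(
    v_salesperson_sales_by_fiscal_years_data %>%
      select(fullname, salesterritory, fiscalyear, view_total = salestotal),
    by = c("fullname", "salesterritory", "fiscalyear")
  ) %>%
  mutate(difference = dplyr_total - view_total)

summary(comparison$difference) # should be zero (or nearly so) everywhere
```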
11\.5 Revise the view to summarize by quarter not fiscal year
-------------------------------------------------------------
Summarizing sales data by Sales Rep and quarter requires the `%m+%` infix operator from lubridate. The interleaved comments in the query below explain what each step does. The totals in this revised query differ from the fiscal year totals shown above only by a rounding error.
```
tbl(con, in_schema("sales", "salesorderheader")) %>%
group_by(salespersonid, orderdate) %>%
  summarize(subtotal = sum(subtotal, na.rm = TRUE)) %>%
collect() %>% # ---- pull data here ---- #
# Adding 6 months to orderdate requires a lubridate function
mutate(orderdate = as.Date(orderdate) %m+% months(6),
year = year(orderdate),
quarter = quarter(orderdate)) %>%
ungroup() %>%
group_by(salespersonid, year, quarter) %>%
summarize(subtotal = round(sum(subtotal, na.rm = TRUE), digits = 0)) %>%
ungroup() %>%
# Join with the person information previously gathered
left_join(salesperson_info, by = c("salespersonid" = "businessentityid")) %>%
filter(territory_name == "Canada") %>%
# Pivot to make it easier to see what's going on
pivot_wider(names_from = quarter, values_from = subtotal,
values_fill = list(Q1 = 0, Q2 = 0, Q3 = 0, Q4 = 0), names_prefix = "Q", id_cols = full_name:year) %>%
select(`Name` = full_name, year, Q1, Q2, Q3, Q4) %>%
mutate(`Year Total` = Q1 + Q2 + Q3 + Q4) %>%
head(., n = 10) %>%
gt() %>%
fmt_number(use_seps = TRUE, decimals = 0, columns = vars(Q1,Q2, Q3, Q4, `Year Total`))
```
| Name | year | Q1 | Q2 | Q3 | Q4 | Year Total |
| --- | --- | --- | --- | --- | --- | --- |
| Garrett R Vargas | 2011 | NA | NA | NA | 9,109 | NA |
| Garrett R Vargas | 2012 | 233,696 | 257,287 | 410,518 | 352,587 | 1,254,088 |
| Garrett R Vargas | 2013 | 316,818 | 203,647 | 291,333 | 367,732 | 1,179,530 |
| Garrett R Vargas | 2014 | 393,788 | 336,984 | 290,536 | 145,413 | 1,166,721 |
| José Edvaldo Saraiva | 2011 | NA | NA | NA | 106,252 | NA |
| José Edvaldo Saraiva | 2012 | 521,794 | 546,962 | 795,861 | 307,379 | 2,171,996 |
| José Edvaldo Saraiva | 2013 | 408,415 | 324,062 | 231,991 | 424,326 | 1,388,794 |
| José Edvaldo Saraiva | 2014 | 748,430 | 466,137 | 618,832 | 425,979 | 2,259,378 |
11\.6 Clean up and close down
-----------------------------
```
connection_close(con) # Use in an interactive setting
# dbDisconnect(con) # Use in non-interactive setting
```
Chapter 12 Getting metadata about and from PostgreSQL
=====================================================
> This chapter demonstrates:
>
>
> * What kind of data about the database is contained in a dbms
> * Several methods for obtaining metadata from the dbms
The following packages are used in this chapter:
```
library(tidyverse)
library(DBI)
library(RPostgres)
library(glue)
library(here)
require(knitr)
library(dbplyr)
library(sqlpetr)
```
Assume that the Docker container with PostgreSQL and the adventureworks database is ready to go.
```
sp_docker_start("adventureworks")
```
Connect to the database:
```
con <- sqlpetr::sp_get_postgres_connection(
user = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"),
dbname = "adventureworks",
port = 5432,
seconds_to_test = 20,
connection_tab = TRUE
)
```
12\.1 Views trick parked here for the time being
------------------------------------------------
### 12\.1\.1 Explore the vsalesperson and vsalespersonsalesbyfiscalyearsdata views
The following trick is used later in the book, where it helps show that to make sense of the data you need to look at how the views are put together.
```
cat(unlist(dbGetQuery(con, "select pg_get_viewdef('sales.vsalesperson', true)")))
```
```
## SELECT s.businessentityid,
## p.title,
## p.firstname,
## p.middlename,
## p.lastname,
## p.suffix,
## e.jobtitle,
## pp.phonenumber,
## pnt.name AS phonenumbertype,
## ea.emailaddress,
## p.emailpromotion,
## a.addressline1,
## a.addressline2,
## a.city,
## sp.name AS stateprovincename,
## a.postalcode,
## cr.name AS countryregionname,
## st.name AS territoryname,
## st."group" AS territorygroup,
## s.salesquota,
## s.salesytd,
## s.saleslastyear
## FROM sales.salesperson s
## JOIN humanresources.employee e ON e.businessentityid = s.businessentityid
## JOIN person.person p ON p.businessentityid = s.businessentityid
## JOIN person.businessentityaddress bea ON bea.businessentityid = s.businessentityid
## JOIN person.address a ON a.addressid = bea.addressid
## JOIN person.stateprovince sp ON sp.stateprovinceid = a.stateprovinceid
## JOIN person.countryregion cr ON cr.countryregioncode::text = sp.countryregioncode::text
## LEFT JOIN sales.salesterritory st ON st.territoryid = s.territoryid
## LEFT JOIN person.emailaddress ea ON ea.businessentityid = p.businessentityid
## LEFT JOIN person.personphone pp ON pp.businessentityid = p.businessentityid
## LEFT JOIN person.phonenumbertype pnt ON pnt.phonenumbertypeid = pp.phonenumbertypeid;
```
```
## pg_get_viewdef
## 1 SELECT granular.salespersonid,\n granular.fullname,\n granular.jobtitle,\n granular.salesterritory,\n sum(granular.subtotal) AS salestotal,\n granular.fiscalyear\n FROM ( SELECT soh.salespersonid,\n ((p.firstname::text || ' '::text) || COALESCE(p.middlename::text || ' '::text, ''::text)) || p.lastname::text AS fullname,\n e.jobtitle,\n st.name AS salesterritory,\n soh.subtotal,\n date_part('year'::text, soh.orderdate + '6 mons'::interval) AS fiscalyear\n FROM sales.salesperson sp\n JOIN sales.salesorderheader soh ON sp.businessentityid = soh.salespersonid\n JOIN sales.salesterritory st ON sp.territoryid = st.territoryid\n JOIN humanresources.employee e ON soh.salespersonid = e.businessentityid\n JOIN person.person p ON p.businessentityid = sp.businessentityid) granular\n GROUP BY granular.salespersonid, granular.fullname, granular.jobtitle, granular.salesterritory, granular.fiscalyear;
```
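The second block of output above is the definition of the fiscal\-year sales view that the previous chapter replicates. It was presumably produced by an analogous call; a sketch (the view name is taken from the section title):
```
dbGetQuery(con, "select pg_get_viewdef('sales.vsalespersonsalesbyfiscalyearsdata', true)")
```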
12\.2 Database contents and structure
-------------------------------------
After looking only at the data you seek, it can be worthwhile to step back and look at the big picture.
### 12\.2\.1 Database structure
For large or complex databases you need to use both the available documentation for your database (e.g., [the dvdrental](http://www.postgresqltutorial.com/postgresql-sample-database/) database) and the other empirical tools that are available. For example it’s worth learning to interpret the symbols in an [Entity Relationship Diagram](https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model):
The `information_schema` is a trove of information *about* the database. Its format is more or less consistent across the different SQL implementations that are available. Here we explore some of what’s available using several different methods. PostgreSQL stores [a lot of metadata](https://www.postgresql.org/docs/current/static/infoschema-columns.html).
### 12\.2\.2 Contents of the `information_schema`
For this chapter R needs the `dbplyr` package to access alternate schemas. A [schema](http://www.postgresqltutorial.com/postgresql-server-and-database-objects/) is an object that contains one or more tables. Most often there will be a default schema, but to access the metadata, you need to explicitly specify which schema contains the data you want.
### 12\.2\.3 What tables are in the database?
The simplest way to get a list of tables is with … *NO LONGER WORKS*:
```
schema_list <- tbl(con, in_schema("information_schema", "schemata")) %>%
select(catalog_name, schema_name, schema_owner) %>%
collect()
sp_print_df(head(schema_list))
```
### Digging into the `information_schema`
We usually need more detail than just a list of tables. Most SQL databases have an `information_schema` that has a standard structure to describe and control the database.
The `information_schema` is in a different schema from the default, so to connect to the `tables` table in the `information_schema` we connect to the database in a different way:
```
table_info_schema_table <- tbl(con, dbplyr::in_schema("information_schema", "tables"))
```
The `information_schema` is large and complex and contains 343 tables. So it’s easy to get lost in it.
This query retrieves a list of the tables in the database that includes additional detail, not just the name of the table.
```
table_info <- table_info_schema_table %>%
# filter(table_schema == "public") %>%
select(table_catalog, table_schema, table_name, table_type) %>%
arrange(table_type, table_name) %>%
collect()
sp_print_df(head(table_info))
```
In this context `table_catalog` is synonymous with `database`.
Notice that *VIEWS* are composites made up of one or more *BASE TABLES*.
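For a quick sense of that composition, you can tally table types by schema with the `table_info` data frame collected above; a sketch:
```
# How many base tables and views does each schema contain?
table_info %>%
  count(table_schema, table_type) %>%
  pivot_wider(names_from = table_type, values_from = n,
              values_fill = list(n = 0))
```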
The SQL world has its own terminology. For example `rs` is shorthand for `result set`. That’s equivalent to using `df` for a `data frame`. The following SQL query returns the same information as the previous dplyr code.
```
rs <- dbGetQuery(
con,
"select table_catalog, table_schema, table_name, table_type
from information_schema.tables
where table_schema not in ('pg_catalog','information_schema')
order by table_type, table_name
;"
)
sp_print_df(head(rs))
```
12\.3 What columns do those tables contain?
-------------------------------------------
Of course, the `DBI` package has a `dbListFields` function that provides the simplest way to get the minimum, a list of column names:
```
# DBI::dbListFields(con, "rental")
```
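The commented\-out call above refers to the `rental` table from the dvdrental database. For adventureworks the table name needs a schema qualifier; a sketch, assuming the driver accepts a `DBI::Id`:
```
# List the column names of sales.salesorderheader; Id() supplies the schema.
DBI::dbListFields(con, DBI::Id(schema = "sales", table = "salesorderheader"))
```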
But the `information_schema` has a lot more useful information that we can use.
```
columns_info_schema_table <- tbl(con, dbplyr::in_schema("information_schema", "columns"))
```
Since the `information_schema` describes 2,961 columns, we would normally narrow our focus to just one table. This query retrieves more detail about each column; the filter for a single table is commented out here, so the result still covers the whole database:
```
columns_info_schema_info <- columns_info_schema_table %>%
# filter(table_schema == "public") %>%
select(
table_catalog, table_schema, table_name, column_name, data_type, ordinal_position,
character_maximum_length, column_default, numeric_precision, numeric_precision_radix
) %>%
collect(n = Inf) %>%
mutate(data_type = case_when(
data_type == "character varying" ~ paste0(data_type, " (", character_maximum_length, ")"),
data_type == "real" ~ paste0(data_type, " (", numeric_precision, ",", numeric_precision_radix, ")"),
TRUE ~ data_type
)) %>%
# filter(table_name == "rental") %>%
select(-table_schema, -numeric_precision, -numeric_precision_radix)
glimpse(columns_info_schema_info)
```
```
## Observations: 2,961
## Variables: 7
## $ table_catalog <chr> "adventureworks", "adventureworks", "adventu…
## $ table_name <chr> "pg_proc", "pg_proc", "pg_proc", "pg_proc", …
## $ column_name <chr> "proname", "pronamespace", "proowner", "prol…
## $ data_type <chr> "name", "oid", "oid", "oid", "real (24,2)", …
## $ ordinal_position <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 1…
## $ character_maximum_length <int> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ column_default <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
```
```
sp_print_df(head(columns_info_schema_info))
```
### 12\.3\.1 What is the difference between a `VIEW` and a `BASE TABLE`?
A `BASE TABLE` contains the underlying data stored in the database:
```
table_info_schema_table %>%
filter( table_type == "BASE TABLE") %>%
# filter(table_schema == "public" & table_type == "BASE TABLE") %>%
select(table_name, table_type) %>%
left_join(columns_info_schema_table, by = c("table_name" = "table_name")) %>%
select(
table_type, table_name, column_name, data_type, ordinal_position,
column_default
) %>%
collect(n = Inf) %>%
filter(str_detect(table_name, "cust")) %>%
head() %>%
sp_print_df()
```
We should probably explore how a `VIEW` is made up of data from BASE TABLEs:
```
table_info_schema_table %>%
filter( table_type == "VIEW") %>%
# filter(table_schema == "public" & table_type == "VIEW") %>%
select(table_name, table_type) %>%
left_join(columns_info_schema_table, by = c("table_name" = "table_name")) %>%
select(
table_type, table_name, column_name, data_type, ordinal_position,
column_default
) %>%
collect(n = Inf) %>%
filter(str_detect(table_name, "cust")) %>%
head() %>%
sp_print_df()
```
### 12\.3\.2 What data types are found in the database?
```
columns_info_schema_info %>%
count(data_type) %>%
head() %>%
sp_print_df()
```
12\.4 Characterizing how things are named
-----------------------------------------
Names are the handle for accessing the data. Tables and columns may or may not be named consistently or in a way that makes sense to you. You should look at these names *as data*.
### 12\.4\.1 Counting columns and name reuse
Pull out some rough\-and\-ready but useful statistics about your database. Since we are in SQL\-land we talk about variables as `columns`.
*this is wrong!*
```
public_tables <- columns_info_schema_table %>%
# filter(str_detect(table_name, "pg_") == FALSE) %>%
# filter(table_schema == "public") %>%
collect()
public_tables %>%
count(table_name, sort = TRUE) %>%
head(n = 15) %>%
sp_print_df()
```
How many *column names* are shared across tables (or duplicated)?
```
public_tables %>% count(column_name, sort = TRUE) %>%
filter(n > 1) %>%
head()
```
```
## # A tibble: 6 x 2
## column_name n
## <chr> <int>
## 1 modifieddate 140
## 2 rowguid 61
## 3 id 60
## 4 name 59
## 5 businessentityid 49
## 6 productid 32
```
How many column names are unique?
```
public_tables %>%
count(column_name) %>%
filter(n == 1) %>%
count() %>%
head()
```
```
## # A tibble: 1 x 1
## n
## <int>
## 1 882
```
12\.5 Database keys
-------------------
### 12\.5\.1 Direct SQL
How do we use this output? Could it be generated by dplyr?
```
rs <- dbGetQuery(
con,
"
--SELECT conrelid::regclass as table_from
select table_catalog||'.'||table_schema||'.'||table_name table_name
, conname, pg_catalog.pg_get_constraintdef(r.oid, true) as condef
FROM information_schema.columns c,pg_catalog.pg_constraint r
WHERE 1 = 1 --r.conrelid = '16485'
AND r.contype in ('f','p') ORDER BY 1
;"
)
glimpse(rs)
```
```
## Observations: 467,838
## Variables: 3
## $ table_name <chr> "adventureworks.hr.d", "adventureworks.hr.d", "adventurewo…
## $ conname <chr> "FK_SalesOrderDetail_SpecialOfferProduct_SpecialOfferIDPro…
## $ condef <chr> "FOREIGN KEY (specialofferid, productid) REFERENCES sales.…
```
```
sp_print_df(head(rs))
```
The following is more compact and looks more useful. What is the difference between the two?
```
rs <- dbGetQuery(
con,
"select conrelid::regclass as table_from
,c.conname
,pg_get_constraintdef(c.oid)
from pg_constraint c
join pg_namespace n on n.oid = c.connamespace
where c.contype in ('f','p')
and n.nspname = 'public'
order by conrelid::regclass::text, contype DESC;
"
)
glimpse(rs)
```
```
## Observations: 0
## Variables: 3
## $ table_from <chr>
## $ conname <chr>
## $ pg_get_constraintdef <chr>
```
```
sp_print_df(head(rs))
```
```
dim(rs)[1]
```
```
## [1] 0
```
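One difference is that the first query cross\-joins `information_schema.columns` with `pg_catalog.pg_constraint` (hence the 467,838 rows), while the second joins `pg_constraint` to `pg_namespace` properly. The second query comes back empty because it filters on `n.nspname = 'public'`, and the adventureworks tables live in schemas such as `sales` and `humanresources`. A sketch that relaxes that filter (the schema names are assumed from the rest of this book):
```
rs <- dbGetQuery(
  con,
  "select n.nspname as schema_name
         ,c.conrelid::regclass as table_from
         ,c.conname
         ,pg_get_constraintdef(c.oid) as condef
   from pg_constraint c
   join pg_namespace n on n.oid = c.connamespace
   where c.contype in ('f', 'p')
     and n.nspname in ('sales', 'humanresources', 'person',
                       'production', 'purchasing')
   order by n.nspname, c.conrelid::regclass::text, c.contype desc;"
)
glimpse(rs)
```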
### 12\.5\.2 Database keys with dplyr
This query shows the primary and foreign keys in the database.
```
tables <- tbl(con, dbplyr::in_schema("information_schema", "tables"))
table_constraints <- tbl(con, dbplyr::in_schema("information_schema", "table_constraints"))
key_column_usage <- tbl(con, dbplyr::in_schema("information_schema", "key_column_usage"))
referential_constraints <- tbl(con, dbplyr::in_schema("information_schema", "referential_constraints"))
constraint_column_usage <- tbl(con, dbplyr::in_schema("information_schema", "constraint_column_usage"))
keys <- tables %>%
left_join(table_constraints, by = c(
"table_catalog" = "table_catalog",
"table_schema" = "table_schema",
"table_name" = "table_name"
)) %>%
# table_constraints %>%
filter(constraint_type %in% c("FOREIGN KEY", "PRIMARY KEY")) %>%
left_join(key_column_usage,
by = c(
"table_catalog" = "table_catalog",
"constraint_catalog" = "constraint_catalog",
"constraint_schema" = "constraint_schema",
"table_name" = "table_name",
"table_schema" = "table_schema",
"constraint_name" = "constraint_name"
)
) %>%
# left_join(constraint_column_usage) %>% # does this table add anything useful?
select(table_name, table_type, constraint_name, constraint_type, column_name, ordinal_position) %>%
arrange(table_name) %>%
collect()
glimpse(keys)
```
```
## Observations: 190
## Variables: 6
## $ table_name <chr> "address", "address", "addresstype", "billofmaterial…
## $ table_type <chr> "BASE TABLE", "BASE TABLE", "BASE TABLE", "BASE TABL…
## $ constraint_name <chr> "FK_Address_StateProvince_StateProvinceID", "PK_Addr…
## $ constraint_type <chr> "FOREIGN KEY", "PRIMARY KEY", "PRIMARY KEY", "FOREIG…
## $ column_name <chr> "stateprovinceid", "addressid", "addresstypeid", "co…
## $ ordinal_position <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 1, 1, 1, 2…
```
```
sp_print_df(head(keys))
```
What do we learn from the following query? How is it useful?
```
rs <- dbGetQuery(
con,
"SELECT r.*,
pg_catalog.pg_get_constraintdef(r.oid, true) as condef
FROM pg_catalog.pg_constraint r
WHERE 1=1 --r.conrelid = '16485' AND r.contype = 'f' ORDER BY 1;
"
)
head(rs)
```
```
## conname connamespace contype condeferrable condeferred
## 1 cardinal_number_domain_check 12771 c FALSE FALSE
## 2 yes_or_no_check 12771 c FALSE FALSE
## 3 CK_Employee_BirthDate 16386 c FALSE FALSE
## 4 CK_Employee_Gender 16386 c FALSE FALSE
## 5 CK_Employee_HireDate 16386 c FALSE FALSE
## 6 CK_Employee_MaritalStatus 16386 c FALSE FALSE
## convalidated conrelid contypid conindid conparentid confrelid confupdtype
## 1 TRUE 0 12785 0 0 0
## 2 TRUE 0 12797 0 0 0
## 3 TRUE 16450 0 0 0 0
## 4 TRUE 16450 0 0 0 0
## 5 TRUE 16450 0 0 0 0
## 6 TRUE 16450 0 0 0 0
## confdeltype confmatchtype conislocal coninhcount connoinherit conkey confkey
## 1 TRUE 0 FALSE <NA> <NA>
## 2 TRUE 0 FALSE <NA> <NA>
## 3 TRUE 0 FALSE {5} <NA>
## 4 TRUE 0 FALSE {7} <NA>
## 5 TRUE 0 FALSE {8} <NA>
## 6 TRUE 0 FALSE {6} <NA>
## conpfeqop conppeqop conffeqop conexclop
## 1 <NA> <NA> <NA> <NA>
## 2 <NA> <NA> <NA> <NA>
## 3 <NA> <NA> <NA> <NA>
## 4 <NA> <NA> <NA> <NA>
## 5 <NA> <NA> <NA> <NA>
## 6 <NA> <NA> <NA> <NA>
## conbin
## 1 {OPEXPR :opno 525 :opfuncid 150 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({COERCETODOMAINVALUE :typeId 23 :typeMod -1 :collation 0 :location 195} {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 204 :constvalue 4 [ 0 0 0 0 0 0 0 0 ]}) :location 201}
## 2 {SCALARARRAYOPEXPR :opno 98 :opfuncid 67 :useOr true :inputcollid 100 :args ({RELABELTYPE :arg {COERCETODOMAINVALUE :typeId 1043 :typeMod 7 :collation 100 :location 121} :resulttype 25 :resulttypmod -1 :resultcollid 100 :relabelformat 2 :location -1} {ARRAYCOERCEEXPR :arg {ARRAY :array_typeid 1015 :array_collid 100 :element_typeid 1043 :elements ({CONST :consttype 1043 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 131 :constvalue 7 [ 28 0 0 0 89 69 83 ]} {CONST :consttype 1043 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 138 :constvalue 6 [ 24 0 0 0 78 79 ]}) :multidims false :location -1} :elemexpr {RELABELTYPE :arg {CASETESTEXPR :typeId 1043 :typeMod -1 :collation 0} :resulttype 25 :resulttypmod -1 :resultcollid 100 :relabelformat 2 :location -1} :resulttype 1009 :resulttypmod -1 :resultcollid 100 :coerceformat 2 :location -1}) :location 127}
## 3 {BOOLEXPR :boolop and :args ({OPEXPR :opno 1098 :opfuncid 1090 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 5 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 5 :location 804} {CONST :consttype 1082 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 817 :constvalue 4 [ 33 -100 -1 -1 -1 -1 -1 -1 ]}) :location 814} {OPEXPR :opno 2359 :opfuncid 2352 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 5 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 5 :location 842} {OPEXPR :opno 1329 :opfuncid 1190 :opresulttype 1184 :opretset false :opcollid 0 :inputcollid 0 :args ({FUNCEXPR :funcid 1299 :funcresulttype 1184 :funcretset false :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 0 :args <> :location 856} {CONST :consttype 1186 :consttypmod -1 :constcollid 0 :constlen 16 :constbyval false :constisnull false :location 864 :constvalue 16 [ 0 0 0 0 0 0 0 0 0 0 0 0 -40 0 0 0 ]}) :location 862}) :location 852}) :location 837}
## 4 {SCALARARRAYOPEXPR :opno 98 :opfuncid 67 :useOr true :inputcollid 100 :args ({FUNCEXPR :funcid 871 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 0 :funccollid 100 :inputcollid 100 :args ({FUNCEXPR :funcid 401 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 1 :funccollid 100 :inputcollid 100 :args ({VAR :varno 1 :varattno 7 :vartype 1042 :vartypmod 5 :varcollid 100 :varlevelsup 0 :varnoold 1 :varoattno 7 :location 941}) :location 948}) :location 934} {ARRAY :array_typeid 1009 :array_collid 100 :element_typeid 25 :elements ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 969 :constvalue 5 [ 20 0 0 0 77 ]} {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 980 :constvalue 5 [ 20 0 0 0 70 ]}) :multidims false :location 963}) :location 956}
## 5 {BOOLEXPR :boolop and :args ({OPEXPR :opno 1098 :opfuncid 1090 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 8 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 8 :location 1042} {CONST :consttype 1082 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 1054 :constvalue 4 [ 1 -5 -1 -1 -1 -1 -1 -1 ]}) :location 1051} {OPEXPR :opno 2359 :opfuncid 2352 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 8 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 8 :location 1079} {OPEXPR :opno 1327 :opfuncid 1189 :opresulttype 1184 :opretset false :opcollid 0 :inputcollid 0 :args ({FUNCEXPR :funcid 1299 :funcresulttype 1184 :funcretset false :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 0 :args <> :location 1092} {CONST :consttype 1186 :consttypmod -1 :constcollid 0 :constlen 16 :constbyval false :constisnull false :location 1100 :constvalue 16 [ 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 ]}) :location 1098}) :location 1088}) :location 1074}
## 6 {SCALARARRAYOPEXPR :opno 98 :opfuncid 67 :useOr true :inputcollid 100 :args ({FUNCEXPR :funcid 871 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 0 :funccollid 100 :inputcollid 100 :args ({FUNCEXPR :funcid 401 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 1 :funccollid 100 :inputcollid 100 :args ({VAR :varno 1 :varattno 6 :vartype 1042 :vartypmod 5 :varcollid 100 :varlevelsup 0 :varnoold 1 :varoattno 6 :location 1181}) :location 1195}) :location 1174} {ARRAY :array_typeid 1009 :array_collid 100 :element_typeid 25 :elements ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 1216 :constvalue 5 [ 20 0 0 0 77 ]} {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 1227 :constvalue 5 [ 20 0 0 0 83 ]}) :multidims false :location 1210}) :location 1203}
## consrc
## 1 (VALUE >= 0)
## 2 ((VALUE)::text = ANY ((ARRAY['YES'::character varying, 'NO'::character varying])::text[]))
## 3 ((birthdate >= '1930-01-01'::date) AND (birthdate <= (now() - '18 years'::interval)))
## 4 (upper((gender)::text) = ANY (ARRAY['M'::text, 'F'::text]))
## 5 ((hiredate >= '1996-07-01'::date) AND (hiredate <= (now() + '1 day'::interval)))
## 6 (upper((maritalstatus)::text) = ANY (ARRAY['M'::text, 'S'::text]))
## condef
## 1 CHECK (VALUE >= 0)
## 2 CHECK (VALUE::text = ANY (ARRAY['YES'::character varying, 'NO'::character varying]::text[]))
## 3 CHECK (birthdate >= '1930-01-01'::date AND birthdate <= (now() - '18 years'::interval))
## 4 CHECK (upper(gender::text) = ANY (ARRAY['M'::text, 'F'::text]))
## 5 CHECK (hiredate >= '1996-07-01'::date AND hiredate <= (now() + '1 day'::interval))
## 6 CHECK (upper(maritalstatus::text) = ANY (ARRAY['M'::text, 'S'::text]))
```
12\.6 Creating your own data dictionary
---------------------------------------
If you are going to work with a database for an extended period it can be useful to create your own data dictionary. This can take the form of [keeping detailed notes](https://caitlinhudon.com/2018/10/30/data-dictionaries/) as well as extracting metadata from the dbms. Here is an illustration of the idea.
*This probably doesn’t work anymore*
```
# some_tables <- c("rental", "city", "store")
#
# all_meta <- map_df(some_tables, sp_get_dbms_data_dictionary, con = con)
#
# all_meta
#
# glimpse(all_meta)
#
# sp_print_df(head(all_meta))
```
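As an alternative sketch that relies only on the `information_schema` (it reuses the `columns_info_schema_table` object defined earlier; the schema list and file name are just examples), you could assemble a minimal dictionary and write it out for annotation:
```
# A minimal home-grown data dictionary built from information_schema.columns.
my_data_dictionary <- columns_info_schema_table %>%
  filter(table_schema %in% c("sales", "humanresources", "person")) %>%
  select(table_schema, table_name, column_name, ordinal_position,
         data_type, is_nullable) %>%
  arrange(table_schema, table_name, ordinal_position) %>%
  collect() %>%
  mutate(notes = "") # a column for your own annotations

# readr::write_csv(my_data_dictionary, "adventureworks_data_dictionary.csv")
```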
12\.7 Save your work!
---------------------
The work you do to understand the structure and contents of a database can be useful for others (including future\-you). So at the end of a session, you might look at all the data frames you want to save. Consider saving them in a form where you can add notes at the appropriate level (as in a Google Doc representing tables or columns that you annotate over time).
```
ls()
```
```
## [1] "columns_info_schema_info" "columns_info_schema_table"
## [3] "con" "constraint_column_usage"
## [5] "cranex" "key_column_usage"
## [7] "keys" "public_tables"
## [9] "referential_constraints" "rs"
## [11] "schema_list" "table_constraints"
## [13] "table_info" "table_info_schema_table"
## [15] "tables"
```
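One low\-tech way to preserve this work (the file name here is just an example) is to bundle the metadata data frames into a list and save it as an `.rds` file that you can reload and annotate later:
```
# Save the metadata gathered in this session for future reference.
metadata_notes <- list(
  table_info   = table_info,
  columns_info = columns_info_schema_info,
  keys         = keys
)
saveRDS(metadata_notes, "adventureworks-metadata.rds")

# Later: metadata_notes <- readRDS("adventureworks-metadata.rds")
```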
12\.8 Cleaning up
-----------------
Always have R disconnect from the database when you’re done and stop the adventureworks container.
```
dbDisconnect(con)
sp_docker_stop("adventureworks")
```
## 4 CK_Employee_Gender 16386 c FALSE FALSE
## 5 CK_Employee_HireDate 16386 c FALSE FALSE
## 6 CK_Employee_MaritalStatus 16386 c FALSE FALSE
## convalidated conrelid contypid conindid conparentid confrelid confupdtype
## 1 TRUE 0 12785 0 0 0
## 2 TRUE 0 12797 0 0 0
## 3 TRUE 16450 0 0 0 0
## 4 TRUE 16450 0 0 0 0
## 5 TRUE 16450 0 0 0 0
## 6 TRUE 16450 0 0 0 0
## confdeltype confmatchtype conislocal coninhcount connoinherit conkey confkey
## 1 TRUE 0 FALSE <NA> <NA>
## 2 TRUE 0 FALSE <NA> <NA>
## 3 TRUE 0 FALSE {5} <NA>
## 4 TRUE 0 FALSE {7} <NA>
## 5 TRUE 0 FALSE {8} <NA>
## 6 TRUE 0 FALSE {6} <NA>
## conpfeqop conppeqop conffeqop conexclop
## 1 <NA> <NA> <NA> <NA>
## 2 <NA> <NA> <NA> <NA>
## 3 <NA> <NA> <NA> <NA>
## 4 <NA> <NA> <NA> <NA>
## 5 <NA> <NA> <NA> <NA>
## 6 <NA> <NA> <NA> <NA>
## conbin
## 1 {OPEXPR :opno 525 :opfuncid 150 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({COERCETODOMAINVALUE :typeId 23 :typeMod -1 :collation 0 :location 195} {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 204 :constvalue 4 [ 0 0 0 0 0 0 0 0 ]}) :location 201}
## 2 {SCALARARRAYOPEXPR :opno 98 :opfuncid 67 :useOr true :inputcollid 100 :args ({RELABELTYPE :arg {COERCETODOMAINVALUE :typeId 1043 :typeMod 7 :collation 100 :location 121} :resulttype 25 :resulttypmod -1 :resultcollid 100 :relabelformat 2 :location -1} {ARRAYCOERCEEXPR :arg {ARRAY :array_typeid 1015 :array_collid 100 :element_typeid 1043 :elements ({CONST :consttype 1043 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 131 :constvalue 7 [ 28 0 0 0 89 69 83 ]} {CONST :consttype 1043 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 138 :constvalue 6 [ 24 0 0 0 78 79 ]}) :multidims false :location -1} :elemexpr {RELABELTYPE :arg {CASETESTEXPR :typeId 1043 :typeMod -1 :collation 0} :resulttype 25 :resulttypmod -1 :resultcollid 100 :relabelformat 2 :location -1} :resulttype 1009 :resulttypmod -1 :resultcollid 100 :coerceformat 2 :location -1}) :location 127}
## 3 {BOOLEXPR :boolop and :args ({OPEXPR :opno 1098 :opfuncid 1090 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 5 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 5 :location 804} {CONST :consttype 1082 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 817 :constvalue 4 [ 33 -100 -1 -1 -1 -1 -1 -1 ]}) :location 814} {OPEXPR :opno 2359 :opfuncid 2352 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 5 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 5 :location 842} {OPEXPR :opno 1329 :opfuncid 1190 :opresulttype 1184 :opretset false :opcollid 0 :inputcollid 0 :args ({FUNCEXPR :funcid 1299 :funcresulttype 1184 :funcretset false :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 0 :args <> :location 856} {CONST :consttype 1186 :consttypmod -1 :constcollid 0 :constlen 16 :constbyval false :constisnull false :location 864 :constvalue 16 [ 0 0 0 0 0 0 0 0 0 0 0 0 -40 0 0 0 ]}) :location 862}) :location 852}) :location 837}
## 4 {SCALARARRAYOPEXPR :opno 98 :opfuncid 67 :useOr true :inputcollid 100 :args ({FUNCEXPR :funcid 871 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 0 :funccollid 100 :inputcollid 100 :args ({FUNCEXPR :funcid 401 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 1 :funccollid 100 :inputcollid 100 :args ({VAR :varno 1 :varattno 7 :vartype 1042 :vartypmod 5 :varcollid 100 :varlevelsup 0 :varnoold 1 :varoattno 7 :location 941}) :location 948}) :location 934} {ARRAY :array_typeid 1009 :array_collid 100 :element_typeid 25 :elements ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 969 :constvalue 5 [ 20 0 0 0 77 ]} {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 980 :constvalue 5 [ 20 0 0 0 70 ]}) :multidims false :location 963}) :location 956}
## 5 {BOOLEXPR :boolop and :args ({OPEXPR :opno 1098 :opfuncid 1090 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 8 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 8 :location 1042} {CONST :consttype 1082 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 1054 :constvalue 4 [ 1 -5 -1 -1 -1 -1 -1 -1 ]}) :location 1051} {OPEXPR :opno 2359 :opfuncid 2352 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 8 :vartype 1082 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 8 :location 1079} {OPEXPR :opno 1327 :opfuncid 1189 :opresulttype 1184 :opretset false :opcollid 0 :inputcollid 0 :args ({FUNCEXPR :funcid 1299 :funcresulttype 1184 :funcretset false :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 0 :args <> :location 1092} {CONST :consttype 1186 :consttypmod -1 :constcollid 0 :constlen 16 :constbyval false :constisnull false :location 1100 :constvalue 16 [ 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 ]}) :location 1098}) :location 1088}) :location 1074}
## 6 {SCALARARRAYOPEXPR :opno 98 :opfuncid 67 :useOr true :inputcollid 100 :args ({FUNCEXPR :funcid 871 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 0 :funccollid 100 :inputcollid 100 :args ({FUNCEXPR :funcid 401 :funcresulttype 25 :funcretset false :funcvariadic false :funcformat 1 :funccollid 100 :inputcollid 100 :args ({VAR :varno 1 :varattno 6 :vartype 1042 :vartypmod 5 :varcollid 100 :varlevelsup 0 :varnoold 1 :varoattno 6 :location 1181}) :location 1195}) :location 1174} {ARRAY :array_typeid 1009 :array_collid 100 :element_typeid 25 :elements ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 1216 :constvalue 5 [ 20 0 0 0 77 ]} {CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false :constisnull false :location 1227 :constvalue 5 [ 20 0 0 0 83 ]}) :multidims false :location 1210}) :location 1203}
## consrc
## 1 (VALUE >= 0)
## 2 ((VALUE)::text = ANY ((ARRAY['YES'::character varying, 'NO'::character varying])::text[]))
## 3 ((birthdate >= '1930-01-01'::date) AND (birthdate <= (now() - '18 years'::interval)))
## 4 (upper((gender)::text) = ANY (ARRAY['M'::text, 'F'::text]))
## 5 ((hiredate >= '1996-07-01'::date) AND (hiredate <= (now() + '1 day'::interval)))
## 6 (upper((maritalstatus)::text) = ANY (ARRAY['M'::text, 'S'::text]))
## condef
## 1 CHECK (VALUE >= 0)
## 2 CHECK (VALUE::text = ANY (ARRAY['YES'::character varying, 'NO'::character varying]::text[]))
## 3 CHECK (birthdate >= '1930-01-01'::date AND birthdate <= (now() - '18 years'::interval))
## 4 CHECK (upper(gender::text) = ANY (ARRAY['M'::text, 'F'::text]))
## 5 CHECK (hiredate >= '1996-07-01'::date AND hiredate <= (now() + '1 day'::interval))
## 6 CHECK (upper(maritalstatus::text) = ANY (ARRAY['M'::text, 'S'::text]))
```
12\.6 Creating your own data dictionary
---------------------------------------
If you are going to work with a database for an extended period it can be useful to create your own data dictionary. This can take the form of [keeping detailed notes](https://caitlinhudon.com/2018/10/30/data-dictionaries/) as well as extracting metadata from the dbms. Here is an illustration of the idea.
*This probably doesn’t work anymore*
```
# some_tables <- c("rental", "city", "store")
#
# all_meta <- map_df(some_tables, sp_get_dbms_data_dictionary, con = con)
#
# all_meta
#
# glimpse(all_meta)
#
# sp_print_df(head(all_meta))
```
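Even if the helper above no longer works, the same idea can be sketched directly against `information_schema.columns`. The following is a minimal sketch, assuming an open connection `con` and the tidyverse/dbplyr packages loaded; the schema names are placeholders for the ones you care about.
```
# Pull column-level metadata and add a column for your own notes.
columns <- tbl(con, dbplyr::in_schema("information_schema", "columns"))

my_data_dictionary <- columns %>%
  filter(table_schema %in% c("hr", "sales")) %>%   # placeholder schemas
  select(table_schema, table_name, ordinal_position,
         column_name, data_type, is_nullable) %>%
  arrange(table_schema, table_name, ordinal_position) %>%
  collect() %>%
  mutate(notes = "")   # annotate this column over time

glimpse(my_data_dictionary)
```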
12\.7 Save your work!
---------------------
The work you do to understand the structure and contents of a database can be useful for others (including future\-you). So at the end of a session, you might look at all the data frames you want to save. Consider saving them in a form where you can add notes at the appropriate level (as in a Google Doc representing table or columns that you annotate over time).
```
ls()
```
```
## [1] "columns_info_schema_info" "columns_info_schema_table"
## [3] "con" "constraint_column_usage"
## [5] "cranex" "key_column_usage"
## [7] "keys" "public_tables"
## [9] "referential_constraints" "rs"
## [11] "schema_list" "table_constraints"
## [13] "table_info" "table_info_schema_table"
## [15] "tables"
```
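For example, here is a minimal sketch of saving one of those data frames locally; the directory and file names are only illustrations.
```
# Save the metadata you've collected so future-you can annotate it.
dir.create("adventureworks_notes", showWarnings = FALSE)

saveRDS(keys, file.path("adventureworks_notes", "keys.rds"))           # exact R object
readr::write_csv(keys, file.path("adventureworks_notes", "keys.csv"))  # easy to import into a Google Sheet
```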
Cleaning up
-----------
Always have R disconnect from the database when you’re done, and stop the `adventureworks` container.
```
dbDisconnect(con)
sp_docker_stop("adventureworks")
```
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/appendix-background-basic-concepts.html |
A Background and Basic Concepts
===============================
---
> This Appendix describes:
>
>
> * The overall structure of our Docker\-based PostgreSQL sandbox
> * Basic concepts around each of the elements that make up our sandbox: tidy data, pipes, Docker, PostgreSQL, data representation, and our `sqlpetr` package.
---
A.1 The big picture: R and the Docker / PostgreSQL playground on your machine
-----------------------------------------------------------------------------
Here is an overview of how R and Docker fit on your operating system in this book’s sandbox:
R and Docker
You run R from RStudio to set up Docker, launch PostgreSQL inside it and then send queries directly to PostgreSQL from R. (We provide more details about our sandbox environment in the chapter on [mapping your environment](#chapter_appendix-sandbox-environment).)
A.2 Your computer and its operating system
------------------------------------------
The playground that we construct in this book is designed so that some of the mysteries of accessing a corporate database are more visible – it’s all happening on *your computer*. The challenge, however, is that we know very little about your computer and its operating system. In the workshops we’ve given about this book, the details of individual computers have turned out to be diverse and difficult to pin down in advance. So there can be many issues, but not many basic concepts that we can highlight in advance.
A.3 R
-----
We assume a general familiarity with R and RStudio. RStudio’s Big Data workshop at the 2019 RStudio conference has an abundance of introductory material (Ruiz [2019](#ref-Ruiz2019)).
This book is [Tidyverse\-oriented](https://www.tidyverse.org), so we assume familiarity with the pipe operator, tidy data (Wickham [2014](#ref-Wickham2014)), dplyr, and techniques for tidying data (Wickham [2018](#ref-Wickham2018)).
R connects to a database by means of a series of packages that work together. The following diagram from a [big data workshop](https://github.com/rstudio/bigdataclass) at the 2019 RStudio conference shows the big picture. The biggest difference in retrieval strategies is between writing `dplyr` and native `SQL` code: dplyr generates [SQL\-92 standard](https://en.wikipedia.org/wiki/SQL-92) code, whereas SQL code you write yourself can leverage the specific language features of your DBMS.
Rstudio’s DBMS architecture \- slide \# 33
A.4 Our `sqlpetr` package
-------------------------
The `sqlpetr` package is the companion R package for this database tutorial. It has two classes of functions:
* Functions to install the dependencies needed to build the book and perform the operations covered in the tutorial, and
* Utilities for dealing with Docker and the PostgreSQL Docker image we use.
`sqlpetr` has a pkgdown site at <https://smithjd.github.io/sqlpetr/>.
A.5 Docker
----------
Docker and the DevOps tools surrounding it have fostered a revolution in the way services are delivered over the internet. In this book, we’re piggybacking on a small piece of that revolution, Docker on the desktop.
### A.5\.1 Virtual machines and hypervisors
A *virtual machine* is a machine that is running purely as software hosted by another real machine. To the user, a virtual machine looks just like a real one. But it has no processors, memory or I/O devices of its own \- all of those are supplied and managed by the host.
A virtual machine can run any operating system that will run on the host’s hardware. A Linux host can run a Windows virtual machine and vice versa.
A *hypervisor* is the component of the host system software that manages virtual machines, usually called *guests*. Linux systems have a native hypervisor called *Kernel Virtual Machine* (`kvm`). And laptop, desktop and server processors from Intel and Advanced Micro Devices (AMD) have hardware that makes this hypervisor more efficient.
Windows servers and Windows 10 Pro have a hypervisor called *Hyper\-V*. Like `kvm`, `Hyper-V` can take advantage of the hardware in Intel and AMD processors. On Macintosh, there is a *Hypervisor Framework* (<https://developer.apple.com/documentation/hypervisor>) and other tools build on that.
If this book is about Docker, why do we care about virtual machines and hypervisors? Docker is a Linux subsystem \- it only runs on Linux laptops, desktops and servers. As we’ll see shortly, if we want to run Docker on Windows or MacOS, we’ll need a hypervisor, a Linux virtual machine and some “glue logic” to provide a Docker user experience equivalent to the one on a Linux system.
### A.5\.2 Containers
A *container* is a set of processes running in an operating system. The host operating system is usually Linux, but other operating systems also can host containers.
Unlike a virtual machine, the container has no operating system kernel of its own. If the host is running the Linux kernel, so is the container. And since the container OS is the same as the host OS, there’s no need for a hypervisor or hardware to support the hypervisor. So a container is more efficient than a virtual machine.
A container **does** have its own file system. From inside the container, this file system looks like a Linux file system, but it can use any Linux distro. For example, you can have an Ubuntu 18\.04 LTS host running Ubuntu 14\.04 LTS or Fedora 28 or CentOS 7 containers. The kernel will always be the host kernel, but the utilities and applications will be those from the container.
### A.5\.3 Docker itself
While there are both older (*lxc*) and newer container tools, the one that has caught on in terms of widespread use is *Docker* (Docker [2019](#ref-Docker2019a)[a](#ref-Docker2019a)). Docker is widely used on cloud providers to deploy services of all kinds. Using Docker on the desktop to deliver standardized packages, as we are doing in this book, is a secondary use case, but a common one.
If you’re using a Linux laptop / desktop, all you need to do is install Docker CE (Docker [2018](#ref-Docker2018b)[a](#ref-Docker2018b)). However, most laptops and desktops don’t run Linux \- they run Windows or MacOS. As noted above, to use Docker on Windows or MacOS, you need a hypervisor and a Linux virtual machine.
### A.5\.4 Docker objects
The Docker subsystem manages several kinds of objects \- containers, images, volumes and networks. In this book, we are only using the basic command line tools to manage containers, images and volumes.
Docker `images` are files that define a container’s initial file system. You can find pre\-built images on Docker Hub and the Docker Store \- the base PostgreSQL image we use comes from Docker Hub (<https://hub.docker.com/_/postgres/>). If there isn’t a Docker image that does exactly what you want, you can build your own by creating a Dockerfile and running `docker build`. We do this in \[Build the pet\-sql Docker Image].
Docker `volumes` are storage areas managed by Docker that can be mounted into a container’s file system (for example with `docker run`’s `--mount` option), so data persists independently of any single container.
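The command line tools can also be driven from R. The following is a small sketch using `system2()`; the commands are ordinary Docker CLI commands and their output is returned as character vectors.
```
# Basic Docker object management from R via the Docker command line.
system2("docker", c("ps", "-a"), stdout = TRUE)       # containers, running or stopped
system2("docker", c("images"), stdout = TRUE)         # locally available images
system2("docker", c("volume", "ls"), stdout = TRUE)   # volumes
```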
### A.5\.5 Hosting Docker on Windows machines
There are two ways to get Docker on Windows. For Windows 10 Home and older versions of Windows, you need Docker Toolbox (Docker [2019](#ref-Docker2019b)[e](#ref-Docker2019b)). Note that for Docker Toolbox, you need a 64\-bit AMD or Intel processor with the virtualization hardware installed and enabled in the BIOS.
For Windows 10 Pro, you have the Hyper\-V virtualizer as standard equipment, and can use Docker for Windows (Docker [2019](#ref-Docker2019c)[c](#ref-Docker2019c)).
### A.5\.6 Hosting Docker on macOS machines
As with Windows, there are two ways to get Docker. For older Intel systems, you’ll need Docker Toolbox (Docker [2019](#ref-Docker2019d)[d](#ref-Docker2019d)). Newer systems (2010 or later running at least macOS El Capitan 10\.11\) can run Docker for Mac (Docker [2019](#ref-Docker2019e)[b](#ref-Docker2019e)).
### A.5\.7 Hosting Docker on UNIX machines
Unix was the original host for both R and Docker, so Unix\-like commands show up throughout this book; on Linux machines, as noted above, Docker runs natively with no hypervisor or virtual machine required.
A.6 ‘Normal’ and ‘normalized’ data
----------------------------------
### A.6\.1 Tidy data
Tidy data (Wickham [2014](#ref-Wickham2014)) is well\-behaved from the point of view of analysis and tools in the Tidyverse (RStudio [2019](#ref-RStudio2019)). Tidy data is easier to think about and it is usually worthwhile to make the data tidy (Wickham [2018](#ref-Wickham2018)). Tidy data is roughly equivalent to *third normal form* as discussed below.
### A.6\.2 Design of “normal data”
Data in a database is most often optimized to minimize storage space and increase performance while preserving integrity when adding, changing, or deleting data. The Wikipedia article on Database Normalization has a good introduction to the characteristics of “normal” data and the process of re\-organizing it to meet those desirable criteria (Wikipedia [2019](#ref-Wikipedia2019)). The bottom line is that “data normalization is practical” although there are mathematical arguments for normalization based on the preservation of data integrity.
A.7 SQL Language
----------------
SQL stands for Structured Query Language. It is a database language with which we can query and modify an existing database as well as create new databases. SQL commands fall into four main categories: DML, DDL, DCL, and TCL.
### A.7\.1 Data Manipulation Language (DML)
These four SQL commands deal with the manipulation of data in the database. For everyday analytical work, these are the commands that you will use the most.
```
1. SELECT
2. INSERT
3. UPDATE
4. DELETE
```
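As a sketch of what these look like from R, the following uses DBI with an open connection `con`; the `scratch_pet` table is hypothetical and is created here only so the statements have something to act on.
```
# DML from R: dbGetQuery() returns a data frame, dbExecute() returns a row count.
dbExecute(con, "CREATE TEMPORARY TABLE scratch_pet (name TEXT, age INT)")  # (CREATE is DDL, covered next)

dbExecute(con, "INSERT INTO scratch_pet VALUES ('Fido', 3), ('Rex', 7)")
dbGetQuery(con, "SELECT name, age FROM scratch_pet WHERE age > 5")
dbExecute(con, "UPDATE scratch_pet SET age = age + 1 WHERE name = 'Fido'")
dbExecute(con, "DELETE FROM scratch_pet WHERE name = 'Rex'")
```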
### A.7\.2 Data Definition Language (DDL)
DDL consists of the SQL commands used to define or modify a database schema. The DDL commands include:
```
1. CREATE
2. ALTER
3. TRUNCATE
4. COMMENT
5. RENAME
6. DROP
```
### A.7\.3 Data Control Language (DCL)
The DCL commands deal with user rights, permissions, and other access controls in the database management system.
```
1. GRANT
2. REVOKE
```
### A.7\.4 Transaction Control Language (TCL)
These commands control transactions within the database. A transaction combines a set of tasks into a single unit of execution.
```
1. SET TRANSACTION
2. SAVEPOINT
3. ROLLBACK
4. COMMIT
```
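DBI exposes transaction control directly, so a sketch of TCL from R might look like the following; it assumes an open connection `con` and the hypothetical `scratch_pet` table from the DML sketch above.
```
# A transaction groups statements so they succeed or fail together.
dbBegin(con)                                      # start a transaction (BEGIN)
dbExecute(con, "UPDATE scratch_pet SET age = 0")
dbRollback(con)                                   # undo everything since dbBegin()

dbBegin(con)
dbExecute(con, "UPDATE scratch_pet SET age = age + 1")
dbCommit(con)                                     # make the change permanent
```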
A.8 Enterprise DBMS
-------------------
The organizational context of a database matters just as much as its design characteristics. The design of a database (or *data model*) may have been purchased from an external vendor or developed in\-house. In either case time has a tendency to erode the original design concept so that the data you find in a DBMS may not quite match the original design specification. And the original design may or may not be well reflected in the current naming of tables, columns and other objects.
It’s a naive misconception to think that the data you are analyzing just “comes from the database”, although that’s literally true and may be the step that happens before you get your hands on it. In fact it comes from the people who design, enter, manage, protect, and use your organization’s data. In practice, a [database administrator](https://en.wikipedia.org/wiki/Database_administrator) (DBA) is often a key point of contact in terms of access and may have stringent criteria for query performance. Make friends with your DBA.
### A.8\.1 SQL databases
Although there are [ANSI standards](https://en.wikipedia.org/wiki/SQL#Interoperability_and_standardization) for [SQL syntax](https://en.wikipedia.org/wiki/SQL_syntax), different implementations vary in enough details that R’s ability to customize queries for those implementations is very helpful.
A table in a DBMS corresponds to a data frame in R, so interaction with a DBMS is fairly natural for useRs.
SQL code is characterized by the fact that it describes *what* to retrieve, leaving the DBMS back end to determine how to do it. Therefore it has a *batch* feel. The pipe operator (`%>%`, which is read as *and then*) is inherently procedural when it’s used with dplyr: it can be used to construct queries step\-by\-step. Once a test dplyr query has been executed, it is easy to inspect the results and add steps with the pipe operator to refine or expand the query.
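The following is a small sketch of that step\-by\-step style, assuming an open connection `con`; the table and column names are placeholders.
```
# Build a query lazily, inspect the generated SQL, then pull the result.
q <- tbl(con, "some_table") %>%
  filter(amount > 100) %>%
  group_by(status) %>%
  summarize(n = n(), total = sum(amount, na.rm = TRUE))

show_query(q)   # the SQL that dbplyr generated; nothing has run on the server yet
collect(q)      # execute on the DBMS and bring the result into R
```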
### A.8\.2 Data mapping between R vs SQL data types
The following code shows how different elements of the R bestiary are translated to and from ANSI standard data types. Note that R factors are translated as `TEXT` so that missing levels are ignored on the SQL side.
```
library(DBI)
dbDataType(ANSI(), 1:5)
```
```
## [1] "INT"
```
```
dbDataType(ANSI(), 1)
```
```
## [1] "DOUBLE"
```
```
dbDataType(ANSI(), TRUE)
```
```
## [1] "SMALLINT"
```
```
dbDataType(ANSI(), Sys.Date())
```
```
## [1] "DATE"
```
```
dbDataType(ANSI(), Sys.time())
```
```
## [1] "TIMESTAMP"
```
```
dbDataType(ANSI(), Sys.time() - as.POSIXct(Sys.Date()))
```
```
## [1] "TIME"
```
```
dbDataType(ANSI(), c("x", "abc"))
```
```
## [1] "TEXT"
```
```
dbDataType(ANSI(), list(raw(10), raw(20)))
```
```
## [1] "BLOB"
```
```
dbDataType(ANSI(), I(3))
```
```
## [1] "DOUBLE"
```
```
dbDataType(ANSI(), iris)
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## "DOUBLE" "DOUBLE" "DOUBLE" "DOUBLE" "TEXT"
```
The [DBI specification](https://cran.r-project.org/web/packages/DBI/vignettes/spec.html) provides extensive documentation that is worth digesting if you intend to work with a DBMS from R. As you work through the examples in this book, you will also want to refer to the following resources:
* RStudio’s [Databases using R](https://db.rstudio.com) site describes many of the technical details involved.
* The [RStudio community](https://community.rstudio.com/tags/database) is an excellent place to ask questions or study what has been discussed previously.
### A.8\.3 PostgreSQL and connection parameters
An **important detail:** We use a PostgreSQL database server running in a Docker container for the database functions. It is installed inside Docker, so you do not have to download or install it yourself. To connect to it, you have to define some parameters. These parameters are used in two places:
1. When the Docker container is created, they’re used to initialize the database, and
2. Whenever we connect to the database, we need to specify them to authenticate.
We define the parameters in an environment file that R reads when starting up. The file is called `.Renviron`, and is located in your home directory. See the discussion of [securing and using dbms credentials](chapter-appendix-postresql-authentication.html#chapter_appendix-postresql-authentication).
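As a sketch, the `.Renviron` entries and the corresponding connection call might look like the following; the variable names, port, and database name are examples, not requirements.
```
# Example .Renviron entries (one per line, no quotes):
#   DEFAULT_POSTGRES_USER_NAME=postgres
#   DEFAULT_POSTGRES_PASSWORD=postgres

# R reads .Renviron at startup, so the values are available via Sys.getenv():
con <- DBI::dbConnect(
  RPostgres::Postgres(),
  host     = "localhost",
  port     = 5432,
  user     = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
  password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"),
  dbname   = "adventureworks"
)
```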
### A.8\.4 Connecting the R and DBMS environments
Although everything happens on one machine in our Docker / PostgreSQL playground, in real life R and PostgreSQL (or other DBMS) will be in different environments on separate machines. How R connects them gives you control over where the work happens. You need to be aware of the differences between the R and DBMS environments as well as how you can leverage the strengths of each one.
**Characteristics of local vs. server processing**
| Dimension | Local | Remote |
| --- | --- | --- |
| Design purpose | The R environment on your local machine is designed to be flexible and easy to use; ideal for data investigation. | The DBMS environment is designed for large and complex databases where data integrity is more important than flexibility or ease of use. |
| Processor power | Your local machine has less memory, speed, and storage than the typical database server. | Database servers are specialized, more expensive, and have more power. |
| Memory constraint | In R, query results must fit into memory. | Servers have a lot of memory and write intermediate results to disk if needed without you knowing about it. |
| Data crunching | Data lives in the DBMS, so crunching it down locally requires you to pull it over the network. | A DBMS has powerful data crunching capabilities once you know what you want and moves data over the server backbone to crunch it. |
| Security | Local control. Whether it is good or not depends on you. | Responsibility of database administrators who set the rules. You play by their rules. |
| Storage of intermediate results | Very easy to save a data frame with intermediate results locally. | May require extra privileges to save results in the database. |
| Analytical resources | Ecosystem of available R packages | Extending SQL instruction set involves dbms\-specific functions or R pseudo functions |
| Collaboration | One person working on a few data.frames. | Many people collaborating on *many* tables. |
### A.8\.5 Using SQLite to simulate an enterprise DBMS
An SQLite database is stored in a single file, so many tables are stored together in one object. SQL commands can be run against an SQLite database, as demonstrated by the many uses of SQLite in the [RStudio `dbplyr` documentation](https://dbplyr.tidyverse.org).
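A minimal sketch: the `RSQLite` package provides an in\-memory database that behaves like any other DBI connection, which is handy for experimenting with SQL without a server.
```
library(DBI)
sqlite_con <- dbConnect(RSQLite::SQLite(), ":memory:")  # a throwaway, in-memory database
dbWriteTable(sqlite_con, "iris", iris)

dbGetQuery(sqlite_con, "SELECT Species, COUNT(*) AS n FROM iris GROUP BY Species")
dbDisconnect(sqlite_con)
```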
A.1 The big picture: R and the Docker / PostgreSQL playground on your machine
-----------------------------------------------------------------------------
Here is an overview of how R and Docker fit on your operating system in this book’s sandbox:
R and Docker
You run R from RStudio to set up Docker, launch PostgreSQL inside it and then send queries directly to PostgreSQL from R. (We provide more details about our sandbox environment in the chapter on [mapping your environment](#chapter_appendix-sandbox-environment).
A.2 Your computer and its operating system
------------------------------------------
The playground that we construct in this book is designed so that some of the mysteries of accessing a corporate database are more visible – it’s all happening on *your computer*. The challenge, however, is that we know very little about your computer and its operating system. In the workshops we’ve given about this book, the details of individual computers have turned out to be diverse and difficult to pin down in advance. So there can be many issues, but not many basic concepts that we can highlight in advance.
A.3 R
-----
We assume a general familiarity with R and RStudio. RStudio’s Big Data workshop at the 2019 RStudio has an abundance of introductory material (Ruiz [2019](#ref-Ruiz2019)).
This book is [Tidyverse\-oriented](https://www.tidyverse.org), so we assume familiarity with the pipe operator, tidy data (Wickham [2014](#ref-Wickham2014)), dplyr, and techniques for tidying data (Wickham [2018](#ref-Wickham2018)).
R connects to a database by means of a series of packages that work together. The following diagram from a [big data workshop](https://github.com/rstudio/bigdataclass) at the 2019 RStudio conference shows the big picture. The biggest difference in terms of retrieval strategies is between writing `dplyr` and native `SQL` code. Dplyr generates [SQL\-92 standard](https://en.wikipedia.org/wiki/SQL-92) code; whereas you can write SQL code that leverages the specific language features of your DBMS when you write SQL code yourself.
Rstudio’s DBMS architecture \- slide \# 33
A.4 Our `sqlpetr` package
-------------------------
The `sqlpetr` package is the companion R package for this database tutorial. It has two classes of functions:
* Functions to install the dependencies needed to build the book and perform the operations covered in the tutorial, and
* Utilities for dealing with Docker and the PostgreSQL Docker image we use.
`sqlpetr` has a pkgdown site at <https://smithjd.github.io/sqlpetr/>.
A.5 Docker
----------
Docker and the DevOps tools surrounding it have fostered a revolution in the way services are delivered over the internet. In this book, we’re piggybacking on a small piece of that revolution, Docker on the desktop.
### A.5\.1 Virtual machines and hypervisors
A *virtual machine* is a machine that is running purely as software hosted by another real machine. To the user, a virtual machine looks just like a real one. But it has no processors, memory or I/O devices of its own \- all of those are supplied and managed by the host.
A virtual machine can run any operating system that will run on the host’s hardware. A Linux host can run a Windows virtual machine and vice versa.
A *hypervisor* is the component of the host system software that manages virtual machines, usually called *guests*. Linux systems have a native hypervisor called *Kernel Virtual Machine* (`kvm`). And laptop, desktop and server processors from Intel and Advanced Micro Devices (AMD) have hardware that makes this hypervisor more efficient.
Windows servers and Windows 10 Pro have a hypervisor called *Hyper\-V*. Like `kvm`, `Hyper-V` can take advantage of the hardware in Intel and AMD processors. On Macintosh, there is a *Hypervisor Framework* (<https://developer.apple.com/documentation/hypervisor>) and other tools build on that.
If this book is about Docker, why do we care about virtual machines and hypervisors? Docker is a Linux subsystem \- it only runs on Linux laptops, desktops and servers. As we’ll see shortly, if we want to run Docker on Windows or MacOS, we’ll need a hypervisor, a Linux virtual machine and some “glue logic” to provide a Docker user experience equivalent to the one on a Linux system.
### A.5\.2 Containers
A *container* is a set of processes running in an operating system. The host operating system is usually Linux, but other operating systems also can host containers.
Unlike a virtual machine, the container has no operating system kernel of its own. If the host is running the Linux kernel, so is the container. And since the container OS is the same as the host OS, there’s no need for a hypervisor or hardware to support the hypervisor. So a container is more efficient than a virtual machine.
A container **does** have its own file system. From inside the container, this file system looks like a Linux file system, but it can use any Linux distro. For example, you can have an Ubuntu 18\.04 LTS host running Ubuntu 14\.04 LTS or Fedora 28 or CentOS 7 containers. The kernel will always be the host kernel, but the utilities and applications will be those from the container.
### A.5\.3 Docker itself
While there are both older (*lxc*) and newer container tools, the one that has caught on in terms of widespread use is *Docker* (Docker [2019](#ref-Docker2019a)[a](#ref-Docker2019a)). Docker is widely used on cloud providers to deploy services of all kinds. Using Docker on the desktop to deliver standardized packages, as we are doing in this book, is a secondary use case, but a common one.
If you’re using a Linux laptop / desktop, all you need to do is install Docker CE (Docker [2018](#ref-Docker2018b)[a](#ref-Docker2018b)). However, most laptops and desktops don’t run Linux \- they run Windows or MacOS. As noted above, to use Docker on Windows or MacOS, you need a hypervisor and a Linux virtual machine.
### A.5\.4 Docker objects
The Docker subsystem manages several kinds of objects \- containers, images, volumes and networks. In this book, we are only using the basic command line tools to manage containers, images and volumes.
Docker `images` are files that define a container’s initial file system. You can find pre\-built images on Docker Hub and the Docker Store \- the base PostgreSQL image we use comes from Docker Hub (<https://hub.docker.com/_/postgres/>). If there isn’t a Docker image that does exactly what you want, you can build your own by creating a Dockerfile and running `docker build`. We do this in \[Build the pet\-sql Docker Image].
Docker `volumes` – explain `mount`.
### A.5\.5 Hosting Docker on Windows machines
There are two ways to get Docker on Windows. For Windows 10 Home and older versions of Windows, you need Docker Toolbox (Docker [2019](#ref-Docker2019b)[e](#ref-Docker2019b)). Note that for Docker Toolbox, you need a 64\-bit AMD or Intel processor with the virtualization hardware installed and enabled in the BIOS.
For Windows 10 Pro, you have the Hyper\-V virtualizer as standard equipment, and can use Docker for Windows (Docker [2019](#ref-Docker2019c)[c](#ref-Docker2019c)).
### A.5\.6 Hosting Docker on macOS machines
As with Windows, there are two ways to get Docker. For older Intel systems, you’ll need Docker Toolbox (Docker [2019](#ref-Docker2019d)[d](#ref-Docker2019d)). Newer systems (2010 or later running at least macOS El Capitan 10\.11\) can run Docker for Mac (Docker [2019](#ref-Docker2019e)[b](#ref-Docker2019e)).
### A.5\.7 Hosting Docker on UNIX machines
Unix was the original host for both R and Docker. Unix\-like commands show up.
### A.5\.1 Virtual machines and hypervisors
A *virtual machine* is a machine that is running purely as software hosted by another real machine. To the user, a virtual machine looks just like a real one. But it has no processors, memory or I/O devices of its own \- all of those are supplied and managed by the host.
A virtual machine can run any operating system that will run on the host’s hardware. A Linux host can run a Windows virtual machine and vice versa.
A *hypervisor* is the component of the host system software that manages virtual machines, usually called *guests*. Linux systems have a native hypervisor called *Kernel Virtual Machine* (`kvm`). And laptop, desktop and server processors from Intel and Advanced Micro Devices (AMD) have hardware that makes this hypervisor more efficient.
Windows servers and Windows 10 Pro have a hypervisor called *Hyper\-V*. Like `kvm`, `Hyper-V` can take advantage of the hardware in Intel and AMD processors. On Macintosh, there is a *Hypervisor Framework* (<https://developer.apple.com/documentation/hypervisor>) and other tools build on that.
If this book is about Docker, why do we care about virtual machines and hypervisors? Docker is a Linux subsystem \- it only runs on Linux laptops, desktops and servers. As we’ll see shortly, if we want to run Docker on Windows or MacOS, we’ll need a hypervisor, a Linux virtual machine and some “glue logic” to provide a Docker user experience equivalent to the one on a Linux system.
### A.5\.2 Containers
A *container* is a set of processes running in an operating system. The host operating system is usually Linux, but other operating systems also can host containers.
Unlike a virtual machine, the container has no operating system kernel of its own. If the host is running the Linux kernel, so is the container. And since the container OS is the same as the host OS, there’s no need for a hypervisor or hardware to support the hypervisor. So a container is more efficient than a virtual machine.
A container **does** have its own file system. From inside the container, this file system looks like a Linux file system, but it can use any Linux distro. For example, you can have an Ubuntu 18\.04 LTS host running Ubuntu 14\.04 LTS or Fedora 28 or CentOS 7 containers. The kernel will always be the host kernel, but the utilities and applications will be those from the container.
### A.5\.3 Docker itself
While there are both older (*lxc*) and newer container tools, the one that has caught on in terms of widespread use is *Docker* (Docker [2019](#ref-Docker2019a)[a](#ref-Docker2019a)). Docker is widely used on cloud providers to deploy services of all kinds. Using Docker on the desktop to deliver standardized packages, as we are doing in this book, is a secondary use case, but a common one.
If you’re using a Linux laptop / desktop, all you need to do is install Docker CE (Docker [2018](#ref-Docker2018b)[a](#ref-Docker2018b)). However, most laptops and desktops don’t run Linux \- they run Windows or MacOS. As noted above, to use Docker on Windows or MacOS, you need a hypervisor and a Linux virtual machine.
### A.5\.4 Docker objects
The Docker subsystem manages several kinds of objects \- containers, images, volumes and networks. In this book, we are only using the basic command line tools to manage containers, images and volumes.
Docker `images` are files that define a container’s initial file system. You can find pre\-built images on Docker Hub and the Docker Store \- the base PostgreSQL image we use comes from Docker Hub (<https://hub.docker.com/_/postgres/>). If there isn’t a Docker image that does exactly what you want, you can build your own by creating a Dockerfile and running `docker build`. We do this in \[Build the pet\-sql Docker Image].
Docker `volumes` – explain `mount`.
### A.5\.5 Hosting Docker on Windows machines
There are two ways to get Docker on Windows. For Windows 10 Home and older versions of Windows, you need Docker Toolbox (Docker [2019](#ref-Docker2019b)[e](#ref-Docker2019b)). Note that for Docker Toolbox, you need a 64\-bit AMD or Intel processor with the virtualization hardware installed and enabled in the BIOS.
For Windows 10 Pro, you have the Hyper\-V virtualizer as standard equipment, and can use Docker for Windows (Docker [2019](#ref-Docker2019c)[c](#ref-Docker2019c)).
### A.5\.6 Hosting Docker on macOS machines
As with Windows, there are two ways to get Docker. For older Intel systems, you’ll need Docker Toolbox (Docker [2019](#ref-Docker2019d)[d](#ref-Docker2019d)). Newer systems (2010 or later running at least macOS El Capitan 10\.11\) can run Docker for Mac (Docker [2019](#ref-Docker2019e)[b](#ref-Docker2019e)).
### A.5\.7 Hosting Docker on UNIX machines
Unix was the original host for both R and Docker. Unix\-like commands show up.
A.6 ‘Normal’ and ‘normalized’ data
----------------------------------
### A.6\.1 Tidy data
Tidy data (Wickham [2014](#ref-Wickham2014)) is well\-behaved from the point of view of analysis and tools in the Tidyverse (RStudio [2019](#ref-RStudio2019)). Tidy data is easier to think about and it is usually worthwhile to make the data tidy (Wickham [2018](#ref-Wickham2018)). Tidy data is roughly equivalent to *third normal form* as discussed below.
### A.6\.2 Design of “normal data”
Data in a database is most often optimized to minimize storage space and increase performance while preserving integrity when adding, changing, or deleting data. The Wikipedia article on Database Normalization has a good introduction to the characteristics of “normal” data and the process of re\-organizing it to meet those desirable criteria (Wikipedia [2019](#ref-Wikipedia2019)). The bottom line is that “data normalization is practical” although there are mathematical arguments for normalization based on the preservation of data integrity.
### A.6\.1 Tidy data
Tidy data (Wickham [2014](#ref-Wickham2014)) is well\-behaved from the point of view of analysis and tools in the Tidyverse (RStudio [2019](#ref-RStudio2019)). Tidy data is easier to think about and it is usually worthwhile to make the data tidy (Wickham [2018](#ref-Wickham2018)). Tidy data is roughly equivalent to *third normal form* as discussed below.
### A.6\.2 Design of “normal data”
Data in a database is most often optimized to minimize storage space and increase performance while preserving integrity when adding, changing, or deleting data. The Wikipedia article on Database Normalization has a good introduction to the characteristics of “normal” data and the process of re\-organizing it to meet those desirable criteria (Wikipedia [2019](#ref-Wikipedia2019)). The bottom line is that “data normalization is practical” although there are mathematical arguments for normalization based on the preservation of data integrity.
A.7 SQL Language
----------------
SQL stands for Structured Query Language. It is a database language where we can perform certain operations on the existing database and we can use it create a new database. There are four main categories where the SQL commands fall into: DML, DDL, DCL, and TCL.
### A.7\.1 Data Manipulation Langauge (DML)
These four SQL commands deal with the manipulation of data in the database. For everyday analytical work, these are the commands that you will use the most.
```
1. SELECT
2. INSERT
3. UPDATE
4. DELETE
```
### A.7\.2 Data Definition Langauge (DDL)
It consists of the SQL commands that can be used to define a database schema. The DDL commands include:
```
1. CREATE
2. ALTER
3. TRUNCATE
4. COMMENT
5. RENAME
6. DROP
```
### A.7\.3 Data Control Language (DCL)
The DCL commands deals with user rights, permissions and other controls in database management system.
```
1. GRANT
2. REVOKE
```
### A.7\.4 Transaction Control Language (TCL)
These commands deal with the control over transaction within the database. Transaction combines a set of tasks into single execution.
```
1. SET TRANSACTION
2. SAVEPOINT
3. ROLLBACK
4. COMMIT
```
### A.7\.1 Data Manipulation Langauge (DML)
These four SQL commands deal with the manipulation of data in the database. For everyday analytical work, these are the commands that you will use the most.
```
1. SELECT
2. INSERT
3. UPDATE
4. DELETE
```
### A.7\.2 Data Definition Langauge (DDL)
It consists of the SQL commands that can be used to define a database schema. The DDL commands include:
```
1. CREATE
2. ALTER
3. TRUNCATE
4. COMMENT
5. RENAME
6. DROP
```
### A.7\.3 Data Control Language (DCL)
The DCL commands deals with user rights, permissions and other controls in database management system.
```
1. GRANT
2. REVOKE
```
### A.7\.4 Transaction Control Language (TCL)
These commands deal with the control over transaction within the database. Transaction combines a set of tasks into single execution.
```
1. SET TRANSACTION
2. SAVEPOINT
3. ROLLBACK
4. COMMIT
```
A.8 Enterprise DBMS
-------------------
The organizational context of a database matters just as much as its design characteristics. The design of a database (or *data model*) may have been purchased from an external vendor or developed in\-house. In either case time has a tendency to erode the original design concept so that the data you find in a DBMS may not quite match the original design specification. And the original design may or may not be well reflected in the current naming of tables, columns and other objects.
It’s a naive misconception to think that the data you are analyzing just “comes from the database”, although that’s literally true and may be the step that happens before you get your hands on it. In fact it comes from the people who design, enter, manage, protect, and use your organization’s data. In practice, a [database administrator](https://en.wikipedia.org/wiki/Database_administrator) (DBA) is often a key point of contact in terms of access and may have stringent criteria for query performance. Make friends with your DBA.
### A.8\.1 SQL databases
Although there are [ANSI standards](https://en.wikipedia.org/wiki/SQL#Interoperability_and_standardization) for [SQL syntax](https://en.wikipedia.org/wiki/SQL_syntax), different implementations vary in enough details that R’s ability to customize queries for those implementations is very helpful.
The tables in a DBMS correspond to a data frame in R, so interaction with a DBMS is fairly natural for useRs.
SQL code is characterized by the fact that it describes *what* to retrieve, leaving the DBMS back end to determine how to do it. Therefore it has a *batch* feel. The pipe operator (`%>%`, which is read as *and then*) is inherently procedural when it’s used with dplyr: it can be used to construct queries step\-by\-step. Once a test dplyr query has been executed, it is easy to inspect the results and add steps with the pipe operator to refine or expand the query.
### A.8\.2 Data mapping between R vs SQL data types
The following code shows how different elements of the R bestiary are translated to and from ANSI standard data types. Note that R factors are translated as `TEXT` so that missing levels are ignored on the SQL side.
```
library(DBI)
dbDataType(ANSI(), 1:5)
```
```
## [1] "INT"
```
```
dbDataType(ANSI(), 1)
```
```
## [1] "DOUBLE"
```
```
dbDataType(ANSI(), TRUE)
```
```
## [1] "SMALLINT"
```
```
dbDataType(ANSI(), Sys.Date())
```
```
## [1] "DATE"
```
```
dbDataType(ANSI(), Sys.time())
```
```
## [1] "TIMESTAMP"
```
```
dbDataType(ANSI(), Sys.time() - as.POSIXct(Sys.Date()))
```
```
## [1] "TIME"
```
```
dbDataType(ANSI(), c("x", "abc"))
```
```
## [1] "TEXT"
```
```
dbDataType(ANSI(), list(raw(10), raw(20)))
```
```
## [1] "BLOB"
```
```
dbDataType(ANSI(), I(3))
```
```
## [1] "DOUBLE"
```
```
dbDataType(ANSI(), iris)
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## "DOUBLE" "DOUBLE" "DOUBLE" "DOUBLE" "TEXT"
```
The [DBI specification](https://cran.r-project.org/web/packages/DBI/vignettes/spec.html) provides extensive documentation that is worth digesting if you intend to work with a DBMS from R. As you work through the examples in this book, you will also want to refer to the following resources:
* RStudio’s [Databases using R](https://db.rstudio.com) site describes many of the technical details involved.
* The [RStudio community](https://community.rstudio.com/tags/database) is an excellent place to ask questions or study what has been discussed previously.
### A.8\.3 PostgreSQL and connection parameters
An **important detail:** We use a PostgreSQL database server running in a Docker container for the database functions. It is installed inside Docker, so you do not have to download or install it yourself. To connect to it, you have to define some parameters. These parameters are used in two places:
1. When the Docker container is created, they’re used to initialize the database, and
2. Whenever we connect to the database, we need to specify them to authenticate.
We define the parameters in an environment file that R reads when starting up. The file is called `.Renviron`, and is located in your home directory. See the discussion of [securing and using dbms credentials](chapter-appendix-postresql-authentication.html#chapter_appendix-postresql-authentication).
### A.8\.4 Connecting the R and DBMS environments
Although everything happens on one machine in our Docker / PostgreSQL playground, in real life R and PostgreSQL (or other DBMS) will be in different environments on separate machines. How R connects them gives you control over where the work happens. You need to be aware of the differences beween the R and DBMS environments as well as how you can leverage the strengths of each one.
**Characteristics of local vs. server processing**
| Dimension | Local | Remote |
| --- | --- | --- |
| Design purpose | The R environment on your local machine is designed to be flexible and easy to use; ideal for data investigation. | The DBMS environment is designed for large and complex databases where data integrity is more important than flexibility or ease of use. |
| Processor power | Your local machine has less memory, speed, and storage than the typical database server. | Database servers are specialized, more expensive, and have more power. |
| Memory constraint | In R, query results must fit into memory. | Servers have a lot of memory and write intermediate results to disk if needed without you knowing about it. |
| Data crunching | Data lives in the DBMS, so crunching it down locally requires you to pull it over the network. | A DBMS has powerful data crunching capabilities once you know what you want and moves data over the server backbone to crunch it. |
| Security | Local control. Whether it is good or not depends on you. | Responsibility of database administrators who set the rules. You play by their rules. |
| Storage of intermediate results | Very easy to save a data frame with intermediate results locally. | May require extra privileges to save results in the database. |
| Analytical resources | Ecosystem of available R packages | Extending SQL instruction set involves dbms\-specific functions or R pseudo functions |
| Collaboration | One person working on a few data.frames. | Many people collaborating on *many* tables. |
### A.8\.5 Using SQLite to simulate an enterprise DBMS
SQLite engine is embedded in one file, so that many tables are stored together in one object. SQL commands can run against an SQLite database as demonstrated in how many uses of SQLite are in the [RStudio `dbplyr` documentation](https://dbplyr.tidyverse.org).
### A.8\.1 SQL databases
Although there are [ANSI standards](https://en.wikipedia.org/wiki/SQL#Interoperability_and_standardization) for [SQL syntax](https://en.wikipedia.org/wiki/SQL_syntax), different implementations vary in enough details that R’s ability to customize queries for those implementations is very helpful.
The tables in a DBMS correspond to a data frame in R, so interaction with a DBMS is fairly natural for useRs.
SQL code is characterized by the fact that it describes *what* to retrieve, leaving the DBMS back end to determine how to do it. Therefore it has a *batch* feel. The pipe operator (`%>%`, which is read as *and then*) is inherently procedural when it’s used with dplyr: it can be used to construct queries step\-by\-step. Once a test dplyr query has been executed, it is easy to inspect the results and add steps with the pipe operator to refine or expand the query.
### A.8\.2 Data mapping between R vs SQL data types
The following code shows how different elements of the R bestiary are translated to and from ANSI standard data types. Note that R factors are translated as `TEXT` so that missing levels are ignored on the SQL side.
```
library(DBI)
dbDataType(ANSI(), 1:5)
```
```
## [1] "INT"
```
```
dbDataType(ANSI(), 1)
```
```
## [1] "DOUBLE"
```
```
dbDataType(ANSI(), TRUE)
```
```
## [1] "SMALLINT"
```
```
dbDataType(ANSI(), Sys.Date())
```
```
## [1] "DATE"
```
```
dbDataType(ANSI(), Sys.time())
```
```
## [1] "TIMESTAMP"
```
```
dbDataType(ANSI(), Sys.time() - as.POSIXct(Sys.Date()))
```
```
## [1] "TIME"
```
```
dbDataType(ANSI(), c("x", "abc"))
```
```
## [1] "TEXT"
```
```
dbDataType(ANSI(), list(raw(10), raw(20)))
```
```
## [1] "BLOB"
```
```
dbDataType(ANSI(), I(3))
```
```
## [1] "DOUBLE"
```
```
dbDataType(ANSI(), iris)
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## "DOUBLE" "DOUBLE" "DOUBLE" "DOUBLE" "TEXT"
```
The [DBI specification](https://cran.r-project.org/web/packages/DBI/vignettes/spec.html) provides extensive documentation that is worth digesting if you intend to work with a DBMS from R. As you work through the examples in this book, you will also want to refer to the following resources:
* RStudio’s [Databases using R](https://db.rstudio.com) site describes many of the technical details involved.
* The [RStudio community](https://community.rstudio.com/tags/database) is an excellent place to ask questions or study what has been discussed previously.
### A.8\.3 PostgreSQL and connection parameters
An **important detail:** We use a PostgreSQL database server running in a Docker container for the database functions. It is installed inside Docker, so you do not have to download or install it yourself. To connect to it, you have to define some parameters. These parameters are used in two places:
1. When the Docker container is created, they’re used to initialize the database, and
2. Whenever we connect to the database, we need to specify them to authenticate.
We define the parameters in an environment file that R reads when starting up. The file is called `.Renviron`, and is located in your home directory. See the discussion of [securing and using dbms credentials](chapter-appendix-postresql-authentication.html#chapter_appendix-postresql-authentication).
### A.8\.4 Connecting the R and DBMS environments
Although everything happens on one machine in our Docker / PostgreSQL playground, in real life R and PostgreSQL (or other DBMS) will be in different environments on separate machines. How R connects them gives you control over where the work happens. You need to be aware of the differences beween the R and DBMS environments as well as how you can leverage the strengths of each one.
**Characteristics of local vs. server processing**
| Dimension | Local | Remote |
| --- | --- | --- |
| Design purpose | The R environment on your local machine is designed to be flexible and easy to use; ideal for data investigation. | The DBMS environment is designed for large and complex databases where data integrity is more important than flexibility or ease of use. |
| Processor power | Your local machine has less memory, speed, and storage than the typical database server. | Database servers are specialized, more expensive, and have more power. |
| Memory constraint | In R, query results must fit into memory. | Servers have a lot of memory and write intermediate results to disk if needed without you knowing about it. |
| Data crunching | Data lives in the DBMS, so crunching it down locally requires you to pull it over the network. | A DBMS has powerful data crunching capabilities once you know what you want and moves data over the server backbone to crunch it. |
| Security | Local control. Whether it is good or not depends on you. | Responsibility of database administrators who set the rules. You play by their rules. |
| Storage of intermediate results | Very easy to save a data frame with intermediate results locally. | May require extra privileges to save results in the database. |
| Analytical resources | The full ecosystem of available R packages. | Extending the SQL instruction set requires DBMS\-specific functions or R pseudo\-functions. |
| Collaboration | One person working on a few data.frames. | Many people collaborating on *many* tables. |
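To make this concrete, here is a minimal sketch (not the book's exact workflow) of connecting from R with `DBI`/`RPostgres` and letting the DBMS do the heavy lifting before pulling a small result into R. The host, port, database name, environment variable names, and the table and column names (`some_table`, `category`) are all placeholders:
```
library(DBI)
library(dplyr)

# connect with RPostgres; connection details are placeholders
con <- dbConnect(RPostgres::Postgres(),
                 host     = "localhost",
                 port     = 5432,
                 dbname   = "postgres",
                 user     = Sys.getenv("DEFAULT_POSTGRES_USER_NAME"),
                 password = Sys.getenv("DEFAULT_POSTGRES_PASSWORD"))

# dbplyr translates this pipeline to SQL and runs it on the server;
# only the small summary crosses the network when collect() is called
summary_df <- tbl(con, "some_table") %>%
  group_by(category) %>%
  summarize(n = n()) %>%
  collect()

dbDisconnect(con)
```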
### A.8\.5 Using SQLite to simulate an enterprise DBMS
The SQLite engine is embedded in a single file, so many tables are stored together in one object. SQL commands can be run against an SQLite database from R, as demonstrated by the many SQLite examples in the [RStudio `dbplyr` documentation](https://dbplyr.tidyverse.org).
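For example, this minimal sketch (not taken from the dbplyr documentation) creates a throw\-away in\-memory SQLite database, loads the built\-in `mtcars` data frame, and queries it with SQL:
```
library(DBI)
# ":memory:" creates a transient database that vanishes on disconnect
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "mtcars", mtcars)
dbGetQuery(con, "SELECT cyl, COUNT(*) AS n, AVG(mpg) AS avg_mpg
                  FROM mtcars GROUP BY cyl")
dbDisconnect(con)
```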
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-appendix-setup-instructions.html |
B \- Setup instructions
=======================
> This appendix explains:
>
>
> * Hardware and software prerequisites for setting up the sandbox used in this book
> * Documentation for all of the elements used in this sandbox
B.1 Sandbox prerequisites
-------------------------
The sandbox environment requires:
* A computer running
+ Windows (Windows 7 64\-bit or later \- Windows 10\-Pro is recommended),
+ MacOS, or
+ Linux (any Linux distro that will run Docker Community Edition, R and RStudio will work)
* Current versions of [R and RStudio](https://www.datacamp.com/community/tutorials/installing-R-windows-mac-ubuntu) (Vargas [2018](#ref-Vargas2018)) are required.
* Docker (instructions below)
* Our companion package `sqlpetr` (Borasky et al. [2018](#ref-Borasky2018a))
The database we use is PostgreSQL 11, but you do not need to install it \- it’s installed via a Docker image.
In addition to the current version of R and RStudio, you will need current versions of the following packages:
* `DBI` (R Special Interest Group on Databases (R\-SIG\-DB), Wickham, and Müller [2019](#ref-R-DBI))
* `DiagrammeR` (Iannone [2020](#ref-R-DiagrammeR))
* `RPostgres` (Wickham, Ooms, and Müller [2019](#ref-R-RPostgres))
* `dbplyr` (Wickham and Ruiz [2019](#ref-R-dbplyr))
* `devtools` (Wickham, Hester, and Chang [2019](#ref-R-devtools))
* `downloader` (Chang [2015](#ref-R-downloader))
* `glue` (Hester [2019](#ref-R-glue))
* `here` (Müller [2017](#ref-R-here))
* `knitr` (Xie [2020](#ref-R-knitr)[b](#ref-R-knitr))
* `skimr` (Waring et al. [2019](#ref-R-skimr))
* `tidyverse` (Wickham [2019](#ref-R-tidyverse))
* `bookdown` (Xie [2020](#ref-R-bookdown)[a](#ref-R-bookdown)) (for compiling the book, if you want to)
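A one\-time installation sketch for the packages listed above (the GitHub path for `sqlpetr` is an assumption here; check that package's own README for the authoritative instructions):
```
install.packages(c(
  "DBI", "DiagrammeR", "RPostgres", "dbplyr", "devtools", "downloader",
  "glue", "here", "knitr", "skimr", "tidyverse", "bookdown"
))
# companion package from GitHub; repository path assumed
devtools::install_github("smithjd/sqlpetr")
```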
B.2 R, RStudio and Git
----------------------
Most readers will probably have these already, but if not:
1. If you do not have R:
* Go to <https://cran.rstudio.com/> (R Core Team [2018](#ref-RCT2018)).
* Select the download link for your system. For Linux, choose your distro. We recommend Ubuntu 18\.04 LTS “Bionic Beaver”. It’s much easier to find support answers on the web for Ubuntu than other distros.
* Follow the instructions.
* Note: if you already have R, make sure it’s upgraded to at least R 3\.5\.1\. We don’t test on older versions!
2. If you do not have RStudio: go to [https://www.rstudio.com/products/rstudio/download/\#download](https://www.rstudio.com/products/rstudio/download/#download). Make sure you have version 1\.1\.463 or later.
3. If you do not have Git:
* On Windows, go to [https://git\-scm.com/download/win](https://git-scm.com/download/win) and follow instructions. There are a lot of options. Just pick the defaults!!!
* On MacOS, go to [https://sourceforge.net/projects/git\-osx\-installer/files/](https://sourceforge.net/projects/git-osx-installer/files/) and follow instructions.
* On Linux, install Git from your distribution.
B.3 Install Docker
------------------
Installation depends on your operating system and we have found that it can be somewhat intricate. You will need Docker Community Edition (Docker CE):
* For Windows, [consider these issues and follow these instructions](#windows-tech-details): Go to [https://store.docker.com/editions/community/docker\-ce\-desktop\-windows](https://store.docker.com/editions/community/docker-ce-desktop-windows). If you don’t have a Docker Store login, you’ll need to create one. Then:
+ If you have Windows 10 Pro, download and install Docker for Windows.
+ If you have an older version of Windows, download and install Docker Toolbox (<https://docs.docker.com/toolbox/overview/>).
+ Note that both versions require 64\-bit hardware and the virtualization needs to be enabled in the firmware.
* [On a Mac](https://docs.docker.com/docker-for-mac/install/) (Docker [2018](#ref-Docker2018c)[c](#ref-Docker2018c)): Go to [https://store.docker.com/editions/community/docker\-ce\-desktop\-mac](https://store.docker.com/editions/community/docker-ce-desktop-mac). If you don’t have a Docker Store login, you’ll need to create one. Then download and install Docker for Mac. Your MacOS must be at least release Yosemite (10\.10\.3\).
* [On UNIX flavors](https://docs.docker.com/install/#supported-platforms) (Docker [2018](#ref-Docker2018b)[a](#ref-Docker2018b)): note that, as with Windows and MacOS, you’ll need a Docker Store login. Although most Linux distros ship with some version of Docker, chances are it’s not the same as the official Docker CE version.
+ Ubuntu: [https://store.docker.com/editions/community/docker\-ce\-server\-ubuntu](https://store.docker.com/editions/community/docker-ce-server-ubuntu),
+ Fedora: [https://store.docker.com/editions/community/docker\-ce\-server\-fedora](https://store.docker.com/editions/community/docker-ce-server-fedora),
+ CentOS: [https://store.docker.com/editions/community/docker\-ce\-server\-centos](https://store.docker.com/editions/community/docker-ce-server-centos),
+ Debian: [https://store.docker.com/editions/community/docker\-ce\-server\-debian](https://store.docker.com/editions/community/docker-ce-server-debian).
***Note that on Linux, you will need to be a member of the `docker` group to use Docker.*** To do that, execute `sudo usermod -aG docker ${USER}`. Then, log out and back in again.
| Data Databases and Engineering |
smithjd.github.io | https://smithjd.github.io/sql-pet/chapter-appendix-postgres-local-db-installation.html |
C Appendix E \- Install `adventureworks` on your own machine
============================================================
> This appendix demonstrates how to:
>
>
> * Setup the `adventureworks` database locally on your machine
> * Connect to the `adventureworks` database
> * These instructions should be tested by a Windows user
> * The PostgreSQL tutorial links do not work, despite being pasted from the site
C.1 Overview
------------
This appendix details the process to download and restore the `adventureworks` database so that you can work with the database locally on your own machine. This tutorial assumes that (1\) you have PostgreSQL installed on your computer, and (2\) that you have configured your system to run psql at the command line. Installation of PostgreSQL and configuration of psql are outside the scope of this book.
### C.1\.1 Download the `adventureworks` database
Download the `adventureworks` database from [here](https://github.com/smithjd/sql-pet/blob/master/book-src/adventureworks.sql).
### C.1\.2 Restore the `adventureworks` database at the command line
1. Launch the psql tool
2. Enter account information to log into the PostgreSQL database server, if prompted
3. Enter the following command to create a new database: `CREATE DATABASE adventureworks;`
4. Open a **new terminal window** (not in psql) and navigate to the folder where the `adventureworks.sql` file is located. Use the `cd` command in the terminal, followed by the file path to change directories to the location of `adventureworks.sql`. For example: `cd /Users/username/Documents/adventureworks`.
5. Restore the database. Since `adventureworks.sql` is a plain SQL script (as its `.sql` extension suggests), use `psql` rather than `pg_restore`: `psql -U postgres -d adventureworks -f adventureworks.sql`. (For an archive\-format dump you would instead use `pg_restore -U postgres -d adventureworks <dumpfile>`.)
### C.1\.3 Restore the `adventureworks` database using pgAdmin
Another option to restore the `adventureworks` database locally on your machine is with the pgAdmin graphical user interface. However, we highly recommend using the command line methods detailed above. Installation and configuration of pgAdmin is outside the scope of this book.
C.2 Resources
-------------
* [Instructions by PostgreSQL Tutorial](https://www.postgresqltutorial.com/load-postgresql-sample-database/) to load the `dvdrental` database. (PostgreSQL Tutorial Website 2019\).
* [Windows installation of PostgreSQL](https://www.postgresqltutorial.com/install-postgresql/) by PostgreSQL Tutorial. (PostgreSQL Tutorial Website 2019\).
* [Installation of PostgreSQL on a Mac](https://postgresapp.com/) using Postgres.app. (Postgres.app 2019\).
* [Command line configuration of PostgreSQL on a Mac](https://postgresapp.com/documentation/cli-tools.html) with Postgres.app. (Postgres.app 2019\).
* [Installing PostgreSQL for Linux, Arch Linux, Windows, Mac](http://postgresguide.com/setup/install.html) and other operating systems, by Postgres Guide. (Postgres Guide Website 2019\).
| Data Databases and Engineering |
bluefoxr.github.io | https://bluefoxr.github.io/COINrDoc/appendix-analysing-a-composite-indicator-example.html |
Chapter 19 Appendix: Analysing a Composite Indicator Example
============================================================
Here some possible steps for analysing a composite indicator are given. The format for doing this might depend on what you are trying to accomplish. If you want to analyse your own composite indicator as you are building it, this should be integrated with the previous [Appendix: Building a Composite Indicator Example](appendix-building-a-composite-indicator-example.html#appendix-building-a-composite-indicator-example). If you are analysing an existing composite indicator built by someone else, you may wish to do this in a separate R Markdown document, for example.
19\.1 Loading data
------------------
If the analysis is part of constructing a composite indicator in COINr, you should have a COIN assembled following the steps in the previous chapter. However, if you are analysing an existing composite indicator, the situation may differ slightly: you may, for example, be provided with indicator data that is already normalised and aggregated.
COINr’s `assemble()` function allows you to assemble a COIN with pre\-aggregated data. This is done by setting `preagg = TRUE`. If this is enabled, COINr will create a COIN with a data set called “PreAggregated”. To do this, you need a pre\-aggregated data set which includes (possibly normalised) columns for all indicators *and* for all aggregates.
To demonstrate, we can use a pre\-aggregated data set created by COINr.
```
library(COINr6)
# build ASEM index
ASEM <- build_ASEM()
# extract aggregated data set (as a data frame)
Aggregated_Data <- ASEM$Data$Aggregated
# assemble new COIN only including pre-aggregated data
ASEM_preagg <- assemble(IndData = Aggregated_Data,
IndMeta = ASEMIndMeta,
AggMeta = ASEMAggMeta,
preagg = TRUE)
```
COINr will check that the column names in the indicator data correspond to the codes supplied in the `IndMeta` and `AggMeta`. This means that these two latter data frames still need to be supplied. However, from this point the COIN functions as any other, although consider that it cannot be regenerated (the methodology to arrive at the pre\-aggregated data is unknown), and the only data set present is the “PreAggregated” data.
19\.2 Check calculations
------------------------
If you are using pre\-aggregated data, you may wish to check the calculations to make sure that they are correct. If you additionally have raw data, and you know the methodology used to build the index, you can recreate this by rebuilding the index in COINr. If you only have normalised and aggregated data, you can still at least check the aggregation stage as follows. Assuming that the indicator columns in your pre\-aggregated data are normalised, we can first manually create a normalised data set:
```
library(dplyr)
ASEM_preagg$Data$Normalised <- ASEM_preagg$Data$PreAggregated %>%
select(!ASEM$Input$AggMeta$Code)
```
Here we have just copied the pre\-aggregated data, but removed any aggregation columns.
Next, we can aggregate these columns using COINr.
```
ASEM_preagg <- aggregate(ASEM_preagg, dset = "Normalised", agtype = "arith_mean")
# check data set names
names(ASEM_preagg$Data)
## [1] "PreAggregated" "Normalised" "Aggregated"
```
Finally, we can check to see whether these data frames are the same. There are many possible ways of doing this, but a simple way is to use dplyr’s `all_equal()` function.
```
all_equal(ASEM_preagg$Data$PreAggregated,
ASEM_preagg$Data$Aggregated)
## [1] TRUE
```
As expected, here the results are the same. If the results are *not* the same, `all_equal()` will give some information about the differences. If you reconstruct the index from raw data, and you find differences, a few points are worth considering:
1. The difference could be due to an error in the pre\-aggregated data, or even a bug in COINr. If you suspect the latter please open an issue on the repo.
2. If you have used data treatment or imputation, differences can easily arise. One reason is that some things are possible to calculate in different ways. COINr uses certain choices, but other choices are also valid. Examples of this include:
* Skew and kurtosis (underlying data treatment) \- see e.g. `?e1071::skewness`
* Correlation and treatment of missing values \- see `?cor`
* Ranks and how to handle ties \- see `?rank`
3. Errors can also arise from how you entered the data. Worth re\-checking all that as well.
Double\-checking calculations is tedious but in the process you often learn a lot.
19\.3 Indicator statistics
--------------------------
One next step is to check the indicator statistics. If you have the raw data, it is probably advisable to do this both on the raw data, and the normalised/aggregated data. Here, we will just do this on the aggregated data (but this works on any data set, e.g. also the “PreAggregated” and “Normalised” data sets created above):
```
ASEM <- getStats(ASEM, dset = "Aggregated")
## Number of collinear indicators = 5
## Number of signficant negative indicator correlations = 396
## Number of indicators with high denominator correlations = 0
# view stats table
# this is rounded first (to make it easier to view), then sent to reactable to make an interactive table
# you can also view this in R Studio or use any other table viewer to look at it.
ASEM$Analysis$Aggregated$StatTable %>%
roundDF() %>%
reactable::reactable()
```
Particular things of interest here will be whether indicators or aggregates are highly skewed, the percentage of missing data for each indicator, the percentage of unique values and zeroes, the presence of correlations with denominators, and negative correlations. Recall that `getStats()` allows you to change thresholds for flagging outliers and high/low correlations.
Another output of `getStats()` is correlation matrices, which are also found in `ASEM$Analysis$Aggregated`. A good reason to include these here is that as part of the COIN, we can export all analysis to Excel, if needed (see the end of this chapter). However, for viewing correlations directly, COINr has dedicated functions \- see the next section.
Before arriving at correlations, let’s also check data availability in more detail. The function of interest here is `checkData()`.
```
ASEM <- checkData(ASEM, dset = "Raw")
# view missing data by group
ASEM$Analysis$Raw$MissDatByGroup %>%
roundDF %>%
reactable::reactable()
```
This adds analysis tables to the `ASEM$Analysis` folder (analysis always appears under the respective “dset” name). We have applied the analysis to the raw data set because in later steps, the data is imputed. Here, the missing data is given by each aggregation group. This helps to flag, for instance, cases where there is very low data availability for a unit in a given aggregation group. We can check the minimum:
```
# get minimum, exclude first column (not numerical)
min(ASEM$Analysis$Raw$MissDatByGroup[-1])
## [1] 55.55556
```
This shows that in the worst case, there is more than 50% data availability for each unit in each aggregation group.
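If you want to know which unit and aggregation group that minimum belongs to, here is a small sketch (it assumes, as in the call above, that the first column of `MissDatByGroup` holds the unit codes and the remaining columns are numeric):
```
availability <- ASEM$Analysis$Raw$MissDatByGroup
# comparing the data frame with the scalar minimum gives a logical matrix
idx <- which(availability[-1] == min(availability[-1]), arr.ind = TRUE)
# unit(s) and group(s) with the lowest data availability
availability[[1]][idx[, "row"]]
colnames(availability[-1])[idx[, "col"]]
```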
19\.4 Multivariate analysis
---------------------------
This section will overlap to some extent with the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) chapter. In any case we will summarise a few options here at the risk of repetition.
Perhaps the most useful way to view correlations for an aggregated index is to look at correlations of each indicator or aggregate with its parents. This is done quickly in COINr with the `plotCorr()` function.
```
plotCorr(ASEM, dset = "Aggregated", showvals = T, withparent = "family", flagcolours = TRUE)
```
This shows at a glance where some problems may lie. In particular we learn that the TBT indicator is negatively correlated with all its parent groups. We can also see that the “Forest” indicator is insignificantly correlated with its parents. This table is generated here for indicators, but it can also be generated for other levels by setting the `aglevs` argument.
```
plotCorr(ASEM, dset = "Aggregated", showvals = T, withparent = "family", flagcolours = TRUE, aglevs = 2)
```
Recall also that this function can return data frames for your own processing or presentation, rather than figures, by changing the `out2` argument.
You may also want to view correlations between indicators or within specific groups. The `plotCorr()` function is flexible in this respect.
```
# plot correlations within a specific pillar (Physical)
plotCorr(ASEM, dset = "Raw", icodes = "Physical", aglevs = 1, cortype = "spearman", pval = 0)
```
You may wish to generate these correlation plots for each major aggregation group. If any specific correlations are of interest, we can generate (for example) a scatter plot. Technical barriers to trade (TBTs) are shown in particular to be negatively correlated with the overall index and all parent levels.
```
iplotIndDist2(ASEM, dsets = c("Raw", "Aggregated"), icodes = "TBTs", aglevs = c(1,4), ptype = "Scatter")
```
Recall here that we have plotted a “Raw” indicator against the index, and this is a negative indicator whose direction is only reversed at the normalisation stage. In this plot there is therefore a positive correlation, whereas plotting the normalised indicator against the index would show a negative correlation.
Next we can check the internal consistency of the indicators using Cronbach’s alpha. Remember that this can be done for any group of indicators \- e.g. we could do it for all indicators, or target specific groups or also check the consistency of aggregated values. Examples:
```
# all indicators
getCronbach(ASEM, dset = "Normalised")
## [1] 0.8985903
# indicators in connectivity sub-index
getCronbach(ASEM, dset = "Normalised", icodes = "Conn", aglev = 1)
## [1] 0.8805543
# indicators in sustainability sub-index
getCronbach(ASEM, dset = "Normalised", icodes = "Sust", aglev = 1)
## [1] 0.6794643
# pillars in connectivity sub-index
getCronbach(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 2)
## [1] 0.7827171
# pillars in sustainability sub-index
getCronbach(ASEM, dset = "Aggregated", icodes = "Sust", aglev = 2)
## [1] -0.289811
```
Note that it makes more sense to do correlation analysis on the normalised data, because indicators have had their directions reversed where appropriate. But what’s going on with the consistency of the sustainability pillars? We can check:
```
plotCorr(ASEM, dset = "Aggregated", icodes = "Sust", aglevs = 2, pval = 0)
```
Sustainability dimensions are not well\-correlated and are in fact slightly negatively correlated. This points to trade\-offs between different aspects of sustainable development: as social sustainability increases, environmental sustainability often tends to decrease. Or at best, an increase in one does not really imply an increase in the others.
Finally for the multivariate analysis, we may wish to run a principal component analysis. As with Cronbach’s alpha, we can do this on any group or level of indicators, so there are many possibilities. In the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) chapter, this was done at the pillar level. We can also try here at the indicator level, let’s say within one of the pillar groups:
```
PCA_P2P <- getPCA(ASEM, dset = "Normalised", icodes = "P2P", aglev = 1, out2 = "list")
summary(PCA_P2P$PCAresults$P2P$PCAres)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 1.9896 1.0164 0.9477 0.9067 0.72952 0.59923 0.46175
## Proportion of Variance 0.4948 0.1291 0.1123 0.1028 0.06653 0.04488 0.02665
## Cumulative Proportion 0.4948 0.6239 0.7362 0.8390 0.90548 0.95037 0.97702
## PC8
## Standard deviation 0.42878
## Proportion of Variance 0.02298
## Cumulative Proportion 1.00000
```
We can see that the first principal component explains about 50% of the variance of the indicators, which is perhaps borderline evidence for a single latent variable. That said, many composite indicators do not yield strong latent variables.
We can now produce a PCA biplot using this information.
```
# install ggbiplot if you don't have it
# library(devtools)
# install_github("vqv/ggbiplot")
library(ggbiplot)
ggbiplot(PCA_P2P$PCAresults$P2P$PCAres,
labels = ASEM$Data$Normalised$UnitCode,
groups = ASEM$Data$Normalised$Group_EurAsia)
```
Once again we see a fairly clear divide between Asia and Europe in terms of P2P connectivity, with the exception of Singapore, which is very well\-connected. We also note the small cluster of New Zealand and Australia, which have very similar characteristics in P2P connectivity.
19\.5 Weights
-------------
Another worthwhile check is to understand the effective weight of each indicator. A sometimes under\-appreciated fact is that the weight of an indicator in the final index depends not only on its own weight, but also on the weights of all its parents and on the number of indicators and aggregates in each group. For example, the indicators in an equally\-weighted group of two each carry a higher weight (0\.5\) than those in an equally\-weighted group of ten (0\.1\), and this applies at all aggregation levels.
The function `effectiveWeight()` gives effective weights for all levels. We can check the indicator level by filtering:
```
EffWts <- effectiveWeight(ASEM)
IndWts <- EffWts$EffectiveWeightsList %>%
filter(AgLevel == 1)
head(IndWts)
## AgLevel Code EffectiveWeight
## 1 1 Goods 0.02000000
## 2 1 Services 0.02000000
## 3 1 FDI 0.02000000
## 4 1 PRemit 0.02000000
## 5 1 ForPort 0.02000000
## 6 1 CostImpEx 0.01666667
```
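As a quick sanity check of the multiplication described above (assuming equal weights at every level, as in the original ASEM weighting): an indicator that is one of five in its pillar, in a pillar that is one of five in its sub\-index, in an index with two sub\-indices, has an effective weight of
```
# 1/(indicators in pillar) * 1/(pillars in sub-index) * 1/(sub-indices in index)
(1/5) * (1/5) * (1/2)
## [1] 0.02
```
which matches the 0\.02 values shown above.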
We might want to know the maximum and minimum effective weights and which indicators are involved:
```
IndWts %>%
filter(EffectiveWeight %in% c(min(IndWts$EffectiveWeight), max(IndWts$EffectiveWeight)))
## AgLevel Code EffectiveWeight
## 1 1 StMob 0.01250000
## 2 1 Research 0.01250000
## 3 1 Pat 0.01250000
## 4 1 CultServ 0.01250000
## 5 1 CultGood 0.01250000
## 6 1 Tourist 0.01250000
## 7 1 MigStock 0.01250000
## 8 1 Lang 0.01250000
## 9 1 LPI 0.01250000
## 10 1 Flights 0.01250000
## 11 1 Ship 0.01250000
## 12 1 Bord 0.01250000
## 13 1 Elec 0.01250000
## 14 1 Gas 0.01250000
## 15 1 ConSpeed 0.01250000
## 16 1 Cov4G 0.01250000
## 17 1 Embs 0.03333333
## 18 1 IGOs 0.03333333
## 19 1 UNVote 0.03333333
## 20 1 Renew 0.03333333
## 21 1 PrimEner 0.03333333
## 22 1 CO2 0.03333333
## 23 1 MatCon 0.03333333
## 24 1 Forest 0.03333333
## 25 1 PubDebt 0.03333333
## 26 1 PrivDebt 0.03333333
## 27 1 GDPGrow 0.03333333
## 28 1 RDExp 0.03333333
## 29 1 NEET 0.03333333
```
An easy way to visualise the effective weights is to call the `plotframework()` function:
```
plotframework(ASEM)
```
19\.6 What\-ifs
---------------
Due to the uncertainty in composite indicators, it may often be useful to know how the scores and ranks would change under an alternative formulation of the index. Examples could include different weights, as well as adding/removing/substituting indicators or entire aggregation groups.
To give an example, let’s imagine that some stakeholders claim that Political connectivity is not relevant to the overall concept. What would happen if it were removed? There are at least three ways we can check this:
1. Use the “exclude” argument of `assemble()` to exclude the relevant indicators
2. Set the weight of the Political pillar to zero
3. Remove the indicators manually
Here we will take the quickest option, which is Option 2\.
```
# Copy the COIN
ASEM_NoPolitical <- ASEM
# Copy the weights
ASEM_NoPolitical$Parameters$Weights$NoPolitical <- ASEM_NoPolitical$Parameters$Weights$Original
# Set Political weight to zero
ASEM_NoPolitical$Parameters$Weights$NoPolitical$Weight[
ASEM_NoPolitical$Parameters$Weights$NoPolitical$Code == "Political"] <- 0
# Alter methodology to use new weights
ASEM_NoPolitical$Method$aggregate$agweights <- "NoPolitical"
# Regenerate
ASEM_NoPolitical <- regen(ASEM_NoPolitical)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
```
Now we need to compare the two alternative indexes:
```
compTable(ASEM, ASEM_NoPolitical, dset = "Aggregated", isel = "Index",
COINnames = c("Original", "NoPolitical"))
## UnitCode UnitName Rank: Original Rank: NoPolitical RankChange
## 46 SGP Singapore 14 5 9
## 33 MLT Malta 10 3 7
## 9 CYP Cyprus 29 23 6
## 13 ESP Spain 19 25 -6
## 14 EST Estonia 22 16 6
## 31 LUX Luxembourg 8 2 6
## 6 BRN Brunei Darussalam 40 35 5
## 11 DEU Germany 9 14 -5
## 16 FRA France 21 26 -5
## 25 JPN Japan 34 39 -5
## 29 LAO Lao PDR 48 43 5
## 2 AUT Austria 7 11 -4
## 32 LVA Latvia 23 19 4
## 34 MMR Myanmar 41 37 4
## 35 MNG Mongolia 44 40 4
## 37 NLD Netherlands 2 6 -4
## 41 PHL Philippines 38 42 -4
## 49 SWE Sweden 6 10 -4
## 3 BEL Belgium 5 8 -3
## 21 IDN Indonesia 43 46 -3
## 22 IND India 45 48 -3
## 24 ITA Italy 28 31 -3
## 30 LTU Lithuania 16 13 3
## 38 NOR Norway 4 7 -3
## 39 NZL New Zealand 33 30 3
## 47 SVK Slovakia 24 21 3
## 15 FIN Finland 13 15 -2
## 17 GBR United Kingdom 15 17 -2
## 19 HRV Croatia 18 20 -2
## 20 HUN Hungary 20 22 -2
## 26 KAZ Kazakhstan 47 45 2
## 36 MYS Malaysia 39 41 -2
## 42 POL Poland 26 28 -2
## 48 SVN Slovenia 11 9 2
## 50 THA Thailand 42 44 -2
## 51 VNM Vietnam 36 38 -2
## 1 AUS Australia 35 34 1
## 4 BGD Bangladesh 46 47 -1
## 5 BGR Bulgaria 30 29 1
## 8 CHN China 49 50 -1
## 10 CZE Czech Republic 17 18 -1
## 12 DNK Denmark 3 4 -1
## 18 GRC Greece 32 33 -1
## 27 KHM Cambodia 37 36 1
## 28 KOR Korea 31 32 -1
## 40 PAK Pakistan 50 49 1
## 44 ROU Romania 25 24 1
## 7 CHE Switzerland 1 1 0
## 23 IRL Ireland 12 12 0
## 43 PRT Portugal 27 27 0
## 45 RUS Russian Federation 51 51 0
## AbsRankChange
## 46 9
## 33 7
## 9 6
## 13 6
## 14 6
## 31 6
## 6 5
## 11 5
## 16 5
## 25 5
## 29 5
## 2 4
## 32 4
## 34 4
## 35 4
## 37 4
## 41 4
## 49 4
## 3 3
## 21 3
## 22 3
## 24 3
## 30 3
## 38 3
## 39 3
## 47 3
## 15 2
## 17 2
## 19 2
## 20 2
## 26 2
## 36 2
## 42 2
## 48 2
## 50 2
## 51 2
## 1 1
## 4 1
## 5 1
## 8 1
## 10 1
## 12 1
## 18 1
## 27 1
## 28 1
## 40 1
## 44 1
## 7 0
## 23 0
## 43 0
## 45 0
```
The results show that the rank changes are not major at the index level, with a maximum shift of nine places (Singapore). The implication might be (depending on context) that the inclusion or not of the Political pillar does not have a drastic impact on the results, although one should bear in mind that the changes at lower aggregation levels are probably larger, and the indicators themselves may have value and add legitimacy to the framework. In other words, it is not always just the index that counts.
You can dig deeper investigating what\-ifs with all kinds of adjustments. If you want to test the overall effect of uncertainties, rather than specific cases, an uncertainty and sensitivity analysis is very useful. This is explained in the [Sensitivity analysis](sensitivity-analysis.html#sensitivity-analysis) chapter and will not be repeated here.
19\.7 Summing up
----------------
COINr offers all kinds of possibilities for analysis of a composite indicator, although to access most of them you need to assemble the indicator data into a COIN. As mentioned, I would recommend performing the analysis in an R Markdown document, then knitting the document to Word, PDF or HTML depending on the format in which you need to present the work. This can save a lot of time in exporting figures, and if changes are made to the index, you can easily rerun everything. Consider also that if your output is Word, you can use a formatting template to pre\-format the document.
*To be completed*
19\.1 Loading data
------------------
If the analysis is part of constructing a composite indicator in COINr, you should have a COIN assembled following the steps in the previous chapter. However, if you are analysing an existing composite indicator, there may be a slight difference. It is possible, for example, that you may be provided with indicator data, which may already be normalised and aggregated.
COINr’s `assemble()` function allows you to assemble a COIN with pre\-aggregated data. This is done by setting `preagg = TRUE`. If this is enabled, COINr will create a COIN with a data set called “PreAggregated”. To do this, you need a pre\-aggregated data set which includes (possibly normalised) columns for all indicators *and* for all aggregates.
To demonstrate, we can use a pre\-aggregated data set created by COINr.
```
library(COINr6)
# build ASEM index
ASEM <- build_ASEM()
# extract aggregated data set (as a data frame)
Aggregated_Data <- ASEM$Data$Aggregated
# assemble new COIN only including pre-aggregated data
ASEM_preagg <- assemble(IndData = Aggregated_Data,
IndMeta = ASEMIndMeta,
AggMeta = ASEMAggMeta,
preagg = TRUE)
```
COINr will check that the column names in the indicator data correspond to the codes supplied in the `IndMeta` and `AggMeta`. This means that these two latter data frames still need to be supplied. However, from this point the COIN functions as any other, although consider that it cannot be regenerated (the methodology to arrive at the pre\-aggregated data is unknown), and the only data set present is the “PreAggregated” data.
19\.2 Check calculations
------------------------
If you are using pre\-aggregated data, you may wish to check the calculations to make sure that they are correct. If you additionally have raw data, and you know the methodology used to build the index, you can recreate this by rebuilding the index in COINr. If you only have normalised and aggregated data, you can still at least check the aggregation stage as follows. Assuming that the indicator columns in your pre\-aggregated data are normalised, we can first manually create a normalised data set:
```
library(dplyr)
ASEM_preagg$Data$Normalised <- ASEM_preagg$Data$PreAggregated %>%
select(!ASEM$Input$AggMeta$Code)
```
Here we have just copied the pre\-aggregated data, but removed any aggregation columns.
Next, we can aggregate these columns using COINr.
```
ASEM_preagg <- aggregate(ASEM_preagg, dset = "Normalised", agtype = "arith_mean")
# check data set names
names(ASEM_preagg$Data)
## [1] "PreAggregated" "Normalised" "Aggregated"
```
Finally, we can check to see whether these data frames are the same. There are many possible ways of doing this, but a simple way is to use dplyr’s `all_equal()` function.
```
all_equal(ASEM_preagg$Data$PreAggregated,
ASEM_preagg$Data$Aggregated)
## [1] TRUE
```
As expected, here the results are the same. If the results are *not* the same, `all_equal()` will give some information about the differences. If you reconstruct the index from raw data, and you find differences, a few points are worth considering:
1. The difference could be due to an error in the pre\-aggregated data, or even a bug in COINr. If you suspect the latter please open an issue on the repo.
2. If you have used data treatment or imputation, differences can easily arise. One reason is that some things are possible to calculate in different ways. COINr uses certain choices, but other choices are also valid. Examples of this include:
* Skew and kurtosis (underlying data treatment) \- see e.g. `?e1071::skewness`
* Correlation and treatment of missing values \- see `?cor`
* Ranks and how to handle ties \- see `?rank`
3. Errors can also arise from how you entered the data. Worth re\-checking all that as well.
Double\-checking calculations is tedious but in the process you often learn a lot.
19\.3 Indicator statistics
--------------------------
One next step is to check the indicator statistics. If you have the raw data, it is probably advisable to do this both on the raw data, and the normalised/aggregated data. Here, we will just do this on the aggregated data (but this works on any data set, e.g. also the “PreAggregated” and “Normalised” data sets created above):
```
ASEM <- getStats(ASEM, dset = "Aggregated")
## Number of collinear indicators = 5
## Number of signficant negative indicator correlations = 396
## Number of indicators with high denominator correlations = 0
# view stats table
# this is rounded first (to make it easier to view), then sent to reactable to make an interactive table
# you can also view this in R Studio or use any other table viewer to look at it.
ASEM$Analysis$Aggregated$StatTable %>%
roundDF() %>%
reactable::reactable()
```
Particular things of interest here will be whether indicators or aggregates are highly skewed, the percentage of missing data for each indicator, the percentage of unique values and zeroes, the presence of correlations with denominators, and negative correlations. Recall that `getStats()` allows you to change thresholds for flagging outliers and high/low correlations.
Another output of `getStats()` is correlation matrices, which are also found in `ASEM$Analysis$Aggregated`. A good reason to include these here is that as part of the COIN, we can export all analysis to Excel, if needed (see the end of this chapter). However, for viewing correlations directly, COINr has dedicated functions \- see the next section.
Before arriving at correlations, let’s also check data availability in more detail. The function of interest here is `checkData()`.
```
ASEM <- checkData(ASEM, dset = "Raw")
# view missing data by group
ASEM$Analysis$Raw$MissDatByGroup %>%
roundDF %>%
reactable::reactable()
```
This adds analysis tables to the `ASEM$Analysis` folder (analysis always appears under the respective “dset” name). We have applied the analysis to the raw data set because in later steps, the data is imputed. Here, the missing data is given by each aggregation group. This helps to flag, for instance, cases where there is very low data availability for a unit in a given aggregation group. We can check the minimum:
```
# get minimum, exclude first column (not numerical)
min(ASEM$Analysis$Raw$MissDatByGroup[-1])
## [1] 55.55556
```
This shows that in the worst case, there is more than 50% data availability for each unit in each aggregation group.
19\.4 Multivariate analysis
---------------------------
This section will overlap to some extent with the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) chapter. In any case we will summarise a few options here at the risk of repetition.
Perhaps the most useful way to view correlations for an aggregated index is to look at correlations of each indicator or aggregate with its parents. This is done quickly in COINr with the `plotCorr()` function.
```
plotCorr(ASEM, dset = "Aggregated", showvals = T, withparent = "family", flagcolours = TRUE)
```
This shows at a glance where some problems may lie. In particular we learn that the TBT indicator is negatively correlated with all its parent groups. We can also see that the “Forest” indicator is insignificantly correlated with its parents. This table is generated here for indicators, but can also generated for other levels by setting the `aglevs` argument.
```
plotCorr(ASEM, dset = "Aggregated", showvals = T, withparent = "family", flagcolours = TRUE, aglevs = 2)
```
Recall also that this function can return data frames for your own processing or presentation, rather than figures, by changing the `out2` argument.
You may also want to view correlations between indicators or within specific groups. The `plotCorr()` function is flexible in this respect.
```
# plot correlations within a specific pillar (Physical)
plotCorr(ASEM, dset = "Raw", icodes = "Physical", aglevs = 1, cortype = "spearman", pval = 0)
```
You may wish to generate these correlation plots for each major aggregation group. If any specific correlations are of interest, we can generate (for example) a scatter plot. Technical barriers to trade (TBTs) are shown in particular to be negatively correlated with the overall index and all parent levels.
```
iplotIndDist2(ASEM, dsets = c("Raw", "Aggregated"), icodes = "TBTs", aglevs = c(1,4), ptype = "Scatter")
```
Recall here that because we have plotted a “Raw” indicator against the index, and this is a negative indicator, its direction is flipped. Meaning that in this plot there is a positive correlation, but plotting the normalised indicator against the index would show a negative correlation.
Next we can check the internal consistency of the indicators using Cronbach’s alpha. Remember that this can be done for any group of indicators \- e.g. we could do it for all indicators, or target specific groups or also check the consistency of aggregated values. Examples:
```
# all indicators
getCronbach(ASEM, dset = "Normalised")
## [1] 0.8985903
# indicators in connectivity sub-index
getCronbach(ASEM, dset = "Normalised", icodes = "Conn", aglev = 1)
## [1] 0.8805543
# indicators in sustainability sub-index
getCronbach(ASEM, dset = "Normalised", icodes = "Sust", aglev = 1)
## [1] 0.6794643
# pillars in connectivity sub-index
getCronbach(ASEM, dset = "Aggregated", icodes = "Conn", aglev = 2)
## [1] 0.7827171
# pillars in sustainability sub-index
getCronbach(ASEM, dset = "Aggregated", icodes = "Sust", aglev = 2)
## [1] -0.289811
```
Note that it makes more sense to do correlation analysis on the normalised data, because indicators have had their directions reveresed where appropriate. But what’s going on with the consistency of the sustainability pillars? We can check:
```
plotCorr(ASEM, dset = "Aggregated", icodes = "Sust", aglevs = 2, pval = 0)
```
Sustainability dimensions are not well\-correlated and are in fact slightly negatively correlated. This points to trade\-offs between different aspects of sustainable development: as social sustainability increases, environmental sustainability often tends to decrease. Or at best, an increase in one does not really imply an increase in the others.
Finally for the multivariate analysis, we may wish to run a principle component analysis. As with Cronbach’s alpha, we can do this on any group or level of indicators, so there are many possibilities. In the [Multivariate analysis](appendix-analysing-a-composite-indicator-example.html#multivariate-analysis-1) chapter, this was done at the pillar level. We can also try here at the indicator level, let’s say within one of the pillar groups:
```
PCA_P2P <- getPCA(ASEM, dset = "Normalised", icodes = "P2P", aglev = 1, out2 = "list")
summary(PCA_P2P$PCAresults$P2P$PCAres)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 1.9896 1.0164 0.9477 0.9067 0.72952 0.59923 0.46175
## Proportion of Variance 0.4948 0.1291 0.1123 0.1028 0.06653 0.04488 0.02665
## Cumulative Proportion 0.4948 0.6239 0.7362 0.8390 0.90548 0.95037 0.97702
## PC8
## Standard deviation 0.42878
## Proportion of Variance 0.02298
## Cumulative Proportion 1.00000
```
We can see that the first principle component explains about 50% of the variance of the indicators, which is perhaps borderline for the existence of single latent variable. That said, many composite indicators will not yield strong latent variables in many cases.
We can now produce a PCA biplot using this information.
```
# install ggbiplot if you don't have it
# library(devtools)
# install_github("vqv/ggbiplot")
library(ggbiplot)
ggbiplot(PCA_P2P$PCAresults$P2P$PCAres,
labels = ASEM$Data$Normalised$UnitCode,
groups = ASEM$Data$Normalised$Group_EurAsia)
```
Once again we see a fairly clear divide between Asia and Europe in terms of P2P connectivity, with exceptions of Singapore which is very well\-connected. We also note the small cluster of New Zealand and Australia which have very similar characteristics in P2P connectivity.
19\.5 Weights
-------------
Another worthwhile check is to understand the effective weights of each indicator. A sometimes under\-appreciated fact is that the weight of an indicator in the final index is due to its own weight, plus the weight of all its parents, as well as the number of indicators and aggregates in each group. For example, an equally\-weighted group of two indicators will each have a higher weight (0\.5\) than an equally weighted group of ten indicators (0\.1\), and this applies to all aggregation levels.
The function `effectiveWeight()` gives effective weights for all levels. We can check the indicator level by filtering:
```
EffWts <- effectiveWeight(ASEM)
IndWts <- EffWts$EffectiveWeightsList %>%
filter(AgLevel == 1)
head(IndWts)
## AgLevel Code EffectiveWeight
## 1 1 Goods 0.02000000
## 2 1 Services 0.02000000
## 3 1 FDI 0.02000000
## 4 1 PRemit 0.02000000
## 5 1 ForPort 0.02000000
## 6 1 CostImpEx 0.01666667
```
We might want to know the maximum and minimum effective weights and which indicators are involved:
```
IndWts %>%
filter(EffectiveWeight %in% c(min(IndWts$EffectiveWeight), max(IndWts$EffectiveWeight)))
## AgLevel Code EffectiveWeight
## 1 1 StMob 0.01250000
## 2 1 Research 0.01250000
## 3 1 Pat 0.01250000
## 4 1 CultServ 0.01250000
## 5 1 CultGood 0.01250000
## 6 1 Tourist 0.01250000
## 7 1 MigStock 0.01250000
## 8 1 Lang 0.01250000
## 9 1 LPI 0.01250000
## 10 1 Flights 0.01250000
## 11 1 Ship 0.01250000
## 12 1 Bord 0.01250000
## 13 1 Elec 0.01250000
## 14 1 Gas 0.01250000
## 15 1 ConSpeed 0.01250000
## 16 1 Cov4G 0.01250000
## 17 1 Embs 0.03333333
## 18 1 IGOs 0.03333333
## 19 1 UNVote 0.03333333
## 20 1 Renew 0.03333333
## 21 1 PrimEner 0.03333333
## 22 1 CO2 0.03333333
## 23 1 MatCon 0.03333333
## 24 1 Forest 0.03333333
## 25 1 PubDebt 0.03333333
## 26 1 PrivDebt 0.03333333
## 27 1 GDPGrow 0.03333333
## 28 1 RDExp 0.03333333
## 29 1 NEET 0.03333333
```
An easy way to visualise is to call the `plotFramework()` function:
```
plotframework(ASEM)
```
19\.6 What\-ifs
---------------
Due to the uncertainty in composite indicators, it may often be useful to know how the scores and ranks would change under an alternative formulation of the index. Examples could include different weights, as well as adding/removing/substituting indicators or entire aggregation groups.
To give an example, let’s imagine that some stakeholders claim that Political connectivity is not relevant to the overall concept. What would happen if it were removed? There are at least three ways we can check this:
1. Use the “exclude” argument of `assemble()` to exclude the relevant indicators
2. Set the weight of the Political pillar to zero
3. Remove the indicators manually
Here we will take the quickest option, which is Option 2\.
```
# Copy the COIN
ASEM_NoPolitical <- ASEM
# Copy the weights
ASEM_NoPolitical$Parameters$Weights$NoPolitical <- ASEM_NoPolitical$Parameters$Weights$Original
# Set Political weight to zero
ASEM_NoPolitical$Parameters$Weights$NoPolitical$Weight[
ASEM_NoPolitical$Parameters$Weights$NoPolitical$Code == "Political"] <- 0
# Alter methodology to use new weights
ASEM_NoPolitical$Method$aggregate$agweights <- "NoPolitical"
# Regenerate
ASEM_NoPolitical <- regen(ASEM_NoPolitical)
## -----------------
## Denominators detected - stored in .$Input$Denominators
## -----------------
## -----------------
## Indicator codes cross-checked and OK.
## -----------------
## Number of indicators = 49
## Number of units = 51
## Number of aggregation levels = 3 above indicator level.
## -----------------
## Aggregation level 1 with 8 aggregate groups: Physical, ConEcFin, Political, Instit, P2P, Environ, Social, SusEcFin
## Cross-check between metadata and framework = OK.
## Aggregation level 2 with 2 aggregate groups: Conn, Sust
## Cross-check between metadata and framework = OK.
## Aggregation level 3 with 1 aggregate groups: Index
## Cross-check between metadata and framework = OK.
## -----------------
## Missing data points detected = 65
## Missing data points imputed = 65, using method = indgroup_mean
```
Now we need to compare the two alternative indexes:
```
compTable(ASEM, ASEM_NoPolitical, dset = "Aggregated", isel = "Index",
COINnames = c("Original", "NoPolitical"))
## UnitCode UnitName Rank: Original Rank: NoPolitical RankChange
## 46 SGP Singapore 14 5 9
## 33 MLT Malta 10 3 7
## 9 CYP Cyprus 29 23 6
## 13 ESP Spain 19 25 -6
## 14 EST Estonia 22 16 6
## 31 LUX Luxembourg 8 2 6
## 6 BRN Brunei Darussalam 40 35 5
## 11 DEU Germany 9 14 -5
## 16 FRA France 21 26 -5
## 25 JPN Japan 34 39 -5
## 29 LAO Lao PDR 48 43 5
## 2 AUT Austria 7 11 -4
## 32 LVA Latvia 23 19 4
## 34 MMR Myanmar 41 37 4
## 35 MNG Mongolia 44 40 4
## 37 NLD Netherlands 2 6 -4
## 41 PHL Philippines 38 42 -4
## 49 SWE Sweden 6 10 -4
## 3 BEL Belgium 5 8 -3
## 21 IDN Indonesia 43 46 -3
## 22 IND India 45 48 -3
## 24 ITA Italy 28 31 -3
## 30 LTU Lithuania 16 13 3
## 38 NOR Norway 4 7 -3
## 39 NZL New Zealand 33 30 3
## 47 SVK Slovakia 24 21 3
## 15 FIN Finland 13 15 -2
## 17 GBR United Kingdom 15 17 -2
## 19 HRV Croatia 18 20 -2
## 20 HUN Hungary 20 22 -2
## 26 KAZ Kazakhstan 47 45 2
## 36 MYS Malaysia 39 41 -2
## 42 POL Poland 26 28 -2
## 48 SVN Slovenia 11 9 2
## 50 THA Thailand 42 44 -2
## 51 VNM Vietnam 36 38 -2
## 1 AUS Australia 35 34 1
## 4 BGD Bangladesh 46 47 -1
## 5 BGR Bulgaria 30 29 1
## 8 CHN China 49 50 -1
## 10 CZE Czech Republic 17 18 -1
## 12 DNK Denmark 3 4 -1
## 18 GRC Greece 32 33 -1
## 27 KHM Cambodia 37 36 1
## 28 KOR Korea 31 32 -1
## 40 PAK Pakistan 50 49 1
## 44 ROU Romania 25 24 1
## 7 CHE Switzerland 1 1 0
## 23 IRL Ireland 12 12 0
## 43 PRT Portugal 27 27 0
## 45 RUS Russian Federation 51 51 0
## AbsRankChange
## 46 9
## 33 7
## 9 6
## 13 6
## 14 6
## 31 6
## 6 5
## 11 5
## 16 5
## 25 5
## 29 5
## 2 4
## 32 4
## 34 4
## 35 4
## 37 4
## 41 4
## 49 4
## 3 3
## 21 3
## 22 3
## 24 3
## 30 3
## 38 3
## 39 3
## 47 3
## 15 2
## 17 2
## 19 2
## 20 2
## 26 2
## 36 2
## 42 2
## 48 2
## 50 2
## 51 2
## 1 1
## 4 1
## 5 1
## 8 1
## 10 1
## 12 1
## 18 1
## 27 1
## 28 1
## 40 1
## 44 1
## 7 0
## 23 0
## 43 0
## 45 0
```
The results show that the rank changes are not major at the index level, with a maximum shift of six places for Estonia. The implication might be (depending on context) that the inclusion or not of the Political pillar does not have a drastic impact on the results, although one should bear in mind that the changes on lower aggregation levels are probably higher, and the indicators themselves may have value and add legitimacy to the framework. In other words, it is not always just the index that counts.
You can dig deeper investigating what\-ifs with all kinds of adjustments. If you want to test the overall effect of uncertainties, rather than specific cases, an uncertainty and sensitivity analysis is very useful. This is explained in the [Sensitivity analysis](sensitivity-analysis.html#sensitivity-analysis) chapter and will not be repeated here.
19\.7 Summing up
----------------
COINr offers all kinds of possibilities for analysis of a composite indicator, although to access most things you would need to assemble the indicator data into a COIN. As mentioned, I would recommend performing the analysis in an R Markdown document, then knitting the document to Word, PDF or HTML depending on the format in which you need to present the work. This can save a lot of time in exporting figures, and if changes are made to the index, you can easily rerun everything. Consider also that if your output is Word, you can use a formatting template so that the document is already pre\-formatted.
*To be completed*
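As a small illustration of the knitting workflow mentioned above, the sketch below renders an analysis document to Word with a pre\-formatted reference template. The file names `index_analysis.Rmd` and `word_template.docx` are hypothetical placeholders.
```
library(rmarkdown)

# Knit the analysis to Word; the reference document controls fonts, heading
# styles and table styles, so the output arrives already pre-formatted.
render("index_analysis.Rmd",
       output_format = word_document(reference_docx = "word_template.docx"))
```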
Welcome
=======
This is the online version of the book published by CRC Press in November 2024\. You can purchase a copy of this book directly from [Routledge](https://www.routledge.com/Exploring-Complex-Survey-Data-Analysis-Using-R-A-Tidy-Introduction-with-srvyr-and-survey/Zimmer-Powell-Velasquez/p/book/9781032302867) or your preferred bookstore. Cover artwork designed and created by [Allison Horst](https://allisonhorst.com/).
Dedication
----------
To Will, Tom, and Drew, thanks for all the help with additional chores and plenty of Git consulting!
Citation
--------
To cite this book, we recommend the following citation:
Zimmer, S. A., Powell, R. J., \& Velásquez, I. C. (2024\). *Exploring Complex Survey Data Analysis Using R: A Tidy Introduction with {srvyr} and {survey}*. Chapman \& Hall: CRC Press.
Chapter 1 Introduction
======================
Surveys are valuable tools for gathering information about a population. Researchers, governments, and businesses use surveys to better understand public opinion and behaviors. For example, a non\-profit group may analyze societal trends to measure their impact, government agencies may study behaviors to inform policy, or companies may seek to learn customer product preferences to refine business strategy. With survey data, we can explore the world around us.
Surveys are often conducted with a sample of the population. Therefore, to use the survey data to understand the population, we use weights to adjust the survey results for unequal probabilities of selection, nonresponse, and post\-stratification. These adjustments ensure the sample accurately represents the population of interest ([Gard et al. 2023](#ref-gard2023weightsdef)). To account for the intricate nature of the survey design, analysts rely on statistical software such as SAS, Stata, SUDAAN, and R.
In this book, we focus on R to introduce survey analysis. Our goal is to provide a comprehensive guide for individuals new to survey analysis but with some familiarity with statistics and R programming. We use a combination of the {survey} and {srvyr} packages and present the code following best practices from the tidyverse ([Freedman Ellis and Schneider 2024](#ref-R-srvyr); [Lumley 2010](#ref-lumley2010complex); [Wickham et al. 2019](#ref-tidyverse2019)).
1\.1 Survey analysis in R
-------------------------
The {survey} package was released on the [Comprehensive R Archive Network (CRAN)](https://cran.r-project.org/src/contrib/Archive/survey/) in 2003 and has been continuously developed over time. This package, primarily authored by Thomas Lumley, offers an extensive array of features, including:
* Calculation of point estimates and estimates of their uncertainty, including means, totals, ratios, quantiles, and proportions
* Estimation of regression models, including generalized linear models, log\-linear models, and survival curves
* Variances by Taylor linearization or by replicate weights, including balanced repeated replication, jackknife, bootstrap, multistage bootstrap, or user\-supplied methods
* Hypothesis testing for means, proportions, and other parameters
The {srvyr} package builds on the {survey} package by providing wrappers for functions that align with the tidyverse philosophy. This is our motivation for using and recommending the {srvyr} package. We find that it is user\-friendly for those familiar with the tidyverse packages in R.
For example, while many functions in the {survey} package access variables through formulas, the {srvyr} package uses tidy selection to pass variable names, a common feature in the tidyverse ([Henry and Wickham 2024](#ref-R-tidyselect)). Users of the tidyverse are also likely familiar with the magrittr pipe operator (`%>%`), which seamlessly works with functions from the {srvyr} package. Moreover, several common functions from {dplyr}, such as `filter()`, `mutate()`, and `summarize()`, can be applied to survey objects ([Wickham et al. 2023](#ref-R-dplyr)). This enables users to streamline their analysis workflow and leverage the benefits of both the {srvyr} and {tidyverse} packages.
While the {srvyr} package offers many advantages, there is one notable limitation: it doesn’t fully incorporate the modeling capabilities of the {survey} package into tidy wrappers. When discussing modeling and hypothesis testing, we primarily rely on the {survey} package. However, we provide information on how to apply the pipe operator to these functions to maintain clarity and consistency in analyses.
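To make the contrast concrete, here is a minimal sketch of the same weighted mean computed with both packages. The data frame `dat`, the weight column `wt`, and the analysis variable `income` are hypothetical placeholders.
```
library(survey)
library(srvyr)
library(dplyr)

# {survey}: formula-based interface
des <- svydesign(ids = ~1, weights = ~wt, data = dat)
svymean(~income, des)

# {srvyr}: tidy selection and the magrittr pipe
dat %>%
  as_survey_design(weights = wt) %>%
  summarize(mean_income = survey_mean(income))
```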
1\.2 What to expect
-------------------
This book covers many aspects of survey design and analysis, from understanding how to create design objects to conducting descriptive analysis, statistical tests, and models. We emphasize coding best practices and effective presentation techniques while using real\-world data and practical examples to help readers gain proficiency in survey analysis.
Below is a summary of each chapter:
* **Chapter [2](c02-overview-surveys.html#c02-overview-surveys) \- Overview of surveys**:
+ Overview of survey design processes
+ References for more in\-depth knowledge
* **Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) \- Survey data documentation**:
+ Guide to survey documentation types
+ How to read survey documentation
* **Chapter [4](c04-getting-started.html#c04-getting-started) \- Getting started**:
+ Installation of packages
+ Introduction to the {srvyrexploR} package and its analytic datasets
+ Outline of the survey analysis process
+ Comparison between the {dplyr} and {srvyr} packages
* **Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis) \- Descriptive analyses**:
+ Calculation of point estimates
+ Estimation of standard errors and confidence intervals
+ Calculation of design effects
* **Chapter [6](c06-statistical-testing.html#c06-statistical-testing) \- Statistical testing**:
+ Statistical testing methods
+ Comparison of means and proportions
+ Goodness\-of\-fit tests, tests of independence, and tests of homogeneity
* **Chapter [7](c07-modeling.html#c07-modeling) \- Modeling**:
+ Overview of model formula specifications
+ Linear regression, ANOVA, and logistic regression modeling
* **Chapter [8](c08-communicating-results.html#c08-communicating-results) \- Communication of results**:
+ Strategies for communicating survey results
+ Tools and guidance for creating publishable tables and graphs
* **Chapter [9](c09-reprex-data.html#c09-reprex-data) \- Reproducible research**:
+ Tools and methods for achieving reproducibility
+ Resources for reproducible research
* **Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) \- Sample designs and replicate weights**:
+ Overview of common sampling designs
+ Replicate weight methods
+ How to specify survey designs in R
* **Chapter [11](c11-missing-data.html#c11-missing-data) \- Missing data**:
+ Overview of missing data in surveys
+ Approaches to dealing with missing data
* **Chapter [12](c12-recommendations.html#c12-recommendations) \- Successful survey analysis recommendations**:
+ Tips for successful analysis
+ Recommendations for debugging
* **Chapter [13](c13-ncvs-vignette.html#c13-ncvs-vignette) \- National Crime Victimization Survey Vignette**:
+ Vignette on analyzing National Crime Victimization Survey (NCVS) data
+ Illustration of analysis requiring multiple files for victimization rates
* **Chapter [14](c14-ambarom-vignette.html#c14-ambarom-vignette) \- AmericasBarometer Vignette**:
+ Vignette on analyzing AmericasBarometer survey data
+ Creation of choropleth maps with survey estimates
The majority of chapters contain code that readers can follow. Each of these chapters starts with a “Prerequisites” section, which includes the code needed to load the packages and datasets used in the chapter. We then provide the main idea of the chapter and examples of how to use the functions. Most chapters conclude with exercises to work through. We provide the solutions to the exercises in the [online version of the book](https://tidy-survey-r.github.io/tidy-survey-book/).
While we provide a brief overview of survey methodology and statistical theory, this book is not intended to be the sole resource for these topics. We reference other materials and encourage readers to seek them out for more information.
1\.3 Prerequisites
------------------
To get the most out of this book, we assume a survey has already been conducted and readers have obtained a microdata file. Microdata, also known as respondent\-level or row\-level data, differ from summarized data typically found in tables. Microdata contain individual survey responses, along with analysis weights and design variables such as strata or clusters.
Additionally, the survey data should already include weights and design variables. These are required to accurately calculate unbiased estimates. The concepts and techniques discussed in this book help readers to extract meaningful insights from survey data, but this book does not cover how to create weights, as this is a separate complex topic. If weights are not already created for the survey data, we recommend reviewing other resources focused on weight creation such as Valliant and Dever ([2018](#ref-Valliant2018weights)).
This book is tailored for analysts already familiar with R and the tidyverse, but who may be new to complex survey analysis in R. We anticipate that readers of this book can:
* Install R and their Integrated Development Environment (IDE) of choice, such as RStudio
* Install and load packages from CRAN and GitHub repositories
* Run R code
* Read data from a folder or their working directory
* Understand fundamental tidyverse concepts such as tidy/long/wide data, tibbles, the magrittr pipe (`%>%`), and tidy selection
* Use the tidyverse packages to wrangle, tidy, and visualize data
If these concepts or skills are unfamiliar, we recommend starting with introductory resources to cover these topics before reading this book. R for Data Science ([Wickham, Çetinkaya\-Rundel, and Grolemund 2023](#ref-wickham2023r4ds)) is a beginner\-friendly guide for getting started in data science using R. It offers guidance on preliminary installation steps, basic R syntax, and tidyverse workflows and packages.
1\.4 Datasets used in this book
-------------------------------
We work with two key datasets throughout the book: the Residential Energy Consumption Survey (RECS – [U.S. Energy Information Administration 2023b](#ref-recs-2020-tech)) and the American National Election Studies (ANES – [DeBell 2010](#ref-debell)). We introduce the loading and preparation of these datasets in Chapter [4](c04-getting-started.html#c04-getting-started).
1\.5 Conventions
----------------
Throughout the book, we use the following typographical conventions:
* Package names are surrounded by curly brackets: {srvyr}
* Function names are in constant\-width text format and include parentheses: `survey_mean()`
* Object and variable names are in constant\-width text format: `anes_des`
1\.6 Getting help
-----------------
We recommend first trying to resolve errors and issues independently using the tips provided in Chapter [12](c12-recommendations.html#c12-recommendations).
There are several community forums for asking questions, including:
* [Posit Community](https://forum.posit.co/)
* [R for Data Science Slack Community](https://rfordatasci.com/)
* [Stack Overflow](https://stackoverflow.com/)
Please report any bugs and issues to the book’s [GitHub repository](https://github.com/tidy-survey-r/tidy-survey-book/issues).
1\.7 Acknowledgments
--------------------
We would like to thank Holly Cast, Greg Freedman Ellis, Joe Murphy, and Sheila Saia for their reviews of the initial draft. Their detailed and honest feedback helped improve this book, and we are grateful for their input. Additionally, this book started with two short courses. The first was at the Annual Conference for the American Association for Public Opinion Research (AAPOR) and the second was a series of webinars for the Midwest Association of Public Opinion Research (MAPOR). We would like to also thank those who assisted us by moderating breakout rooms and answering questions from attendees: Greg Freedman Ellis, Raphael Nishimura, and Benjamin Schneider.
1\.8 Colophon
-------------
This book was written in [bookdown](http://bookdown.org/) using [RStudio](http://www.rstudio.com/ide/). The complete source is available on [GitHub](https://github.com/tidy-survey-r/tidy-survey-book).
This version of the book was built with R version 4\.4\.0 (2024\-04\-24\) and with the packages listed in Table [1\.1](c01-intro.html#tab:intro-packages-tab).
TABLE 1\.1: Package versions and sources used in building this book
| **Package** | **Version** | **Source** |
| --- | --- | --- |
| DiagrammeR | 1\.0\.11 | CRAN |
| Matrix | 1\.7\-0 | CRAN |
| bookdown | 0\.39 | CRAN |
| broom | 1\.0\.5 | CRAN |
| censusapi | 0\.9\.0\.9000 | GitHub (hrecht/censusapi@74334d4\) |
| dplyr | 1\.1\.4 | CRAN |
| forcats | 1\.0\.0 | CRAN |
| ggpattern | 1\.0\.1 | CRAN |
| ggplot2 | 3\.5\.1 | CRAN |
| gt | 0\.11\.0\.9000 | GitHub (rstudio/gt@28de628\) |
| gtsummary | 1\.7\.2 | CRAN |
| haven | 2\.5\.4 | CRAN |
| janitor | 2\.2\.0 | CRAN |
| kableExtra | 1\.4\.0 | CRAN |
| knitr | 1\.46 | CRAN |
| labelled | 2\.13\.0 | CRAN |
| lubridate | 1\.9\.3 | CRAN |
| naniar | 1\.1\.0 | CRAN |
| osfr | 0\.2\.9 | CRAN |
| prettyunits | 1\.2\.0 | CRAN |
| purrr | 1\.0\.2 | CRAN |
| readr | 2\.1\.5 | CRAN |
| renv | 1\.0\.7 | CRAN |
| rmarkdown | 2\.26 | CRAN |
| rnaturalearth | 1\.0\.1 | CRAN |
| rnaturalearthdata | 1\.0\.0 | CRAN |
| sf | 1\.0\-16 | CRAN |
| srvyr | 1\.3\.0 | CRAN |
| srvyrexploR | 1\.0\.1 | GitHub (tidy\-survey\-r/srvyrexploR@cdf9316\) |
| stringr | 1\.5\.1 | CRAN |
| styler | 1\.10\.3 | CRAN |
| survey | 4\.4\-2 | CRAN |
| survival | 3\.6\-4 | CRAN |
| tibble | 3\.2\.1 | CRAN |
| tidycensus | 1\.6\.3 | CRAN |
| tidyr | 1\.3\.1 | CRAN |
| tidyselect | 1\.2\.1 | CRAN |
| tidyverse | 2\.0\.0 | CRAN |
Chapter 2 Overview of surveys
=============================
2\.1 Introduction
-----------------
Developing surveys to gather accurate information about populations involves an intricate and time\-intensive process. Researchers can spend months, or even years, developing the study design, questions, and other methods for a single survey to ensure high\-quality data is collected.
Before analyzing survey data, we recommend understanding the entire survey life cycle. This understanding can provide better insight into what types of analyses should be conducted on the data. The survey life cycle consists of the necessary stages to execute a survey project successfully. Each stage influences the survey’s timing, costs, and feasibility, consequently impacting the data collected and how we should analyze them. Figure [2\.1](c02-overview-surveys.html#fig:overview-diag) shows a high\-level overview of the survey process.
FIGURE 2\.1: Overview of the survey process
The survey life cycle starts with a research topic or question of interest (e.g., the impact that childhood trauma has on health outcomes later in life). Drawing from available resources can result in a reduced burden on respondents, lower costs, and faster research outcomes. Therefore, we recommend reviewing existing data sources to determine if data that can address this question are already available. However, if existing data cannot answer the nuances of the research question, we can capture the exact data we need through a questionnaire, or a set of questions.
To gain a deeper understanding of survey design and implementation, we recommend reviewing several pieces of existing literature in detail (e.g., [Biemer and Lyberg 2003](#ref-biemer2003survqual); [Bradburn, Sudman, and Wansink 2004](#ref-Bradburn2004); [Dillman, Smyth, and Christian 2014](#ref-dillman2014mode); [Groves et al. 2009](#ref-groves2009survey); [Tourangeau, Rips, and Rasinski 2000](#ref-Tourangeau2000psych); [Valliant, Dever, and Kreuter 2013](#ref-valliant2013practical)).
2\.2 Searching for public\-use survey data
------------------------------------------
Throughout this book, we use public\-use datasets from different surveys, including the American National Election Studies (ANES), the Residential Energy Consumption Survey (RECS), the National Crime Victimization Survey (NCVS), and the AmericasBarometer surveys.
As mentioned above, we should look for existing data that can provide insights into our research questions before embarking on a new survey. One of the greatest sources of data is the government. For example, in the U.S., we can get data directly from the various statistical agencies such as the U.S. Energy Information Administration or Bureau of Justice Statistics. Other countries often have data available through official statistics offices, such as the Office for National Statistics in the United Kingdom.
In addition to government data, many researchers make their data publicly available through repositories such as the [Inter\-university Consortium for Political and Social Research (ICPSR)](https://www.icpsr.umich.edu/web/pages/ICPSR/ssvd/) or the [Odum Institute Data Archive](https://odum.unc.edu/archive/). Searching these repositories or other compiled lists (e.g., [Analyze Survey Data for Free](https://asdfree.com)) can be an efficient way to identify surveys with questions related to our research topic.
2\.3 Pre\-survey planning
-------------------------
There are multiple things to consider when starting a survey. Errors are the differences between the true values of the variables being studied and the values obtained through the survey. Each step and decision made before the launch of the survey impact the types of errors that are introduced into the data, which in turn impact how to interpret the results.
Generally, survey researchers consider there to be seven main sources of error that fall under either Representation or Measurement ([Groves et al. 2009](#ref-groves2009survey)):
* Representation
+ Coverage Error: A mismatch between the population of interest and the sampling frame, the list from which the sample is drawn.
+ Sampling Error: Error produced when selecting a sample, the subset of the population, from the sampling frame. This error is due to randomization, and we discuss how to quantify this error in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights). There is no sampling error in a census, as there is no randomization. The sampling error reflects how estimates would vary across all potential samples that could be selected under the same sampling method.
+ Nonresponse Error: Differences between those who responded and did not respond to the survey (unit nonresponse) or a given question (item nonresponse).
+ Adjustment Error: Error introduced during post\-survey statistical adjustments.
* Measurement
+ Validity: A mismatch between the research topic and the question(s) used to collect that information.
+ Measurement Error: A mismatch between what the researcher asked and how the respondent answered.
+ Processing Error: Edits by the researcher to responses provided by the respondent (e.g., adjustments to data based on illogical responses).
Almost every survey has errors. Researchers attempt to conduct a survey that reduces the total survey error, or the accumulation of all errors that may arise throughout the survey life cycle. By assessing these different types of errors together, researchers can seek strategies to maximize the overall survey quality and improve the reliability and validity of results ([Biemer 2010](#ref-tse-doc)). However, attempts to reduce individual source errors (and therefore total survey error) come at the price of time and money. For example:
* Coverage Error Tradeoff: Researchers can search for or create more accurate and updated sampling frames, but they can be difficult to construct or obtain.
* Sampling Error Tradeoff: Researchers can increase the sample size to reduce sampling error; however, larger samples can be expensive and time\-consuming to field.
* Nonresponse Error Tradeoff: Researchers can increase or diversify efforts to improve survey participation, but this may be resource\-intensive while not entirely removing nonresponse bias.
* Adjustment Error Tradeoff: Weighting is a statistical technique used to adjust the contribution of individual survey responses to the final survey estimates. It is typically done to make the sample more representative of the population of interest. However, if researchers do not carefully execute the adjustments or base them on inaccurate information, they can introduce new biases, leading to less accurate estimates.
* Validity Error Tradeoff: Researchers can increase validity through a variety of ways, such as using established scales or collaborating with a psychometrician during survey design to pilot and evaluate questions. However, doing so increases the amount of time and resources needed to complete survey design.
* Measurement Error Tradeoff: Researchers can use techniques such as questionnaire testing and cognitive interviewing to ensure respondents are answering questions as expected. However, these activities require time and resources to complete.
* Processing Error Tradeoff: Researchers can impose rigorous data cleaning and validation processes. However, this requires supervision, training, and time.
The challenge for survey researchers is to find the optimal tradeoffs among these errors. They must carefully consider ways to reduce each error source and total survey error while balancing their study’s objectives and resources.
For survey analysts, understanding the decisions that researchers took to minimize these error sources can impact how results are interpreted. The remainder of this chapter explores critical considerations for survey development. We explore how to consider each of these sources of error and how these error sources can inform the interpretations of the data.
2\.4 Study design
-----------------
From formulating methodologies to choosing an appropriate sampling frame, the study design phase is where the blueprint for a successful survey takes shape. Study design encompasses multiple parts of the survey life cycle, including decisions on the population of interest, survey mode (the format through which a survey is administered to respondents), timeline, and questionnaire design. Knowing who and how to survey individuals depends on the study’s goals and the feasibility of implementation. This section explores the strategic planning that lays the foundation for a survey.
### 2\.4\.1 Sampling design
The set or group we want to survey is known as the population of interest or the target population. The population of interest could be broad, such as “all adults age 18\+ living in the U.S.” or a specific population based on a particular characteristic or location. For example, we may want to know about “adults aged 18–24 who live in North Carolina” or “eligible voters living in Illinois.”
However, a sampling frame with contact information is needed to survey individuals in these populations of interest. If we are looking at eligible voters, the sampling frame could be the voting registry for a given state or area. If we are looking at broader populations of interest, like all adults in the United States, the sampling frame is likely imperfect. In these cases, a full list of individuals in the United States is not available for a sampling frame. Instead, we may choose to use a sampling frame of mailing addresses and send the survey to households, or we may choose to use random digit dialing (RDD) and call random phone numbers (that may or may not be assigned, connected, and working).
These imperfect sampling frames can result in coverage error where there is a mismatch between the population of interest and the list of individuals we can select. For example, if we are looking to obtain estimates for “all adults aged 18\+ living in the U.S.,” a sampling frame of mailing addresses will miss specific types of individuals, such as the homeless, transient populations, and incarcerated individuals. Additionally, many households have more than one adult resident, so we would need to consider how to get a specific individual to fill out the survey (called within household selection) or adjust the population of interest to report on “U.S. households” instead of “individuals.”
Once we have selected the sampling frame, the next step is determining how to select individuals for the survey. In rare cases, we may conduct a census and survey everyone on the sampling frame. However, the ability to implement a questionnaire at that scale is something only a few can do (e.g., government censuses). Instead, we typically choose to sample individuals and use weights to estimate numbers in the population of interest. We can use a variety of different sampling methods, and more information on these can be found in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights). This decision of which sampling method to use impacts sampling error and can be accounted for in weighting.
#### Example: Number of pets in a household
Let’s use a simple example where we are interested in the average number of pets in a household. We need to consider the population of interest for this study. Specifically, are we interested in all households in a given country or households in a more local area (e.g., city or state)? Let’s assume we are interested in the number of pets in a U.S. household with at least one adult (18 years or older). In this case, a sampling frame of mailing addresses would introduce only a small amount of coverage error as the frame would closely match our population of interest. Specifically, we would likely want to use the Computerized Delivery Sequence File (CDSF), which is a file of mailing addresses that the United States Postal Service (USPS) creates and covers nearly 100% of U.S. households ([Harter et al. 2016](#ref-harter2016address)). To sample these households, for simplicity, we use a stratified simple random sample design (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on sample designs), where we randomly sample households within each state (i.e., we stratify by state).
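A minimal sketch of drawing such a stratified simple random sample, assuming a hypothetical frame `cdsf_frame` with one row per mailing address and a `state` column; the sample size of 100 addresses per state is purely illustrative.
```
library(dplyr)

set.seed(2024)  # make the random draw reproducible

# Stratified simple random sample: randomly sample addresses within each state
sampled_households <- cdsf_frame %>%
  group_by(state) %>%
  slice_sample(n = 100) %>%
  ungroup()
```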
Throughout this chapter, we build on this example research question to plan a survey.
### 2\.4\.2 Data collection planning
With the sampling design decided, researchers can then decide how to survey these individuals. Specifically, the modes used for contacting and surveying the sample, how frequently to send reminders and follow\-ups, and the overall timeline of the study are some of the major data collection determinations. Traditionally, survey researchers have considered there to be four main modes[1](#fn1):
* Computer\-Assisted Personal Interview (CAPI; also known as face\-to\-face or in\-person interviewing)
* Computer\-Assisted Telephone Interview (CATI; also known as phone or telephone interviewing)
* Computer\-Assisted Web Interview (CAWI; also known as web or online interviewing)
* Paper and Pencil Interview (PAPI)
We can use a single mode to collect data or multiple modes (also called mixed\-modes). Using mixed\-modes can allow for broader reach and increase response rates depending on the population of interest ([Biemer et al. 2017](#ref-biemer_choiceplus); [DeLeeuw 2005](#ref-deLeeuw2005), [2018](#ref-DeLeeuw_2018)). For example, we could both call households to conduct a CATI survey and send mail with a PAPI survey to the household. By using both modes, we could gain participation through the mail from individuals who do not pick up the phone to unknown numbers or through the phone from individuals who do not open all of their mail. However, mode effects (where responses differ based on the mode of response) can be present in the data and may need to be considered during analysis.
When selecting which mode, or modes, to use, understanding the unique aspects of the chosen population of interest and sampling frame provides insight into how they can best be reached and engaged. For example, if we plan to survey adults aged 18–24 who live in North Carolina, asking them to complete a survey using CATI (i.e., over the phone) would likely not be as successful as other modes like the web. This age group does not talk on the phone as much as other generations and often does not answer phone calls from unknown numbers. Additionally, the mode for contacting respondents relies on what information is available in the sampling frame. For example, if our sampling frame includes an email address, we could email our selected sample members to convince them to complete a survey. Alternatively, if the sampling frame is a list of mailing addresses, we could contact sample members with a letter.
It is important to note that there can be a difference between the contact and survey modes. For example, if we have a sampling frame with addresses, we can send a letter to our sample members and provide information on completing a web survey. Another option is using mixed\-mode surveys by mailing sample members a paper and pencil survey but also including instructions to complete the survey online. Combining different contact modes and different survey modes can be helpful in reducing unit nonresponse error–where the entire unit (e.g., a household) does not respond to the survey at all–as different sample members may respond better to different contact and survey modes. However, when considering which modes to use, it is important to make access to the survey as easy as possible for sample members to reduce burden and unit nonresponse.
Another way to reduce unit nonresponse error is by varying the language of the contact materials ([Dillman, Smyth, and Christian 2014](#ref-dillman2014mode)). People are motivated by different things, so constantly repeating the same message may not be helpful. Instead, mixing up the messaging and the type of contact material the sample member receives can increase response rates and reduce the unit nonresponse error. For example, instead of only sending standard letters, we could consider sending mailings that invoke “urgent” or “important” thoughts by sending priority letters or using other delivery services like FedEx, UPS, or DHL.
A study timeline may also determine the number and types of contacts. If the timeline is long, there is plentiful time for follow\-ups and diversified messages in contact materials. If the timeline is short, then fewer follow\-ups can be implemented. Many studies start with the tailored design method put forth by Dillman, Smyth, and Christian ([2014](#ref-dillman2014mode)) and implement five contacts:
* Pre\-notification (Pre\-notice) to let sample members know the survey is coming
* Invitation to complete the survey
* Reminder to also thank the respondents who have already completed the survey
* Reminder (with a replacement paper survey if needed)
* Final reminder
This method is easily adaptable based on the study timeline and needs but provides a starting point for most studies.
#### Example: Number of pets in a household
Let’s return to our example of the average number of pets in a household. We are using a sampling frame of mailing addresses, so we recommend starting our data collection with letters mailed to households, but later in data collection, we want to send interviewers to the house to conduct an in\-person (or CAPI) interview to decrease unit nonresponse error. This means we have two contact modes (paper and in\-person). As mentioned above, the survey mode does not have to be the same as the contact mode, so we recommend a mixed\-mode study with both web and CAPI modes. Let’s assume we have 6 months for data collection, so we could recommend Table [2\.1](c02-overview-surveys.html#tab:prot-examp)’s protocol:
TABLE 2\.1: Protocol example for 6\-month web and CAPI data collection
| Week | Contact Mode | Contact Message | Survey Mode Offered |
| --- | --- | --- | --- |
| 1 | Mail: Letter | Pre\-notice | — |
| 2 | Mail: Letter | Invitation | Web |
| 3 | Mail: Postcard | Thank You/Reminder | Web |
| 6 | Mail: Letter in large envelope | Animal Welfare Discussion | Web |
| 10 | Mail: Postcard | Inform Upcoming In\-Person Visit | Web |
| 14 | In\-Person Visit | — | CAPI |
| 16 | Mail: Letter | Reminder of In\-Person Visit | Web, but includes a number to call to schedule CAPI |
| 20 | In\-Person Visit | — | CAPI |
| 25 | Mail: Letter in large envelope | Survey Closing Notice | Web, but includes a number to call to schedule CAPI |
This is just one possible protocol that we can use that starts respondents with the web (typically done to reduce costs). However, we could begin in\-person data collection earlier during the data collection period or ask interviewers to attempt more than two visits with a household.
### 2\.4\.3 Questionnaire design
When developing the questionnaire, it can be helpful to first outline the topics to be asked and include the “why” each question or topic is important to the research question(s). This can help us better tailor the questionnaire and reduce the number of questions (and thus the burden on the respondent) if topics are deemed irrelevant to the research question. When making these decisions, we should also consider questions needed for weighting. While we would love to have everyone in our population of interest answer our survey, this rarely happens. Thus, including questions about demographics in the survey can assist with weighting for nonresponse errors (both unit and item nonresponse). Knowing the details of the sampling plan and what may impact coverage error and sampling error can help us determine what types of demographics to include. Thus questionnaire design is typically done in conjunction with sampling design.
We can benefit from the work of others by using questions from other surveys. Demographic sections in surveys, such as race, ethnicity, or education, often are borrowed questions from a government census or other official surveys. Question banks such as the [ICPSR variable search](https://www.icpsr.umich.edu/web/pages/ICPSR/ssvd/) can provide additional potential questions.
If a question does not exist in a question bank, we can craft our own. When developing survey questions, we should start with the research topic and attempt to write questions that match the concept. The closer the question asked is to the overall concept, the better validity there is. For example, if we want to know how people consume T.V. series and movies but only ask a question about how many T.V.s are in the house, then we would be missing other ways that people watch T.V. series and movies, such as on other devices or at places outside of the home. As mentioned above, we can employ techniques to increase the validity of questionnaires. For example, questionnaire testing involves piloting the survey instrument to identify and fix potential issues before conducting the main survey. Additionally, we could conduct cognitive interviews – a technique where we walk through the survey with participants, encouraging them to speak their thoughts out loud to uncover how they interpret and understand survey questions.
Additionally, when designing questions, we should consider the mode for the survey and adjust the language appropriately. In self\-administered surveys (e.g., web or mail), respondents can see all the questions and response options, but that is not the case in interviewer\-administered surveys (e.g., CATI or CAPI). With interviewer\-administered surveys, the response options must be read aloud to the respondents, so the question may need to be adjusted to create a better flow to the interview. Additionally, with self\-administered surveys, because the respondents are viewing the questionnaire, the formatting of the questions is even more critical to ensure accurate measurement. Incorrect formatting or wording can result in measurement error, so following best practices or using existing validated questions can reduce error. There are multiple resources to help researchers draft questions for different modes (e.g., [Bradburn, Sudman, and Wansink 2004](#ref-Bradburn2004); [Dillman, Smyth, and Christian 2014](#ref-dillman2014mode); [Fowler and Mangione 1989](#ref-Fowler1989); [Tourangeau, Couper, and Conrad 2004](#ref-Tourangeau2004spacing)).
#### Example: Number of pets in a household
As part of our survey on the average number of pets in a household, we may want to know what animal most people prefer to have as a pet. Let’s say we have a question in our survey as displayed in Figure [2\.2](c02-overview-surveys.html#fig:overview-pet-examp1).
FIGURE 2\.2: Example question asking pet preference type
This question may have validity issues as it only provides the options of “dogs” and “cats” to respondents, and the interpretation of the data could be incorrect. For example, if we had 100 respondents who answered the question and 50 selected dogs, then the results of this question cannot be “50% of the population prefers to have a dog as a pet,” as only two response options were provided. If a respondent taking our survey prefers turtles, they could either be forced to choose a response between these two (i.e., interpret the question as “between dogs and cats, which do you prefer?” and result in measurement error), or they may not answer the question (which results in item nonresponse error). Based on this, the interpretation of this question should be, “When given a choice between dogs and cats, 50% of respondents preferred to have a dog as a pet.”
To avoid this issue, we should consider these possibilities and adjust the question accordingly. One simple way could be to add an “other” response option to give respondents a chance to provide a different response. The “other” response option could then include a way for respondents to write their other preference. For example, we could rewrite this question as displayed in Figure [2\.3](c02-overview-surveys.html#fig:overview-pet-examp2).
FIGURE 2\.3: Example question asking pet preference type with other specify option
We can then code the responses from the open\-ended box and get a better understanding of the respondent’s choice of preferred pet. Interpreting this question becomes easier as researchers no longer need to qualify the results with the choices provided.
This is a simple example of how the presentation of the question and options can impact the findings. For more complex topics and questions, we must thoroughly consider how to mitigate any impacts from the presentation, formatting, wording, and other aspects. For survey analysts, reviewing not only the data but also the wording of the questions is crucial to ensure the results are presented in a manner consistent with the question asked. Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) provides further details on how to review existing survey documentation to inform our analyses, and Chapter [8](c08-communicating-results.html#c08-communicating-results) goes into more details on communicating results.
2\.5 Data collection
--------------------
Once the data collection starts, we try to stick to the data collection protocol designed during pre\-survey planning. However, effective researchers also prepare to adjust their plans and adapt as needed to the current progress of data collection ([Schouten, Peytchev, and Wagner 2018](#ref-Schouten2018)). An extreme example is a natural disaster that prevents mailings from going out or interviewers from reaching sample members; an in\-person survey might then need to pivot quickly to a self\-administered mode, or the field period might have to be extended. Other adjustments can be smaller, such as something newsworthy occurring that is connected to the survey, which we could choose to highlight in the communication materials. In addition to these external factors, there can be factors unique to the survey, such as lower response rates for a specific subgroup, in which case the data collection protocol may need to be adapted to improve response rates for that group.
2\.6 Post\-survey processing
----------------------------
After data collection, various activities need to be completed before we can analyze the survey. Multiple decisions made during this post\-survey phase can assist us in reducing different error sources, such as weighting to account for the sample selection. Knowing the decisions made in creating the final analytic data can impact how we use the data and interpret the results.
### 2\.6\.1 Data cleaning and imputation
Post\-survey cleaning is one of the first steps we take to get the survey responses into an analytic dataset. Data cleaning can consist of correcting inconsistent data (e.g., resolving skip pattern errors or checking that related questions throughout the survey are consistent with each other), editing numeric entries or open\-ended responses for grammar and consistency, or recoding open\-ended questions into categories for analysis. There is no universal set of fixed rules that every survey must adhere to. Instead, each survey or research study should establish its own guidelines and procedures for handling various cleaning scenarios based on its specific objectives.
We should use our best judgment to ensure data integrity, and all decisions should be documented and available to those using the data in the analysis. Each decision we make impacts processing error, so often, multiple people review these rules or recode open\-ended data and adjudicate any differences in an attempt to reduce this error.
Another crucial step in post\-survey processing is imputation. Often, there is item nonresponse where respondents do not answer specific questions. If the questions are crucial to analysis efforts or the research question, we may implement imputation to reduce item nonresponse error. Imputation is a technique for replacing missing or incomplete data values with estimated values. However, as imputation is a way of assigning values to missing data based on an algorithm or model, it can also introduce processing error, so we should consider the overall implications of imputing data compared to having item nonresponse. There are multiple ways to impute data. We recommend reviewing other resources like Kim and Shao ([2021](#ref-Kim2021)) for more information.
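Purely as an illustration of the idea (not a recommended production method), the sketch below fills item nonresponse on a numeric variable with the mean of the observed values within a grouping variable. The data frame `dat` and the columns `n_pets` and `state` are hypothetical.
```
library(dplyr)

# Simple group-mean imputation: replace missing values of n_pets with the
# mean of the observed values within the respondent's state
dat_imputed <- dat %>%
  group_by(state) %>%
  mutate(n_pets = coalesce(n_pets, mean(n_pets, na.rm = TRUE))) %>%
  ungroup()
```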
#### Example: Number of pets in a household
Let’s return to the question we created to ask about [animal preference](c02-overview-surveys.html#overview-design-questionnaire-ex). The “other specify” invites respondents to specify the type of animal they prefer to have as a pet. If respondents entered answers such as “puppy,” “turtle,” “rabit,” “rabbit,” “bunny,” “ant farm,” “snake,” “Mr. Purr,” then we may wish to categorize these write\-in responses to help with analysis. In this example, “puppy” could be assumed to be a reference to a “Dog” and could be recoded there. The misspelling of “rabit” could be coded along with “rabbit” and “bunny” into a single category of “Bunny or Rabbit.” These are relatively standard decisions that we can make. The remaining write\-in responses could be categorized in a few different ways. “Mr. Purr,” which may be someone’s reference to their own cat, could be recoded as “Cat,” or it could remain as “Other” or some category that is “Unknown.” Depending on the number of responses related to each of the others, they could all be combined into a single “Other” category, or maybe categories such as “Reptiles” or “Insects” could be created. Each of these decisions may impact the interpretation of the data, so we should document the types of responses that fall into each of the new categories and any decisions made.
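A minimal sketch of how such recoding decisions might be applied in R, using the write\-in values from this example; the column name `pet_other` and the final categories are assumptions made only for illustration.
```
library(dplyr)
library(stringr)

dat_recoded <- dat %>%
  mutate(
    other_clean = str_to_lower(str_trim(pet_other)),
    pet_pref_recode = case_when(
      other_clean == "puppy"                         ~ "Dog",
      other_clean %in% c("rabit", "rabbit", "bunny") ~ "Bunny or Rabbit",
      other_clean %in% c("turtle", "snake")          ~ "Reptiles",
      other_clean == "ant farm"                      ~ "Insects",
      other_clean == "mr. purr"                      ~ "Unknown",
      TRUE                                           ~ "Other"   # anything else
    )
  )
```
Keeping each rule in the code, with a comment where needed, also serves as documentation of the recoding decisions for later analysts.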
### 2\.6\.2 Weighting
We can address some error sources identified in the previous sections using weighting. During the weighting process, weights are created for each respondent record. These weights allow the survey responses to generalize to the population. A weight, generally, reflects how many units in the population each respondent represents. Often, the weight is constructed such that the sum of the weights is the size of the population.
Weights can address coverage, sampling, and nonresponse errors. Many published surveys include an “analysis weight” variable that combines these adjustments. However, weighting itself can also introduce adjustment error, so we need to balance which types of errors should be corrected with weighting. The construction of weights is outside the scope of this book; we recommend referencing other materials if interested in weight construction ([Valliant and Dever 2018](#ref-Valliant2018weights)). Instead, this book assumes the survey has been completed, weights are constructed, and data are available to users.
#### Example: Number of pets in a household
In the simple example of our survey, we decided to obtain a random sample from each state to select our sample members. Knowing this sampling design, we can include selection weights for analysis that account for how the sample members were selected for the survey. Additionally, the sampling frame may have the type of building associated with each address, so we could include the building type as a potential nonresponse weighting variable, along with some interviewer observations that may be related to our research topic of the average number of pets in a household. Combining these weights, we can create an analytic weight that analysts need to use when analyzing the data.
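To make the idea of a selection weight concrete, here is a minimal sketch for a stratified simple random sample of households by state, where each respondent's base weight is the number of frame households in the state divided by the number sampled there; the counts and variable names are hypothetical, and nonresponse and other adjustments would further modify these weights before they become the analytic weight.

```r
library(dplyr)

strata <- tibble(
  state      = c("NC", "IL"),
  pop_hh     = c(4200000, 4900000),  # hypothetical households on the frame
  sampled_hh = c(1000, 1000)         # hypothetical households selected
)

sample_data <- tibble(
  resp_id = 1:4,
  state   = c("NC", "NC", "IL", "IL")
)

# Base (selection) weight: how many frame households each sampled household represents
weighted <- sample_data %>%
  left_join(strata, by = "state") %>%
  mutate(base_weight = pop_hh / sampled_hh)
```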
### 2\.6\.3 Disclosure
Before data are released publicly, we need to ensure that individual respondents cannot be identified by the data when confidentiality is required. There are a variety of different methods that can be used. Here we describe a few of the most commonly used methods:
* Data swapping: We may swap specific data values across different respondents so that aggregate insights from the data are preserved but specific individuals cannot be identified.
* Top/bottom coding: We may choose top or bottom coding to mask extreme values. For example, we may top\-code income values such that households with income greater than $500,000 are coded as “$500,000 or more” with other incomes being presented as integers between $0 and $499,999\. This can impact analyses at the tails of the distribution.
* Coarsening: We may use coarsening to mask unique values. For example, a survey question may ask for a precise income but the public data may include income as a categorical variable. Another example commonly used in survey practice is to coarsen geographic variables. Data collectors likely know the precise address of sample members, but the public data may only include the state or even region of respondents.
* Perturbation: We may add random noise to outcomes. As with swapping, this is done so that it does not impact insights from the data but ensures that specific individuals cannot be identified.
There is as much art as there is science to the methods used for disclosure limitation. Only high\-level comments about the disclosure methods are provided in the survey documentation, not specific details. This ensures nobody can reverse the protections and thus identify individuals. For more information on different disclosure methods, please see Skinner ([2009](#ref-Skinner2009)) and the [AAPOR Standards](https://aapor.org/standards-and-ethics/disclosure-standards/).
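As a minimal sketch of how two of these methods, top coding and coarsening, might be applied before a public release, the code below top\-codes a hypothetical `income` variable at $500,000 and coarsens a hypothetical `zip` variable to state; the variable names and the ZIP\-to\-state mapping are illustrative only.

```r
library(dplyr)

confidential <- tibble(
  income = c(45000, 120000, 750000, 30000),
  zip    = c("27514", "27601", "60614", "60622")
)

public_use <- confidential %>%
  mutate(
    # Top coding: mask extreme incomes above the cutoff
    income_public = if_else(income >= 500000,
                            "$500,000 or more",
                            as.character(income)),
    # Coarsening: replace precise geography with a broader category
    state = case_when(
      substr(zip, 1, 2) %in% c("27", "28") ~ "North Carolina",
      substr(zip, 1, 2) %in% c("60", "61") ~ "Illinois"
    )
  ) %>%
  select(-income, -zip)  # drop the detailed values from the public file
```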
### 2\.6\.4 Documentation
Documentation is a critical step of the survey life cycle. We should systematically record all the details, decisions, procedures, and methodologies to ensure transparency, reproducibility, and the overall quality of survey research.
Proper documentation allows analysts to understand, reproduce, and evaluate the study’s methods and findings. Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) dives into how analysts should use survey data documentation.
2\.7 Post\-survey data analysis and reporting
---------------------------------------------
After completing the survey life cycle, the data are ready for analysts. Chapter [4](c04-getting-started.html#c04-getting-started) continues from this point. For more information on the survey life cycle, please explore the references cited throughout this chapter.
2\.1 Introduction
-----------------
Developing surveys to gather accurate information about populations involves an intricate and time\-intensive process. Researchers can spend months, or even years, developing the study design, questions, and other methods for a single survey to ensure high\-quality data is collected.
Before analyzing survey data, we recommend understanding the entire survey life cycle. This understanding can provide better insight into what types of analyses should be conducted on the data. The survey life cycle consists of the necessary stages to execute a survey project successfully. Each stage influences the survey’s timing, costs, and feasibility, consequently impacting the data collected and how we should analyze them. Figure [2\.1](c02-overview-surveys.html#fig:overview-diag) shows a high\-level overview of the survey process.
FIGURE 2\.1: Overview of the survey process
The survey life cycle starts with a research topic or question of interest (e.g., the impact that childhood trauma has on health outcomes later in life). Drawing from available resources can result in a reduced burden on respondents, lower costs, and faster research outcomes. Therefore, we recommend reviewing existing data sources to determine if data that can address this question are already available. However, if existing data cannot answer the nuances of the research question, we can capture the exact data we need through a questionnaire, or a set of questions.
To gain a deeper understanding of survey design and implementation, we recommend reviewing several pieces of existing literature in detail (e.g., [Biemer and Lyberg 2003](#ref-biemer2003survqual); [Bradburn, Sudman, and Wansink 2004](#ref-Bradburn2004); [Dillman, Smyth, and Christian 2014](#ref-dillman2014mode); [Groves et al. 2009](#ref-groves2009survey); [Tourangeau, Rips, and Rasinski 2000](#ref-Tourangeau2000psych); [Valliant, Dever, and Kreuter 2013](#ref-valliant2013practical)).
2\.2 Searching for public\-use survey data
------------------------------------------
Throughout this book, we use public\-use datasets from different surveys, including the American National Election Studies (ANES), the Residential Energy Consumption Survey (RECS), the National Crime Victimization Survey (NCVS), and the AmericasBarometer surveys.
As mentioned above, we should look for existing data that can provide insights into our research questions before embarking on a new survey. One of the greatest sources of data is the government. For example, in the U.S., we can get data directly from the various statistical agencies such as the U.S. Energy Information Administration or Bureau of Justice Statistics. Other countries often have data available through official statistics offices, such as the Office for National Statistics in the United Kingdom.
In addition to government data, many researchers make their data publicly available through repositories such as the [Inter\-university Consortium for Political and Social Research (ICPSR)](https://www.icpsr.umich.edu/web/pages/ICPSR/ssvd/) or the [Odum Institute Data Archive](https://odum.unc.edu/archive/). Searching these repositories or other compiled lists (e.g., [Analyze Survey Data for Free](https://asdfree.com)) can be an efficient way to identify surveys with questions related to our research topic.
2\.3 Pre\-survey planning
-------------------------
There are multiple things to consider when starting a survey. Errors are the differences between the true values of the variables being studied and the values obtained through the survey. Each step and decision made before the launch of the survey impact the types of errors that are introduced into the data, which in turn impact how to interpret the results.
Generally, survey researchers consider there to be seven main sources of error that fall under either Representation or Measurement ([Groves et al. 2009](#ref-groves2009survey)):
* Representation
+ Coverage Error: A mismatch between the population of interest and the sampling frame, the list from which the sample is drawn.
+ Sampling Error: Error produced when selecting a sample, the subset of the population, from the sampling frame. This error is due to randomization, and we discuss how to quantify this error in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights). There is no sampling error in a census, as there is no randomization. Sampling error reflects how estimates vary across all potential samples that could be drawn under the same sampling method (a small simulation after this list illustrates this variability).
+ Nonresponse Error: Differences between those who responded and did not respond to the survey (unit nonresponse) or a given question (item nonresponse).
+ Adjustment Error: Error introduced during post\-survey statistical adjustments.
* Measurement
+ Validity: A mismatch between the research topic and the question(s) used to collect that information.
+ Measurement Error: A mismatch between what the researcher asked and how the respondent answered.
+ Processing Error: Edits by the researcher to responses provided by the respondent (e.g., adjustments to data based on illogical responses).
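To make sampling error concrete, here is a minimal simulation sketch (the population values are fabricated purely for illustration): repeated samples drawn under the same design give slightly different estimates, and the spread of those estimates is the sampling error.

```r
set.seed(52)

# Hypothetical population: number of pets for 100,000 households
population <- rpois(100000, lambda = 1.2)

# Mean number of pets estimated from 1,000 different simple random samples of 500
sample_means <- replicate(1000, mean(sample(population, size = 500)))

sd(sample_means)  # the spread of estimates across potential samples
```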
Almost every survey has errors. Researchers attempt to conduct a survey that reduces the total survey error, or the accumulation of all errors that may arise throughout the survey life cycle. By assessing these different types of errors together, researchers can seek strategies to maximize the overall survey quality and improve the reliability and validity of results ([Biemer 2010](#ref-tse-doc)). However, attempts to reduce individual source errors (and therefore total survey error) come at the price of time and money. For example:
* Coverage Error Tradeoff: Researchers can search for or create more accurate and updated sampling frames, but they can be difficult to construct or obtain.
* Sampling Error Tradeoff: Researchers can increase the sample size to reduce sampling error; however, larger samples can be expensive and time\-consuming to field.
* Nonresponse Error Tradeoff: Researchers can increase or diversify efforts to improve survey participation, but this may be resource\-intensive while not entirely removing nonresponse bias.
* Adjustment Error Tradeoff: Weighting is a statistical technique used to adjust the contribution of individual survey responses to the final survey estimates. It is typically done to make the sample more representative of the population of interest. However, if researchers do not carefully execute the adjustments or base them on inaccurate information, they can introduce new biases, leading to less accurate estimates.
* Validity Error Tradeoff: Researchers can increase validity through a variety of ways, such as using established scales or collaborating with a psychometrician during survey design to pilot and evaluate questions. However, doing so increases the amount of time and resources needed to complete survey design.
* Measurement Error Tradeoff: Researchers can use techniques such as questionnaire testing and cognitive interviewing to ensure respondents are answering questions as expected. However, these activities require time and resources to complete.
* Processing Error Tradeoff: Researchers can impose rigorous data cleaning and validation processes. However, this requires supervision, training, and time.
The challenge for survey researchers is to find the optimal tradeoffs among these errors. They must carefully consider ways to reduce each error source and total survey error while balancing their study’s objectives and resources.
For survey analysts, understanding the decisions that researchers took to minimize these error sources can impact how results are interpreted. The remainder of this chapter explores critical considerations for survey development. We explore how to consider each of these sources of error and how these error sources can inform the interpretations of the data.
2\.4 Study design
-----------------
From formulating methodologies to choosing an appropriate sampling frame, the study design phase is where the blueprint for a successful survey takes shape. Study design encompasses multiple parts of the survey life cycle, including decisions on the population of interest, survey mode (the format through which a survey is administered to respondents), timeline, and questionnaire design. Knowing who and how to survey individuals depends on the study’s goals and the feasibility of implementation. This section explores the strategic planning that lays the foundation for a survey.
### 2\.4\.1 Sampling design
The set or group we want to survey is known as the population of interest or the target population. The population of interest could be broad, such as “all adults age 18\+ living in the U.S.” or a specific population based on a particular characteristic or location. For example, we may want to know about “adults aged 18–24 who live in North Carolina” or “eligible voters living in Illinois.”
However, a sampling frame with contact information is needed to survey individuals in these populations of interest. If we are looking at eligible voters, the sampling frame could be the voting registry for a given state or area. If we are looking at broader populations of interest, like all adults in the United States, the sampling frame is likely imperfect. In these cases, a full list of individuals in the United States is not available for a sampling frame. Instead, we may choose to use a sampling frame of mailing addresses and send the survey to households, or we may choose to use random digit dialing (RDD) and call random phone numbers (that may or may not be assigned, connected, and working).
These imperfect sampling frames can result in coverage error where there is a mismatch between the population of interest and the list of individuals we can select. For example, if we are looking to obtain estimates for “all adults aged 18\+ living in the U.S.,” a sampling frame of mailing addresses will miss specific types of individuals, such as the homeless, transient populations, and incarcerated individuals. Additionally, many households have more than one adult resident, so we would need to consider how to get a specific individual to fill out the survey (called within household selection) or adjust the population of interest to report on “U.S. households” instead of “individuals.”
Once we have selected the sampling frame, the next step is determining how to select individuals for the survey. In rare cases, we may conduct a census and survey everyone on the sampling frame. However, the ability to implement a questionnaire at that scale is something only a few can do (e.g., government censuses). Instead, we typically choose to sample individuals and use weights to estimate numbers in the population of interest. We can use a variety of different sampling methods, and more information on these can be found in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights). This decision of which sampling method to use impacts sampling error and can be accounted for in weighting.
#### Example: Number of pets in a household
Let’s use a simple example where we are interested in the average number of pets in a household. We need to consider the population of interest for this study. Specifically, are we interested in all households in a given country or households in a more local area (e.g., city or state)? Let’s assume we are interested in the number of pets in a U.S. household with at least one adult (18 years or older). In this case, a sampling frame of mailing addresses would introduce only a small amount of coverage error as the frame would closely match our population of interest. Specifically, we would likely want to use the Computerized Delivery Sequence File (CDSF), which is a file of mailing addresses that the United States Postal Service (USPS) creates and covers nearly 100% of U.S. households ([Harter et al. 2016](#ref-harter2016address)). To sample these households, for simplicity, we use a stratified simple random sample design (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on sample designs), where we randomly sample households within each state (i.e., we stratify by state).
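A minimal sketch of this stratified simple random sample design is shown below; the frame, the state variable, and the within\-state sample size are all hypothetical.

```r
library(dplyr)

set.seed(2023)

# Hypothetical sampling frame of mailing addresses with a state identifier
frame <- tibble(
  address_id = 1:10000,
  state      = sample(state.abb, 10000, replace = TRUE)
)

# Stratify by state and draw a simple random sample of addresses within each state
stratified_sample <- frame %>%
  group_by(state) %>%
  slice_sample(n = 20) %>%
  ungroup()
```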
Throughout this chapter, we build on this example research question to plan a survey.
### 2\.4\.2 Data collection planning
With the sampling design decided, researchers can then decide how to survey these individuals. Specifically, the modes used for contacting and surveying the sample, how frequently to send reminders and follow\-ups, and the overall timeline of the study are some of the major data collection determinations. Traditionally, survey researchers have considered there to be four main modes[1](#fn1):
* Computer\-Assisted Personal Interview (CAPI; also known as face\-to\-face or in\-person interviewing)
* Computer\-Assisted Telephone Interview (CATI; also known as phone or telephone interviewing)
* Computer\-Assisted Web Interview (CAWI; also known as web or online interviewing)
* Paper and Pencil Interview (PAPI)
We can use a single mode to collect data or multiple modes (also called mixed\-modes). Using mixed\-modes can allow for broader reach and increase response rates depending on the population of interest ([Biemer et al. 2017](#ref-biemer_choiceplus); [DeLeeuw 2005](#ref-deLeeuw2005), [2018](#ref-DeLeeuw_2018)). For example, we could both call households to conduct a CATI survey and send mail with a PAPI survey to the household. By using both modes, we could gain participation through the mail from individuals who do not pick up the phone to unknown numbers or through the phone from individuals who do not open all of their mail. However, mode effects (where responses differ based on the mode of response) can be present in the data and may need to be considered during analysis.
When selecting which mode, or modes, to use, understanding the unique aspects of the chosen population of interest and sampling frame provides insight into how they can best be reached and engaged. For example, if we plan to survey adults aged 18–24 who live in North Carolina, asking them to complete a survey using CATI (i.e., over the phone) would likely not be as successful as other modes like the web. This age group does not talk on the phone as much as other generations and often does not answer phone calls from unknown numbers. Additionally, the mode for contacting respondents relies on what information is available in the sampling frame. For example, if our sampling frame includes an email address, we could email our selected sample members to convince them to complete a survey. Alternatively, if the sampling frame is a list of mailing addresses, we could contact sample members with a letter.
It is important to note that there can be a difference between the contact and survey modes. For example, if we have a sampling frame with addresses, we can send a letter to our sample members and provide information on completing a web survey. Another option is using mixed\-mode surveys by mailing sample members a paper and pencil survey but also including instructions to complete the survey online. Combining different contact modes and different survey modes can be helpful in reducing unit nonresponse error, where the entire unit (e.g., a household) does not respond to the survey at all, as different sample members may respond better to different contact and survey modes. However, when considering which modes to use, it is important to make access to the survey as easy as possible for sample members to reduce burden and unit nonresponse.
Another way to reduce unit nonresponse error is by varying the language of the contact materials ([Dillman, Smyth, and Christian 2014](#ref-dillman2014mode)). People are motivated by different things, so constantly repeating the same message may not be helpful. Instead, mixing up the messaging and the type of contact material the sample member receives can increase response rates and reduce the unit nonresponse error. For example, instead of only sending standard letters, we could consider sending mailings that invoke “urgent” or “important” thoughts by sending priority letters or using other delivery services like FedEx, UPS, or DHL.
A study timeline may also determine the number and types of contacts. If the timeline is long, there is plentiful time for follow\-ups and diversified messages in contact materials. If the timeline is short, then fewer follow\-ups can be implemented. Many studies start with the tailored design method put forth by Dillman, Smyth, and Christian ([2014](#ref-dillman2014mode)) and implement five contacts:
* Pre\-notification (Pre\-notice) to let sample members know the survey is coming
* Invitation to complete the survey
* Reminder that also thanks respondents who have already completed the survey
* Reminder (with a replacement paper survey if needed)
* Final reminder
This method is easily adaptable based on the study timeline and needs but provides a starting point for most studies.
#### Example: Number of pets in a household
Let’s return to our example of the average number of pets in a household. We are using a sampling frame of mailing addresses, so we recommend starting our data collection with letters mailed to households, but later in data collection, we want to send interviewers to the house to conduct an in\-person (or CAPI) interview to decrease unit nonresponse error. This means we have two contact modes (paper and in\-person). As mentioned above, the survey mode does not have to be the same as the contact mode, so we recommend a mixed\-mode study with both web and CAPI modes. Let’s assume we have 6 months for data collection, so we could recommend Table [2\.1](c02-overview-surveys.html#tab:prot-examp)’s protocol:
TABLE 2\.1: Protocol example for 6\-month web and CAPI data collection
| Week | Contact Mode | Contact Message | Survey Mode Offered |
| --- | --- | --- | --- |
| 1 | Mail: Letter | Pre\-notice | — |
| 2 | Mail: Letter | Invitation | Web |
| 3 | Mail: Postcard | Thank You/Reminder | Web |
| 6 | Mail: Letter in large envelope | Animal Welfare Discussion | Web |
| 10 | Mail: Postcard | Inform Upcoming In\-Person Visit | Web |
| 14 | In\-Person Visit | — | CAPI |
| 16 | Mail: Letter | Reminder of In\-Person Visit | Web, but includes a number to call to schedule CAPI |
| 20 | In\-Person Visit | — | CAPI |
| 25 | Mail: Letter in large envelope | Survey Closing Notice | Web, but includes a number to call to schedule CAPI |
This is just one possible protocol that we can use that starts respondents with the web (typically done to reduce costs). However, we could begin in\-person data collection earlier during the data collection period or ask interviewers to attempt more than two visits with a household.
### 2\.4\.3 Questionnaire design
When developing the questionnaire, it can be helpful to first outline the topics to be asked and note why each question or topic is important to the research question(s). This can help us better tailor the questionnaire and reduce the number of questions (and thus the burden on the respondent) if topics are deemed irrelevant to the research question. When making these decisions, we should also consider questions needed for weighting. While we would love to have everyone in our population of interest answer our survey, this rarely happens. Thus, including questions about demographics in the survey can assist with weighting for nonresponse errors (both unit and item nonresponse). Knowing the details of the sampling plan and what may impact coverage error and sampling error can help us determine what types of demographics to include. For this reason, questionnaire design is typically done in conjunction with sampling design.
We can benefit from the work of others by using questions from other surveys. Demographic questions, such as those on race, ethnicity, or education, are often borrowed from a government census or other official surveys. Question banks such as the [ICPSR variable search](https://www.icpsr.umich.edu/web/pages/ICPSR/ssvd/) can provide additional potential questions.
If a question does not exist in a question bank, we can craft our own. When developing survey questions, we should start with the research topic and attempt to write questions that match the concept. The closer the question asked is to the overall concept, the better validity there is. For example, if we want to know how people consume T.V. series and movies but only ask a question about how many T.V.s are in the house, then we would be missing other ways that people watch T.V. series and movies, such as on other devices or at places outside of the home. As mentioned above, we can employ techniques to increase the validity of questionnaires. For example, questionnaire testing involves piloting the survey instrument to identify and fix potential issues before conducting the main survey. Additionally, we could conduct cognitive interviews – a technique where we walk through the survey with participants, encouraging them to speak their thoughts out loud to uncover how they interpret and understand survey questions.
Additionally, when designing questions, we should consider the mode for the survey and adjust the language appropriately. In self\-administered surveys (e.g., web or mail), respondents can see all the questions and response options, but that is not the case in interviewer\-administered surveys (e.g., CATI or CAPI). With interviewer\-administered surveys, the response options must be read aloud to the respondents, so the question may need to be adjusted to create a better flow to the interview. Additionally, with self\-administered surveys, because the respondents are viewing the questionnaire, the formatting of the questions is even more critical to ensure accurate measurement. Incorrect formatting or wording can result in measurement error, so following best practices or using existing validated questions can reduce error. There are multiple resources to help researchers draft questions for different modes (e.g., [Bradburn, Sudman, and Wansink 2004](#ref-Bradburn2004); [Dillman, Smyth, and Christian 2014](#ref-dillman2014mode); [Fowler and Mangione 1989](#ref-Fowler1989); [Tourangeau, Couper, and Conrad 2004](#ref-Tourangeau2004spacing)).
#### Example: Number of pets in a household
As part of our survey on the average number of pets in a household, we may want to know what animal most people prefer to have as a pet. Let’s say we have a question in our survey as displayed in Figure [2\.2](c02-overview-surveys.html#fig:overview-pet-examp1).
FIGURE 2\.2: Example question asking pet preference type
This question may have validity issues as it only provides the options of “dogs” and “cats” to respondents, and the interpretation of the data could be incorrect. For example, if we had 100 respondents who answered the question and 50 selected dogs, then the results of this question cannot be “50% of the population prefers to have a dog as a pet,” as only two response options were provided. If a respondent taking our survey prefers turtles, they could either be forced to choose a response between these two (i.e., interpret the question as “between dogs and cats, which do you prefer?” and result in measurement error), or they may not answer the question (which results in item nonresponse error). Based on this, the interpretation of this question should be, “When given a choice between dogs and cats, 50% of respondents preferred to have a dog as a pet.”
To avoid this issue, we should consider these possibilities and adjust the question accordingly. One simple way could be to add an “other” response option to give respondents a chance to provide a different response. The “other” response option could then include a way for respondents to write their other preference. For example, we could rewrite this question as displayed in Figure [2\.3](c02-overview-surveys.html#fig:overview-pet-examp2).
FIGURE 2\.3: Example question asking pet preference type with other specify option
We can then code the responses from the open\-ended box and get a better understanding of the respondent’s choice of preferred pet. Interpreting this question becomes easier as researchers no longer need to qualify the results with the choices provided.
This is a simple example of how the presentation of the question and options can impact the findings. For more complex topics and questions, we must thoroughly consider how to mitigate any impacts from the presentation, formatting, wording, and other aspects. For survey analysts, reviewing not only the data but also the wording of the questions is crucial to ensure the results are presented in a manner consistent with the question asked. Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) provides further details on how to review existing survey documentation to inform our analyses, and Chapter [8](c08-communicating-results.html#c08-communicating-results) goes into more details on communicating results.
2\.5 Data collection
--------------------
Once the data collection starts, we try to stick to the data collection protocol designed during pre\-survey planning. However, effective researchers also prepare to adjust their plans and adapt as needed to the current progress of data collection ([Schouten, Peytchev, and Wagner 2018](#ref-Schouten2018)). An extreme example could be a natural disaster that prevents mailings from going out or interviewers from reaching sample members; an in\-person survey might then need to pivot quickly to a self\-administered mode, or the field period might be delayed. Other adjustments could be smaller, such as choosing to highlight a newsworthy event connected to the survey topic in the communication materials. In addition to these external factors, there could be factors unique to the survey, such as lower response rates for a specific subgroup, in which case the data collection protocol may need to find ways to improve response rates for that specific group.
2\.6 Post\-survey processing
----------------------------
After data collection, various activities need to be completed before we can analyze the survey. Multiple decisions made during this post\-survey phase can assist us in reducing different error sources, such as weighting to account for the sample selection. Knowing the decisions made in creating the final analytic data can impact how we use the data and interpret the results.
### 2\.6\.1 Data cleaning and imputation
Post\-survey cleaning is one of the first steps we do to get the survey responses into an analytic dataset. Data cleaning can consist of correcting inconsistent data (e.g., with skip pattern errors or multiple questions throughout the survey being consistent with each other), editing numeric entries or open\-ended responses for grammar and consistency, or recoding open\-ended questions into categories for analysis. There is no universal set of fixed rules that every survey must adhere to. Instead, each survey or research study should establish its own guidelines and procedures for handling various cleaning scenarios based on its specific objectives.
We should use our best judgment to ensure data integrity, and all decisions should be documented and available to those using the data in the analysis. Each decision we make impacts processing error, so often, multiple people review these rules or recode open\-ended data and adjudicate any differences in an attempt to reduce this error.
Another crucial step in post\-survey processing is imputation. Often, there is item nonresponse where respondents do not answer specific questions. If the questions are crucial to analysis efforts or the research question, we may implement imputation to reduce item nonresponse error. Imputation is a technique for replacing missing or incomplete data values with estimated values. However, as imputation is a way of assigning values to missing data based on an algorithm or model, it can also introduce processing error, so we should consider the overall implications of imputing data compared to having item nonresponse. There are multiple ways to impute data. We recommend reviewing other resources like Kim and Shao ([2021](#ref-Kim2021)) for more information.
#### Example: Number of pets in a household
Let’s return to the question we created to ask about [animal preference](c02-overview-surveys.html#overview-design-questionnaire-ex). The “other specify” invites respondents to specify the type of animal they prefer to have as a pet. If respondents entered answers such as “puppy,” “turtle,” “rabit,” “rabbit,” “bunny,” “ant farm,” “snake,” “Mr. Purr,” then we may wish to categorize these write\-in responses to help with analysis. In this example, “puppy” could be assumed to be a reference to a “Dog” and could be recoded there. The misspelling of “rabit” could be coded along with “rabbit” and “bunny” into a single category of “Bunny or Rabbit.” These are relatively standard decisions that we can make. The remaining write\-in responses could be categorized in a few different ways. “Mr. Purr,” which may be someone’s reference to their own cat, could be recoded as “Cat,” or it could remain as “Other” or some category that is “Unknown.” Depending on the number of responses related to each of the others, they could all be combined into a single “Other” category, or maybe categories such as “Reptiles” or “Insects” could be created. Each of these decisions may impact the interpretation of the data, so we should document the types of responses that fall into each of the new categories and any decisions made.
### 2\.6\.2 Weighting
We can address some error sources identified in the previous sections using weighting. During the weighting process, weights are created for each respondent record. These weights allow the survey responses to generalize to the population. A weight, generally, reflects how many units in the population each respondent represents. Often, the weight is constructed such that the sum of the weights is the size of the population.
Weights can address coverage, sampling, and nonresponse errors. Many published surveys include an “analysis weight” variable that combines these adjustments. However, weighting itself can also introduce adjustment error, so we need to balance which types of errors should be corrected with weighting. The construction of weights is outside the scope of this book; we recommend referencing other materials if interested in weight construction ([Valliant and Dever 2018](#ref-Valliant2018weights)). Instead, this book assumes the survey has been completed, weights are constructed, and data are available to users.
#### Example: Number of pets in a household
In the simple example of our survey, we decided to obtain a random sample from each state to select our sample members. Knowing this sampling design, we can include selection weights for analysis that account for how the sample members were selected for the survey. Additionally, the sampling frame may have the type of building associated with each address, so we could include the building type as a potential nonresponse weighting variable, along with some interviewer observations that may be related to our research topic of the average number of pets in a household. Combining these weights, we can create an analytic weight that analysts need to use when analyzing the data.
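As a rough illustration of the logic (not the actual weighting procedure for this survey), the sketch below builds a selection weight as the number of frame addresses each respondent represents in their state and then applies a hypothetical nonresponse adjustment; all counts and adjustment factors are made up.
```
library(dplyr)

# Hypothetical frame counts, sample sizes, and nonresponse adjustments by state
state_info <- tibble(
  state       = c("GA", "NC", "SC"),
  frame_size  = c(4000000, 3800000, 1900000),  # addresses on the frame
  sample_size = c(400, 380, 190),              # addresses sampled per state
  nr_adjust   = c(1.25, 1.18, 1.30)            # nonresponse adjustment factor
)

# Selection weight = frame size / sample size; the analytic weight applies the
# nonresponse adjustment on top of the selection weight
state_weights <- state_info %>%
  mutate(
    selection_weight = frame_size / sample_size,
    analytic_weight  = selection_weight * nr_adjust
  )
```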
### 2\.6\.3 Disclosure
Before data are released publicly, we need to ensure that individual respondents cannot be identified from the data when confidentiality is required. There are a variety of different methods that can be used. Here we describe a few of the most commonly used:
* Data swapping: We may swap specific data values across different respondents so that it does not impact insights from the data but ensures that specific individuals cannot be identified.
* Top/bottom coding: We may choose top or bottom coding to mask extreme values (see the sketch after this list). For example, we may top\-code income values such that households with income greater than $500,000 are coded as “$500,000 or more” with other incomes being presented as integers between $0 and $499,999\. This can impact analyses at the tails of the distribution.
* Coarsening: We may use coarsening to mask unique values. For example, a survey question may ask for a precise income but the public data may include income as a categorical variable. Another example commonly used in survey practice is to coarsen geographic variables. Data collectors likely know the precise address of sample members, but the public data may only include the state or even region of respondents.
* Perturbation: We may add random noise to outcomes. As with swapping, this is done so that it does not impact insights from the data but ensures that specific individuals cannot be identified.
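Below is a minimal sketch of what top\-coding and coarsening an income variable could look like in practice; the income values, cut points, and labels are all hypothetical.
```
library(dplyr)

# Hypothetical reported incomes
income_dat <- tibble(income = c(35000, 82000, 250000, 650000))

# Top-code incomes at $500,000 and also coarsen into broad categories
income_public <- income_dat %>%
  mutate(
    income_topcoded = if_else(income >= 500000,
                              "$500,000 or more",
                              as.character(income)),
    income_category = cut(
      income,
      breaks = c(0, 50000, 100000, 500000, Inf),
      labels = c("Under $50,000", "$50,000-$99,999",
                 "$100,000-$499,999", "$500,000 or more"),
      right  = FALSE
    )
  )
```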
There is as much art as there is science to the methods used for disclosure. Only high\-level comments about the disclosure are provided in the survey documentation, not specific details. This ensures nobody can reverse the disclosure and thus identify individuals. For more information on different disclosure methods, please see Skinner ([2009](#ref-Skinner2009)) and the [AAPOR Standards](https://aapor.org/standards-and-ethics/disclosure-standards/).
### 2\.6\.4 Documentation
Documentation is a critical step of the survey life cycle. We should systematically record all the details, decisions, procedures, and methodologies to ensure transparency, reproducibility, and the overall quality of survey research.
Proper documentation allows analysts to understand, reproduce, and evaluate the study’s methods and findings. Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) dives into how analysts should use survey data documentation.
2\.7 Post\-survey data analysis and reporting
---------------------------------------------
After completing the survey life cycle, the data are ready for analysts. Chapter [4](c04-getting-started.html#c04-getting-started) continues from this point. For more information on the survey life cycle, please explore the references cited throughout this chapter.
| Social Science |
tidy-survey-r.github.io | https://tidy-survey-r.github.io/tidy-survey-book/c03-survey-data-documentation.html |
Chapter 3 Survey data documentation
===================================
3\.1 Introduction
-----------------
Survey documentation helps us prepare before we look at the actual survey data. The documentation includes technical guides, questionnaires, codebooks, errata, and other useful resources. By taking the time to review these materials, we can gain a comprehensive understanding of the survey data (including research and design decisions discussed in Chapters [2](c02-overview-surveys.html#c02-overview-surveys) and [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights)) and conduct our analysis more effectively.
Survey documentation can vary in organization, type, and ease of use. The information may be stored in any format—PDFs, Excel spreadsheets, Word documents, and so on. Some surveys bundle documentation together, such as providing the codebook and questionnaire in a single document. Others keep them in separate files. Despite these variations, we can gain a general understanding of the documentation types and what aspects to focus on in each.
3\.2 Types of survey documentation
----------------------------------
### 3\.2\.1 Technical documentation
The technical documentation, also known as user guides or methodology/analysis guides, highlights the variables necessary to specify the survey design. We recommend concentrating on these key sections:
* Introduction: The introduction orients us to the survey. This section provides the project’s background, the study’s purpose, and the main research questions.
* Study design: The study design section describes how researchers prepared and administered the survey.
* Sample: The sample section describes the sample frame, any known sampling errors, and limitations of the sample. This section can contain recommendations on how to use sampling weights. Look for weight information and whether the survey design contains strata, clusters/PSUs, or replicate weights. Also, look for population sizes, finite population correction, or replicate weight scaling information. Additional detail on sample designs is available in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
* Notes on fielding: Any additional notes on fielding, such as response rates, may be found in the technical documentation.
The technical documentation may include other helpful resources. For example, some technical documentation includes syntax for SAS, SUDAAN, Stata, and/or R, so we do not have to create this code from scratch.
### 3\.2\.2 Questionnaires
A questionnaire is a series of questions used to collect information from people in a survey. It can ask about opinions, behaviors, demographics, or even just numbers like the count of lightbulbs, square footage, or farm size. Questionnaires can employ different types of questions, such as closed\-ended (e.g., select one or check all that apply), open\-ended (e.g., numeric or text), Likert scales (e.g., a 5\- or 7\-point scale specifying a respondent’s level of agreement to a statement), or ranking questions (e.g., a list of options that a respondent ranks by preference). It may randomize the display order of responses or include instructions that help respondents understand the questions. A survey may have one questionnaire or multiple, depending on its scale and scope.
The questionnaire is another important resource for understanding and interpreting the survey data (see Section [2\.4\.3](c02-overview-surveys.html#overview-design-questionnaire)), and we should use it alongside any analysis. It provides details about each of the questions asked in the survey, such as question name, question wording, response options, skip logic, randomizations, display specifications, mode differences, and the universe (the subset of respondents who were asked a question).
In Figure [3\.1](c03-survey-data-documentation.html#fig:understand-que-examp), we show an example from the American National Election Studies (ANES) 2020 questionnaire ([American National Election Studies 2021](#ref-anes-svy)). The figure shows the question name (`POSTVOTE_RVOTE`), description (Did R Vote?), full wording of the question and responses, response order, universe, question logic (this question was only asked if `vote_pre` \= 0\), and other specifications. The section also includes the variable name, which we can link to the codebook.
FIGURE 3\.1: ANES 2020 questionnaire example
The content and structure of questionnaires vary depending on the specific survey. For instance, question names may be informative (like the ANES example above), sequential, or denoted by a code. In some cases, surveys may not use separate names for questions and variables. Figure [3\.2](c03-survey-data-documentation.html#fig:understand-que-examp-2) shows an example from the Behavioral Risk Factor Surveillance System (BRFSS) questionnaire that shows a sequential question number and a coded variable name (as opposed to a question name) ([Centers for Disease Control and Prevention (CDC) 2021](#ref-brfss-svy)).
FIGURE 3\.2: BRFSS 2021 questionnaire example
We should factor in the details of a survey when conducting our analyses. For example, surveys that use various modes (e.g., web and mail) may have differences in question wording or skip logic, as web surveys can include fills or automate skip logic. If large enough, these variations could warrant separate analyses for each mode.
### 3\.2\.3 Codebooks
While a questionnaire provides information about the questions posed to respondents, the codebook explains how the survey data were coded and recorded. It lists details such as variable names, variable labels, variable meanings, codes for missing data, value labels, and value types (whether categorical, continuous, etc.). The codebook helps us understand and use the variables appropriately in our analysis. In particular, the codebook (as opposed to the questionnaire) often includes information on missing data. Note that the term data dictionary is sometimes used interchangeably with codebook, but a data dictionary may include more details on the structure and elements of the data.
Figure [3\.3](c03-survey-data-documentation.html#fig:understand-codebook-examp) is a question from the ANES 2020 codebook ([American National Election Studies 2022](#ref-anes-cb)). This section indicates a variable’s name (`V202066`), question wording, value labels, universe, and associated survey question (`POSTVOTE_RVOTE`).
FIGURE 3\.3: ANES 2020 codebook example
Reviewing the questionnaires and codebooks in parallel can clarify how to interpret the variables (Figures [3\.1](c03-survey-data-documentation.html#fig:understand-que-examp) and [3\.3](c03-survey-data-documentation.html#fig:understand-codebook-examp)), as questions and variables do not always correspond directly to each other in a one\-to\-one mapping. A single question may have multiple associated variables, or a single variable may summarize multiple questions.
### 3\.2\.4 Errata
An erratum (singular) or errata (plural) is a document that lists errors found in a publication or dataset. The purpose of an erratum is to correct or update inaccuracies in the original document. Examples of errata include:
* Issuing a corrected data table after realizing a typo or mistake in a table cell
* Reporting incorrectly programmed skips in an electronic survey where questions are skipped by the respondent when they should not have been
For example, the 2004 ANES dataset released an erratum, notifying analysts to remove a specific row from the data file due to the inclusion of a respondent who should not have been part of the sample. Adhering to an issued erratum helps us increase the accuracy and reliability of analysis.
### 3\.2\.5 Additional resources
Survey documentation may include additional material, such as interviewer instructions or “show cards” provided to respondents during interviewer\-administered surveys to help respondents answer questions. Explore the survey website to find out what resources were used and in what contexts.
3\.3 Missing data coding
------------------------
Some observations in a dataset may have missing data. This can be due to design or nonresponse, and these concepts are detailed in Chapter [11](c11-missing-data.html#c11-missing-data). In that chapter, we also discuss how to analyze data with missing values. This chapter walks through how to understand documentation related to missing data.
The survey documentation, often the codebook, represents the missing data with a code. The codebook may list different codes depending on why certain data points are missing. In the example of variable `V202066` from the ANES (Figure [3\.3](c03-survey-data-documentation.html#fig:understand-codebook-examp)), `-9` represents “Refused,” `-7` means that the response was deleted due to an incomplete interview, `-6` means that there is no response because there was no follow\-up interview, and `-1` means “Inapplicable” (due to a designed skip pattern).
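When preparing such a variable for analysis, these codes typically need to be converted to `NA` (or handled explicitly). A minimal sketch with hypothetical values is shown below; the ANES codebook, not this code, is the authority on which codes exist and what they mean.
```
library(dplyr)

# Hypothetical values for V202066, mixing valid responses and missing codes
anes_subset <- tibble(V202066 = c(4, -9, 1, -1, 2, -7, -6))

# Treat all negative codes as missing for a simple analysis; a more careful
# approach would distinguish refusals from designed skips (see Chapter 11)
anes_clean <- anes_subset %>%
  mutate(V202066_clean = if_else(V202066 < 0, NA_real_, as.numeric(V202066)))
```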
As another example, there may be a summary variable that describes the missingness of a set of variables — particularly with “select all that apply” or “multiple response” questions. In the National Crime Victimization Survey (NCVS), respondents who are victims of a crime and saw the offender are asked if the offender had a weapon and then asked what the type of weapon was. This part of the questionnaire from 2021 is shown in Figure [3\.4](c03-survey-data-documentation.html#fig:understand-ncvs-weapon-q) ([U. S. Bureau of Justice Statistics 2020](#ref-ncvs_survey_2020)).
FIGURE 3\.4: Excerpt from the NCVS 2020\-2021 Crime Incident Report \- Weapon Type
For these multiple response variables (select all that apply), the NCVS codebook includes what they call a “lead\-in” variable that summarizes the response. This lead\-in variable provides metadata on how a respondent answered the question. For example, for question 23a on weapon type, the lead\-in variable V4050 (shown in Figure [3\.5](c03-survey-data-documentation.html#fig:understand-ncvs-weapon-cb)) indicates the quality and type of response ([U. S. Bureau of Justice Statistics 2022](#ref-ncvs_cb_2020)). In the codebook, this variable is then followed by a set of variables for each weapon type. An example of one of the individual variables from the codebook, the handgun (V4051\), is shown in Figure [3\.6](c03-survey-data-documentation.html#fig:understand-ncvs-weapon-cb-hg) ([U. S. Bureau of Justice Statistics 2022](#ref-ncvs_cb_2020)). We will dive into how to analyze this variable in Chapter [11](c11-missing-data.html#c11-missing-data).
FIGURE 3\.5: Excerpt from the NCVS 2021 Codebook for V4050 \- LI WHAT WAS WEAPON
FIGURE 3\.6: Excerpt from the NCVS 2021 Codebook for V4051 \- C WEAPON: HAND GUN
When data are read into R, some values may be system missing; that is, they are coded as `NA` even if that is not evident in a codebook. We discuss in Chapter [11](c11-missing-data.html#c11-missing-data) how to analyze data with `NA` values and review how R handles missing data in calculations.
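As a quick reminder of R’s default behavior, most summary functions propagate `NA` unless told otherwise:
```
x <- c(2, NA, 5)

mean(x)               # returns NA because one value is missing
mean(x, na.rm = TRUE) # returns 3.5, ignoring the missing value
```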
3\.4 Example: ANES 2020 survey documentation
--------------------------------------------
Let’s look at the survey documentation for the ANES 2020 and the documentation from their [website](https://electionstudies.org/data-center/2020-time-series-study/). Navigating to “User Guide and Codebook” ([American National Election Studies 2022](#ref-anes-cb)), we can download the PDF that contains the survey documentation, titled “ANES 2020 Time Series Study Full Release: User Guide and Codebook.” Do not be daunted by the 796\-page PDF. Below, we focus on the most critical information.
#### Introduction
The first section in the User Guide explains that the ANES 2020 Time Series Study continues a series of election surveys conducted since 1948\. These surveys contain data on public opinion and voting behavior in the U.S. presidential elections. The introduction also includes information about the modes used for data collection (web, live video interviewing, or CATI). Additionally, there is a summary of the number of pre\-election interviews (8,280\) and post\-election re\-interviews (7,449\).
#### Sample design and respondent recruitment
The section “Sample Design and Respondent Recruitment” provides more detail about the survey’s sequential mixed\-mode design. All three modes were conducted one after another and not at the same time. Additionally, it indicates that for the 2020 survey, they resampled all respondents who participated in the 2016 ANES, along with a newly drawn cross\-section:
> The target population for the fresh cross\-section was the 231 million non\-institutional U.S. citizens aged 18 or older living in the 50 U.S. states or the District of Columbia.
The document continues with more details on the sample groups.
#### Data analysis, weights, and variance estimation
The section “Data Analysis, Weights, and Variance Estimation” includes information on weights and strata/cluster variables. Reading through, we can find the full sample weight variables:
> For analysis of the complete set of cases using pre\-election data only, including all cases and representative of the 2020 electorate, use the full sample pre\-election weight, **V200010a**. For analysis including post\-election data for the complete set of participants (i.e., analysis of post\-election data only or a combination of pre\- and post\-election data), use the full sample post\-election weight, **V200010b**. Additional weights are provided for analysis of subsets of the data…
The document provides more information about the design variables, summarized in Table [3\.1](c03-survey-data-documentation.html#tab:aneswgts).
TABLE 3\.1: Weight and variance information for ANES
| For weight | Variance unit/cluster | Variance stratum |
| --- | --- | --- |
| V200010a | V200010c | V200010d |
| V200010b | V200010c | V200010d |
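As a preview of how these variables come together, the sketch below specifies a design object with {srvyr} using the Table [3\.1](c03-survey-data-documentation.html#tab:aneswgts) post\-election variables; the tiny data frame stands in for the real ANES file, and Chapter [4](c04-getting-started.html#c04-getting-started) walks through the full setup.
```
library(srvyr)
library(dplyr)

# Stand-in for the ANES file with the original variable names (values made up)
anes_raw <- tibble(
  V200010b = c(0.9, 1.1, 1.0, 1.2),  # full sample post-election weight
  V200010c = c(1, 2, 1, 2),          # variance unit/cluster
  V200010d = c(1, 1, 2, 2)           # variance stratum
)

# Specify the post-election design with weights, strata, and clusters
anes_des <- anes_raw %>%
  as_survey_design(
    weights = V200010b,
    strata  = V200010d,
    ids     = V200010c,
    nest    = TRUE
  )
```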
### Methodology
The user guide mentions a supplemental document called “How to Analyze ANES Survey Data” ([DeBell 2010](#ref-debell)) as a how\-to guide for analyzing the data. In this document, we learn more about the weights, including that they sum to the sample size rather than the population total. If our goal is to calculate estimates for the entire U.S. population instead of just the sample, we must adjust the weights to the U.S. population. To create accurate weights for the population, we need to determine the total population size at the time of the survey. Let’s review the “Sample Design and Respondent Recruitment” section for more details:
> The target population for the fresh cross\-section was the 231 million non\-institutional U.S. citizens aged 18 or older living in the 50 U.S. states or the District of Columbia.
The documentation suggests that the population should equal around 231 million, but this is a very imprecise count. Upon further investigation of the available resources, we can find the methodology file titled “Methodology Report for the ANES 2020 Time Series Study” ([DeBell et al. 2022](#ref-anes-2020-tech)). This file states that we can use the population total from the Current Population Survey (CPS), a monthly survey sponsored by the U.S. Census Bureau and the U.S. Bureau of Labor Statistics. The CPS provides a more accurate population estimate for a specific month. Therefore, we can use the CPS to get the total population number for March 2020, when the ANES was conducted. Chapter [4](c04-getting-started.html#c04-getting-started) goes into detailed instructions on how to calculate and adjust this value in the data.
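A sketch of the rescaling itself is shown below; the weight values are made up, and `targetpop` is a placeholder for the CPS\-based total that Chapter [4](c04-getting-started.html#c04-getting-started) derives.
```
# Hypothetical analysis weights that sum to the sample size, not the population
weights_sample <- c(0.9, 1.1, 1.0, 1.2)

# Placeholder population total; the book obtains the real figure from the CPS
targetpop <- 231000000

# Rescale so the weights sum to the population total instead
weights_pop <- weights_sample / sum(weights_sample) * targetpop
sum(weights_pop)  # equals targetpop
```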
| Social Science |
tidy-survey-r.github.io | https://tidy-survey-r.github.io/tidy-survey-book/c04-getting-started.html |
Chapter 4 Getting started
=========================
4\.1 Introduction
-----------------
This chapter provides an overview of the packages, data, and design objects we use frequently throughout this book. As mentioned in Chapter [2](c02-overview-surveys.html#c02-overview-surveys), understanding how a survey was conducted helps us make sense of the results and interpret findings. Therefore, we provide background on the datasets used in examples and exercises. Next, we walk through how to create the survey design objects necessary to begin an analysis. Finally, we provide an overview of the {srvyr} package and the steps needed for analysis. Please report any bugs and issues encountered while going through the book to the book’s [GitHub repository](https://github.com/tidy-survey-r/tidy-survey-book).
4\.2 Setup
----------
This section provides details on the required packages and data, as well as the steps for preparing survey design objects. For a streamlined learning experience, we recommend taking the time to walk through the code provided here and making sure everything is properly set up.
### 4\.2\.1 Packages
We use several packages throughout the book, but let’s install and load specific ones for this chapter. Many functions in the examples and exercises are from three packages: {tidyverse}, {survey}, and {srvyr} ([Wickham et al. 2019](#ref-tidyverse2019); [Lumley 2010](#ref-lumley2010complex); [Freedman Ellis and Schneider 2024](#ref-R-srvyr)). The packages can be installed from the Comprehensive R Archive Network (CRAN) using the code below:
```
install.packages(c("tidyverse", "survey", "srvyr"))
```
We bundled the datasets used in the book in an R package, {srvyrexploR} ([Zimmer, Powell, and Velásquez 2024](#ref-R-srvyrexploR)). To install it from GitHub, use the {pak} package ([Csárdi and Hester 2024](#ref-R-pak)):
```
install.packages("pak")
pak::pak("tidy-survey-r/srvyrexploR")
```
After installing these packages, load them using the `library()` function:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
```
The packages {broom}, {gt}, and {gtsummary} play a role in displaying output and creating formatted tables ([Iannone et al. 2024](#ref-R-gt); [Robinson, Hayes, and Couch 2023](#ref-R-broom); [Sjoberg et al. 2021](#ref-gtsummarysjo)). Install them with the provided code[2](#fn2):
```
install.packages(c("gt", "gtsummary"))
```
After installing these packages, load them using the `library()` function:
```
library(broom)
library(gt)
library(gtsummary)
```
Install and load the {censusapi} package to access the Current Population Survey (CPS), which we use to ensure accurate weighting of a key dataset in the book ([Recht 2024](#ref-R-censusapi)). Run the code below to install {censusapi}:
```
install.packages("censusapi")
```
After installing this package, load it using the `library()` function:
```
library(censusapi)
```
Note that the {censusapi} package requires a Census API key, available for free from the [U.S. Census Bureau website](https://api.census.gov/data/key_signup.html) (refer to the package documentation for more information). We recommend storing the Census API key in the R environment instead of directly in the code. To do this, run the `Sys.setenv()` script below, substituting the API key where it says `YOUR_API_KEY_HERE`.
```
Sys.setenv(CENSUS_KEY = "YOUR_API_KEY_HERE")
```
Then, restart the R session. Once the Census API key is stored, we can retrieve it in our R code with `Sys.getenv("CENSUS_KEY")`.
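To confirm that the key is available in the current session, one quick check (a small sketch, not from the book) is to verify that the environment variable is non\-empty:
```
# Returns TRUE if a non-empty CENSUS_KEY value is set in the environment
nzchar(Sys.getenv("CENSUS_KEY"))
```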
There are a few other packages used less frequently in the book. We list them in the Prerequisite boxes at the beginning of each chapter. As we work through the book, make sure to check the Prerequisite box and install any missing packages before proceeding.
### 4\.2\.2 Data
The {srvyrexploR} package contains the datasets used in the book. Once installed and loaded, explore the documentation using the `help()` function. Read the descriptions of the datasets to understand what they contain:
```
help(package = "srvyrexploR")
```
This book uses two main datasets: the American National Election Studies (ANES – [DeBell 2010](#ref-debell)) and the Residential Energy Consumption Survey (RECS – [U.S. Energy Information Administration 2023b](#ref-recs-2020-tech)), which are included as `anes_2020` and `recs_2020` in the {srvyrexploR} package, respectively.
#### American National Election Studies Data
American National Election Studies (ANES) collect data from election surveys dating back to 1948\. These surveys contain information on public opinion and voting behavior in U.S. presidential elections and some midterm elections[3](#fn3). They cover topics such as party affiliation, voting choice, and level of trust in the government. The 2020 survey (data used in this book) was fielded online, through live video interviews, or via computer\-assisted telephone interviews (CATI).
When working with new survey data, we should review the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) to understand the data collection methods. The original ANES data contains variables starting with `V20` ([DeBell 2010](#ref-debell)), so to assist with our analysis throughout the book, we created descriptive variable names. For example, the respondent’s age is now in a variable called `Age`, and gender is in a variable called `Gender`. These descriptive variables are included in the {srvyrexploR} package. A complete overview of all variables can be found in Appendix [B](anes-cb.html#anes-cb).
Before beginning an analysis, it is useful to view the data to understand the available variables. The `dplyr::glimpse()` function produces a list of all variables, their types (e.g., factor, double), and a few example values. Below, we remove variables containing a “V” followed by numbers with `select(-matches("^V\\d"))` before using `glimpse()` to get a quick overview of the data with descriptive variable names:
```
anes_2020 %>%
select(-matches("^V\\d")) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 21
## $ CaseID <dbl> 200015, 200022, 200039, 200046, 200053…
## $ InterviewMode <fct> Web, Web, Web, Web, Web, Web, Web, Web…
## $ Weight <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658…
## $ VarUnit <fct> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2,…
## $ Stratum <fct> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, …
## $ CampaignInterest <fct> Somewhat interested, Not much interest…
## $ EarlyVote2020 <fct> NA, NA, NA, NA, NA, NA, NA, NA, Yes, N…
## $ VotedPres2016 <fct> Yes, Yes, Yes, Yes, Yes, No, Yes, No, …
## $ VotedPres2016_selection <fct> Trump, Other, Clinton, Clinton, Trump,…
## $ PartyID <fct> Strong republican, Independent, Indepe…
## $ TrustGovernment <fct> Never, Never, Some of the time, About …
## $ TrustPeople <fct> About half the time, Some of the time,…
## $ Age <dbl> 46, 37, 40, 41, 72, 71, 37, 45, 70, 43…
## $ AgeGroup <fct> 40-49, 30-39, 40-49, 40-49, 70 or olde…
## $ Education <fct> Bachelor's, Post HS, High school, Post…
## $ RaceEth <fct> "Hispanic", "Asian, NH/PI", "White", "…
## $ Gender <fct> Male, Female, Female, Male, Male, Fema…
## $ Income <fct> "$175,000-249,999", "$70,000-74,999", …
## $ Income7 <fct> $125k or more, $60k to < 80k, $100k to…
## $ VotedPres2020 <fct> NA, Yes, Yes, Yes, Yes, Yes, Yes, NA, …
## $ VotedPres2020_selection <fct> NA, Other, Biden, Biden, Trump, Biden,…
```
From the output, we can see there are 7,453 rows and 21 variables in the ANES data. This output also indicates that most of the variables are factors (e.g., `InterviewMode`), while a few variables are in double (numeric) format (e.g., `Age`).
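If we want an exact tally of the column types rather than scanning the `glimpse()` output, one option is a quick sketch (not from the book) using `map_chr()` from {purrr}, which loads with the {tidyverse}:
```
# Count how many of the descriptive columns are factors, doubles, etc.
anes_2020 %>%
  select(-matches("^V\\d")) %>%
  map_chr(~ class(.x)[1]) %>%
  table()
```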
#### Residential Energy Consumption Survey Data
Residential Energy Consumption Survey (RECS) is a study that measures energy consumption and expenditure in American households. Funded by the Energy Information Administration, RECS data are collected through interviews with household members and energy suppliers. These interviews take place in person, over the phone, via mail, and on the web, with modes changing over time. The survey has been fielded 14 times between 1950 and 2020\. It includes questions about appliances, electronics, heating, air conditioning (A/C), temperatures, water heating, lighting, energy bills, respondent demographics, and energy assistance.
We should read the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) to understand how the data were collected and implemented. An overview of all variables can be found in Appendix [C](recs-cb.html#recs-cb).
Before starting an analysis, we recommend viewing the data to understand the types of data and variables that are included. The `dplyr::glimpse()` function produces a list of all variables, the type of each variable (e.g., factor, double), and a few example values. Below, we remove the weight variables with `select(-matches("^NWEIGHT"))` before using `glimpse()` to get a quick overview of the data:
```
recs_2020 %>%
select(-matches("^NWEIGHT")) %>%
glimpse()
```
```
## Rows: 18,496
## Columns: 39
## $ DOEID <dbl> 1e+05, 1e+05, 1e+05, 1e+05, 1e+05, 1e+05, 1e+…
## $ ClimateRegion_BA <fct> Mixed-Dry, Mixed-Humid, Mixed-Dry, Mixed-Humi…
## $ Urbanicity <fct> Urban Area, Urban Area, Urban Area, Urban Are…
## $ Region <fct> West, South, West, South, Northeast, South, S…
## $ REGIONC <chr> "WEST", "SOUTH", "WEST", "SOUTH", "NORTHEAST"…
## $ Division <fct> Mountain South, West South Central, Mountain …
## $ STATE_FIPS <chr> "35", "05", "35", "45", "34", "48", "40", "28…
## $ state_postal <fct> NM, AR, NM, SC, NJ, TX, OK, MS, DC, AZ, CA, T…
## $ state_name <fct> New Mexico, Arkansas, New Mexico, South Carol…
## $ HDD65 <dbl> 3844, 3766, 3819, 2614, 4219, 901, 3148, 1825…
## $ CDD65 <dbl> 1679, 1458, 1696, 1718, 1363, 3558, 2128, 237…
## $ HDD30YR <dbl> 4451, 4429, 4500, 3229, 4896, 1150, 3564, 266…
## $ CDD30YR <dbl> 1027, 1305, 1010, 1653, 1059, 3588, 2043, 216…
## $ HousingUnitType <fct> Single-family detached, Apartment: 5 or more …
## $ YearMade <ord> 1970-1979, 1980-1989, 1960-1969, 1980-1989, 1…
## $ TOTSQFT_EN <dbl> 2100, 590, 900, 2100, 800, 4520, 2100, 900, 7…
## $ TOTHSQFT <dbl> 2100, 590, 900, 2100, 800, 3010, 1200, 900, 7…
## $ TOTCSQFT <dbl> 2100, 590, 900, 2100, 800, 3010, 1200, 0, 500…
## $ SpaceHeatingUsed <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRU…
## $ ACUsed <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FAL…
## $ HeatingBehavior <fct> Set one temp and leave it, Turn on or off as …
## $ WinterTempDay <dbl> 70, 70, 69, 68, 68, 76, 74, 70, 68, 70, 72, 7…
## $ WinterTempAway <dbl> 70, 65, 68, 68, 68, 76, 65, 70, 60, 70, 70, 7…
## $ WinterTempNight <dbl> 68, 65, 67, 68, 68, 68, 74, 68, 62, 68, 72, 7…
## $ ACBehavior <fct> Set one temp and leave it, Turn on or off as …
## $ SummerTempDay <dbl> 71, 68, 70, 72, 72, 69, 68, NA, 72, 74, 77, 7…
## $ SummerTempAway <dbl> 71, 68, 68, 72, 72, 74, 70, NA, 76, 74, 77, 7…
## $ SummerTempNight <dbl> 71, 68, 68, 72, 72, 68, 70, NA, 68, 72, 77, 7…
## $ BTUEL <dbl> 42723, 17889, 8147, 31647, 20027, 48968, 4940…
## $ DOLLAREL <dbl> 1955.06, 713.27, 334.51, 1424.86, 1087.00, 18…
## $ BTUNG <dbl> 101924.4, 10145.3, 22603.1, 55118.7, 39099.5,…
## $ DOLLARNG <dbl> 701.83, 261.73, 188.14, 636.91, 376.04, 439.4…
## $ BTULP <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 178…
## $ DOLLARLP <dbl> 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, …
## $ BTUFO <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 681…
## $ DOLLARFO <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 187…
## $ BTUWOOD <dbl> 0, 0, 0, 0, 0, 3000, 0, 0, 0, 0, 0, 0, 0, 0, …
## $ TOTALBTU <dbl> 144648, 28035, 30750, 86765, 59127, 85401, 13…
## $ TOTALDOL <dbl> 2656.9, 975.0, 522.6, 2061.8, 1463.0, 2335.1,…
```
From the output, we can see that the RECS data has 18,496 rows and 39 non\-weight variables. This output also indicates that most of the variables are in double (numeric) format (e.g., `TOTSQFT_EN`), with some factor (e.g., `Region`), logical (e.g., `ACUsed`), character (e.g., `REGIONC`), and ordinal (e.g., `YearMade`) variables.
### 4\.2\.3 Design objects
The design object is the backbone for survey analysis. It is where we specify the sampling design, weights, and other necessary information to ensure we account for errors in the data. Before creating the design object, we should carefully review the survey documentation to understand how to create the design object for accurate analysis.
In this section, we provide details on how to code the design object for the ANES and RECS data used in the book. However, we only provide a high\-level overview to get readers started. For a deeper understanding of creating design objects for a variety of sampling designs, see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
While we recommend conducting exploratory data analysis on the original data before diving into complex survey analysis (see Chapter [12](c12-recommendations.html#c12-recommendations)), the actual survey analysis and inference should be performed with the survey design objects instead of the original survey data. For example, the ANES data is called `anes_2020`. If we create a survey design object called `anes_des`, our survey analyses should begin with `anes_des` and not `anes_2020`. Using the survey design object ensures that our calculations appropriately account for the details of the survey design.
#### American National Election Studies Design Object
The ANES documentation ([DeBell 2010](#ref-debell)) details the sampling and weighting implications for analyzing the survey data. From this documentation and as noted in Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation), the 2020 ANES data are weighted to the sample, not the population. To make generalizations about the population, we need to weight the data to the full population count. The ANES methodology recommends using the Current Population Survey (CPS) to determine the number of non\-institutional U.S. citizens aged 18 or older living in the 50 U.S. states or D.C. in March 2020\.
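We can see this in the data itself. As a quick check (a sketch, not from the original documentation), the post\-election weights in `anes_2020` sum to roughly the number of respondents rather than to a population total:
```
# The unadjusted post-election weights are scaled to the sample,
# so their sum should be close to the number of rows, not ~231 million
anes_2020 %>%
  summarize(
    n_respondents = n(),
    weight_sum = sum(V200010b)
  )
```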
We can use the {censusapi} package to obtain the information needed for the survey design object. The `getCensus()` function allows us to retrieve the CPS data for March (`cps/basic/mar`) in 2020 (`vintage = 2020`). Additionally, we extract several variables from the CPS:
* month (`HRMONTH`) and year (`HRYEAR4`) of the interview: to confirm the correct time period
* age (`PRTAGE`) of the respondent: to narrow the population to 18 and older (eligible age to vote)
* citizenship status (`PRCITSHP`) of the respondent: to narrow the population to only those eligible to vote
* final person\-level weight (`PWSSWGT`)
Detailed information for these variables can be found in the [CPS data dictionary](https://www2.census.gov/programs-surveys/cps/datasets/2020/basic/2020_Basic_CPS_Public_Use_Record_Layout_plus_IO_Code_list.txt).
```
cps_state_in <- getCensus(
name = "cps/basic/mar",
vintage = 2020,
region = "state",
vars = c(
"HRMONTH", "HRYEAR4",
"PRTAGE", "PRCITSHP", "PWSSWGT"
),
key = Sys.getenv("CENSUS_KEY")
)
cps_state <- cps_state_in %>%
as_tibble() %>%
mutate(across(
.cols = everything(),
.fns = as.numeric
))
```
In the code above, we include `region = "state"`. The default region type for the CPS data is at the state level. While not required, including the region can be helpful for understanding the geographical context of the data.
In `getCensus()`, we specified the month (`cps/basic/mar`) and year (`vintage = 2020`) of our request. Therefore, we expect that all interviews within our output were conducted during that particular month and year. We can confirm that the data are from March 2020 by running the code below:
```
cps_state %>%
distinct(HRMONTH, HRYEAR4)
```
```
## # A tibble: 1 × 2
## HRMONTH HRYEAR4
## <dbl> <dbl>
## 1 3 2020
```
We can narrow down the dataset using the age and citizenship variables to include only individuals who are 18 years or older (`PRTAGE >= 18`) and have U.S. citizenship (`PRCITSHP %in% c(1:4)`):
```
cps_narrow_resp <- cps_state %>%
filter(
PRTAGE >= 18,
PRCITSHP %in% c(1:4)
)
```
To calculate the U.S. population from the filtered data, we sum the person weights (`PWSSWGT`):
```
targetpop <- cps_narrow_resp %>%
pull(PWSSWGT) %>%
sum()
scales::comma(targetpop)
```
```
## [1] "231,034,125"
```
The population of interest in 2020 is 231,034,125\. This result gives us what we need to create the survey design object for estimating population statistics. Using the `anes_2020` data, we adjust the weighting variable (`V200010b`) using the population of interest we just calculated (`targetpop`). We determine the proportion of the total weight for each individual weight (`V200010b / sum(V200010b)`) and then multiply that proportion by the calculated population of interest.
```
anes_adjwgt <- anes_2020 %>%
mutate(Weight = V200010b / sum(V200010b) * targetpop)
```
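As a sanity check (not part of the original workflow), the adjusted weights should now sum to the target population we calculated above:
```
# After the adjustment, the weights should total (up to rounding) targetpop
anes_adjwgt %>%
  summarize(weight_total = sum(Weight))
```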
Once we have the adjusted weights, we can refer to the rest of the documentation to create the survey design. The documentation indicates that the study uses a stratified cluster sampling design. Therefore, we need to specify variables for `strata` and `ids` (cluster) and fill in the `nest` argument. The documentation provides guidance on which strata and cluster variables to use depending on whether we are analyzing pre\- or post\-election data. In this book, we analyze post\-election data, so we need to use the post\-election weight `V200010b`, strata variable `V200010d`, and Primary Sampling Unit (PSU)/cluster variable `V200010c`. Additionally, we set `nest=TRUE` to ensure the clusters are nested within the strata.
```
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = V200010d,
ids = V200010c,
nest = TRUE
)
anes_des
```
```
## Stratified 1 - level Cluster Sampling design (with replacement)
## With (101) clusters.
## Called via srvyr
## Sampling variables:
## - ids: V200010c
## - strata: V200010d
## - weights: Weight
## Data variables:
## - V200001 (dbl), CaseID (dbl), V200002 (dbl+lbl), InterviewMode
## (fct), V200010b (dbl), Weight (dbl), V200010c (dbl), VarUnit (fct),
## V200010d (dbl), Stratum (fct), V201006 (dbl+lbl), CampaignInterest
## (fct), V201023 (dbl+lbl), EarlyVote2020 (fct), V201024 (dbl+lbl),
## V201025x (dbl+lbl), V201028 (dbl+lbl), V201029 (dbl+lbl), V201101
## (dbl+lbl), V201102 (dbl+lbl), VotedPres2016 (fct), V201103
## (dbl+lbl), VotedPres2016_selection (fct), V201228 (dbl+lbl),
## V201229 (dbl+lbl), V201230 (dbl+lbl), V201231x (dbl+lbl), PartyID
## (fct), V201233 (dbl+lbl), TrustGovernment (fct), V201237 (dbl+lbl),
## TrustPeople (fct), V201507x (dbl+lbl), Age (dbl), AgeGroup (fct),
## V201510 (dbl+lbl), Education (fct), V201546 (dbl+lbl), V201547a
## (dbl+lbl), V201547b (dbl+lbl), V201547c (dbl+lbl), V201547d
## (dbl+lbl), V201547e (dbl+lbl), V201547z (dbl+lbl), V201549x
## (dbl+lbl), RaceEth (fct), V201600 (dbl+lbl), Gender (fct), V201607
## (dbl+lbl), V201610 (dbl+lbl), V201611 (dbl+lbl), V201613 (dbl+lbl),
## V201615 (dbl+lbl), V201616 (dbl+lbl), V201617x (dbl+lbl), Income
## (fct), Income7 (fct), V202051 (dbl+lbl), V202066 (dbl+lbl), V202072
## (dbl+lbl), VotedPres2020 (fct), V202073 (dbl+lbl), V202109x
## (dbl+lbl), V202110x (dbl+lbl), VotedPres2020_selection (fct)
```
We can examine this new object to learn more about the survey design and see that the ANES is a “Stratified 1 \- level Cluster Sampling design (with replacement) With (101\) clusters.” Additionally, the output displays the sampling variables and then lists the remaining variables in the dataset. This design object is used throughout this book to conduct survey analysis.
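As a minimal illustration (a sketch, not from the book) of putting the design object to work, we can estimate the average age in the population; `survey_mean()` accounts for the weights, strata, and clusters specified above:
```
# Weighted estimate of mean age, with its design-based standard error
anes_des %>%
  summarize(age_mean = survey_mean(Age, na.rm = TRUE))
```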
#### Residential Energy Consumption Survey Design Object
The RECS documentation ([U.S. Energy Information Administration 2023b](#ref-recs-2020-tech)) provides information on the survey’s sampling and weighting implications for analysis. The documentation shows the 2020 RECS uses Jackknife weights, where the main analytic weight is `NWEIGHT`, and the Jackknife weights are `NWEIGHT1`\-`NWEIGHT60`. We can specify these in the `weights` and `repweights` arguments in the survey design object code, respectively.
With Jackknife weights, additional information is required: `type`, `scale`, and `mse`. Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) discusses each of these arguments in depth, but to get started quickly, the RECS documentation tells us to use `type = "JK1"`, `scale = 59/60`, and `mse = TRUE`. We can use the following code to create the survey design object:
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE
)
recs_des
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), ClimateRegion_BA (fct), Urbanicity (fct), Region
## (fct), REGIONC (chr), Division (fct), STATE_FIPS (chr),
## state_postal (fct), state_name (fct), HDD65 (dbl), CDD65 (dbl),
## HDD30YR (dbl), CDD30YR (dbl), HousingUnitType (fct), YearMade
## (ord), TOTSQFT_EN (dbl), TOTHSQFT (dbl), TOTCSQFT (dbl),
## SpaceHeatingUsed (lgl), ACUsed (lgl), HeatingBehavior (fct),
## WinterTempDay (dbl), WinterTempAway (dbl), WinterTempNight (dbl),
## ACBehavior (fct), SummerTempDay (dbl), SummerTempAway (dbl),
## SummerTempNight (dbl), NWEIGHT (dbl), NWEIGHT1 (dbl), NWEIGHT2
## (dbl), NWEIGHT3 (dbl), NWEIGHT4 (dbl), NWEIGHT5 (dbl), NWEIGHT6
## (dbl), NWEIGHT7 (dbl), NWEIGHT8 (dbl), NWEIGHT9 (dbl), NWEIGHT10
## (dbl), NWEIGHT11 (dbl), NWEIGHT12 (dbl), NWEIGHT13 (dbl), NWEIGHT14
## (dbl), NWEIGHT15 (dbl), NWEIGHT16 (dbl), NWEIGHT17 (dbl), NWEIGHT18
## (dbl), NWEIGHT19 (dbl), NWEIGHT20 (dbl), NWEIGHT21 (dbl), NWEIGHT22
## (dbl), NWEIGHT23 (dbl), NWEIGHT24 (dbl), NWEIGHT25 (dbl), NWEIGHT26
## (dbl), NWEIGHT27 (dbl), NWEIGHT28 (dbl), NWEIGHT29 (dbl), NWEIGHT30
## (dbl), NWEIGHT31 (dbl), NWEIGHT32 (dbl), NWEIGHT33 (dbl), NWEIGHT34
## (dbl), NWEIGHT35 (dbl), NWEIGHT36 (dbl), NWEIGHT37 (dbl), NWEIGHT38
## (dbl), NWEIGHT39 (dbl), NWEIGHT40 (dbl), NWEIGHT41 (dbl), NWEIGHT42
## (dbl), NWEIGHT43 (dbl), NWEIGHT44 (dbl), NWEIGHT45 (dbl), NWEIGHT46
## (dbl), NWEIGHT47 (dbl), NWEIGHT48 (dbl), NWEIGHT49 (dbl), NWEIGHT50
## (dbl), NWEIGHT51 (dbl), NWEIGHT52 (dbl), NWEIGHT53 (dbl), NWEIGHT54
## (dbl), NWEIGHT55 (dbl), NWEIGHT56 (dbl), NWEIGHT57 (dbl), NWEIGHT58
## (dbl), NWEIGHT59 (dbl), NWEIGHT60 (dbl), BTUEL (dbl), DOLLAREL
## (dbl), BTUNG (dbl), DOLLARNG (dbl), BTULP (dbl), DOLLARLP (dbl),
## BTUFO (dbl), DOLLARFO (dbl), BTUWOOD (dbl), TOTALBTU (dbl),
## TOTALDOL (dbl)
```
Viewing this new object shows that RECS is an “Unstratified cluster jacknife (JK1\) with 60 replicates and MSE variances.” Additionally, the output shows the sampling variables (`NWEIGHT1`\-`NWEIGHT60`) and then lists the remaining variables in the dataset. This design object is used throughout this book to conduct survey analysis.
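As with the ANES object, here is a minimal sketch (not from the book) of using `recs_des`; `survey_total()` uses the replicate weights to compute the standard error of an estimated total:
```
# Estimated total household energy expenditure, with its standard error
recs_des %>%
  summarize(total_dollars = survey_total(TOTALDOL, na.rm = TRUE))
```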
4\.3 Survey analysis process
----------------------------
There is a general process for analyzing data to create estimates with the {srvyr} package:
1. Create a `tbl_svy` object (a survey object) using: `as_survey_design()` or `as_survey_rep()`
2. Subset data (if needed) using `filter()` (to create subpopulations)
3. Specify domains of analysis using `group_by()`
4. Within `summarize()`, specify variables to calculate, including means, totals, proportions, quantiles, and more
In Section [4\.2\.3](c04-getting-started.html#setup-des-obj), we follow Step 1 to create the survey design objects for the ANES and RECS data featured in this book. Additional details on how to create design objects can be found in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights). Then, once we have the design object, we can filter the data to any subpopulation of interest (if needed). It is important to filter the data after creating the design object. This ensures that we are accurately accounting for the survey design in our calculations. Finally, we can use `group_by()`, `summarize()`, and other functions from the {survey} and {srvyr} packages to analyze the survey data by estimating means, totals, and so on.
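Putting these four steps together with the ANES design object from Section [4\.2\.3](c04-getting-started.html#setup-des-obj), a compact sketch (the subpopulation and grouping below are chosen purely for illustration) looks like this:
```
anes_des %>%                           # step 1: start from the survey design object
  filter(!is.na(VotedPres2020)) %>%    # step 2: subset the data (if needed)
  group_by(AgeGroup) %>%               # step 3: specify domains of analysis
  summarize(                           # step 4: calculate estimates
    pct_voted = survey_mean(VotedPres2020 == "Yes")
  )
```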
4\.4 Similarities between {dplyr} and {srvyr} functions
-------------------------------------------------------
The {dplyr} package from the tidyverse offers flexible and intuitive functions for data wrangling ([Wickham et al. 2023](#ref-R-dplyr)). One of the major advantages of using {srvyr} is that it applies {dplyr}\-like syntax to the {survey} package ([Freedman Ellis and Schneider 2024](#ref-R-srvyr)). We can use pipes, such as `%>%` from the {magrittr} package, to specify a survey design object, apply a function, and then feed that output into the next function’s first argument ([Bache and Wickham 2022](#ref-R-magrittr)). Functions follow the ‘tidy’ convention of snake\_case function names.
To help explain the similarities between {dplyr} functions and {srvyr} functions, we use the `towny` dataset from the {gt} package and the `apistrat` data that come with the {survey} package. The `towny` dataset provides population data for municipalities in Ontario, Canada for census years between 1996 and 2021\. Taking a look at `towny` with `dplyr::glimpse()`, we can see the dataset has 25 columns with a mix of character and numeric data.
```
towny %>%
glimpse()
```
```
## Rows: 414
## Columns: 25
## $ name <chr> "Addington Highlands", "Adelaide Metc…
## $ website <chr> "https://addingtonhighlands.ca", "htt…
## $ status <chr> "lower-tier", "lower-tier", "lower-ti…
## $ csd_type <chr> "township", "township", "township", "…
## $ census_div <chr> "Lennox and Addington", "Middlesex", …
## $ latitude <dbl> 45.00, 42.95, 44.13, 45.53, 43.86, 48…
## $ longitude <dbl> -77.25, -81.70, -79.93, -76.90, -79.0…
## $ land_area_km2 <dbl> 1293.99, 331.11, 371.53, 519.59, 66.6…
## $ population_1996 <int> 2429, 3128, 9359, 2837, 64430, 1027, …
## $ population_2001 <int> 2402, 3149, 10082, 2824, 73753, 956, …
## $ population_2006 <int> 2512, 3135, 10695, 2716, 90167, 958, …
## $ population_2011 <int> 2517, 3028, 10603, 2844, 109600, 864,…
## $ population_2016 <int> 2318, 2990, 10975, 2935, 119677, 969,…
## $ population_2021 <int> 2534, 3011, 10989, 2995, 126666, 954,…
## $ density_1996 <dbl> 1.88, 9.45, 25.19, 5.46, 966.84, 8.81…
## $ density_2001 <dbl> 1.86, 9.51, 27.14, 5.44, 1106.74, 8.2…
## $ density_2006 <dbl> 1.94, 9.47, 28.79, 5.23, 1353.05, 8.2…
## $ density_2011 <dbl> 1.95, 9.14, 28.54, 5.47, 1644.66, 7.4…
## $ density_2016 <dbl> 1.79, 9.03, 29.54, 5.65, 1795.87, 8.3…
## $ density_2021 <dbl> 1.96, 9.09, 29.58, 5.76, 1900.75, 8.1…
## $ pop_change_1996_2001_pct <dbl> -0.0111, 0.0067, 0.0773, -0.0046, 0.1…
## $ pop_change_2001_2006_pct <dbl> 0.0458, -0.0044, 0.0608, -0.0382, 0.2…
## $ pop_change_2006_2011_pct <dbl> 0.0020, -0.0341, -0.0086, 0.0471, 0.2…
## $ pop_change_2011_2016_pct <dbl> -0.0791, -0.0125, 0.0351, 0.0320, 0.0…
## $ pop_change_2016_2021_pct <dbl> 0.0932, 0.0070, 0.0013, 0.0204, 0.058…
```
Let’s examine the `towny` object’s class. We verify that it is a tibble, as indicated by `"tbl_df"`, by running the code below:
```
class(towny)
```
```
## [1] "tbl_df" "tbl" "data.frame"
```
All tibbles are data.frames, but not all data.frames are tibbles. Compared to data.frames, tibbles have some advantages, with the printing behavior being the most noticeable. When working with tidyverse style code, we recommend making all your datasets tibbles for ease of analysis.
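For example, `as_tibble()` converts a base data.frame into a tibble; the sketch below uses the built\-in `mtcars` data frame only because it is readily available:
```
# Convert a base data.frame to a tibble to get the friendlier printing
mtcars %>%
  as_tibble()
```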
The {survey} package contains datasets related to the California Academic Performance Index, which measures student performance in schools with at least 100 students in California. We can access these datasets by loading the {survey} package and running `data(api)`.
Let’s work with the `apistrat` dataset, which is a stratified random sample, stratified by school type (`stype`) with three levels: `E` for elementary school, `M` for middle school, and `H` for high school. We first create the survey design object (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information). The sample is stratified by the `stype` variable and the sampling weights are found in the `pw` variable. We can use this information to construct the design object, `apistrat_des`.
```
data(api)
apistrat_des <- apistrat %>%
as_survey_design(
strata = stype,
weights = pw
)
```
When we check the class of `apistrat_des`, it is not a typical `data.frame`. Applying the `as_survey_design()` function transforms the data into a `tbl_svy`, a special class specifically for survey design objects. The {srvyr} package is designed to work with the `tbl_svy` class of objects.
```
class(apistrat_des)
```
```
## [1] "tbl_svy" "survey.design2" "survey.design"
```
Let’s look at how {dplyr} works with regular data frames. The example below calculates the mean and median for the `land_area_km2` variable in the `towny` dataset.
```
towny %>%
summarize(
area_mean = mean(land_area_km2),
area_median = median(land_area_km2)
)
```
```
## # A tibble: 1 × 2
## area_mean area_median
## <dbl> <dbl>
## 1 373. 273.
```
In the code below, we calculate the mean and median of the variable `api00` using `apistrat_des`. Note the similarity in syntax; the key difference is that the standard error of each statistic is calculated in addition to the statistic itself.
```
apistrat_des %>%
summarize(
api00_mean = survey_mean(api00),
api00_med = survey_median(api00)
)
```
```
## # A tibble: 1 × 4
## api00_mean api00_mean_se api00_med api00_med_se
## <dbl> <dbl> <dbl> <dbl>
## 1 662. 9.54 668 13.7
```
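By default, `survey_mean()` and `survey_median()` return the standard error; if a confidence interval is preferred, the `vartype` argument can be changed (a brief sketch):
```
# Request 95% confidence interval columns instead of the standard error
apistrat_des %>%
  summarize(api00_mean = survey_mean(api00, vartype = "ci"))
```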
The functions in {srvyr} also play nicely with other tidyverse functions. For example, if we wanted to select columns with shared characteristics, we can use {tidyselect} functions such as `starts_with()`, `num_range()`, etc. ([Henry and Wickham 2024](#ref-R-tidyselect)). In the examples below, we use a combination of `across()` and `starts_with()` to calculate the mean of variables starting with “population” in the `towny` data frame and those beginning with `api` in the `apistrat_des` survey object.
```
towny %>%
summarize(across(
starts_with("population"),
~ mean(.x, na.rm = TRUE)
))
```
```
## # A tibble: 1 × 6
## population_1996 population_2001 population_2006 population_2011
## <dbl> <dbl> <dbl> <dbl>
## 1 25866. 27538. 29173. 30838.
## # ℹ 2 more variables: population_2016 <dbl>, population_2021 <dbl>
```
```
apistrat_des %>%
summarize(across(
starts_with("api"),
survey_mean
))
```
```
## # A tibble: 1 × 6
## api00 api00_se api99 api99_se api.stu api.stu_se
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 662. 9.54 629. 10.1 498. 16.4
```
We have the flexibility to use {dplyr} verbs such as `mutate()`, `filter()`, and `select()` on our survey design object. As mentioned in Section [4\.3](c04-getting-started.html#survey-analysis-process), these steps should be performed on the survey design object. This ensures our survey design is properly considered in all our calculations.
```
apistrat_des_mod <- apistrat_des %>%
mutate(api_diff = api00 - api99) %>%
filter(stype == "E") %>%
select(stype, api99, api00, api_diff, api_students = api.stu)
apistrat_des_mod
```
```
## Stratified Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - weights: pw
## Data variables:
## - stype (fct), api99 (int), api00 (int), api_diff (int), api_students
## (int)
```
```
apistrat_des
```
```
## Stratified Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - weights: pw
## Data variables:
## - cds (chr), stype (fct), name (chr), sname (chr), snum (dbl), dname
## (chr), dnum (int), cname (chr), cnum (int), flag (int), pcttest
## (int), api00 (int), api99 (int), target (int), growth (int),
## sch.wide (fct), comp.imp (fct), both (fct), awards (fct), meals
## (int), ell (int), yr.rnd (fct), mobility (int), acs.k3 (int),
## acs.46 (int), acs.core (int), pct.resp (int), not.hsg (int), hsg
## (int), some.col (int), col.grad (int), grad.sch (int), avg.ed
## (dbl), full (int), emer (int), enroll (int), api.stu (int), pw
## (dbl), fpc (dbl)
```
Several functions in {srvyr} must be called within `srvyr::summarize()`, with the exception of `srvyr::survey_count()` and `srvyr::survey_tally()` (an example of `survey_count()` appears after the grouped summaries below). This is similar to how `dplyr::count()` and `dplyr::tally()` are not called within `dplyr::summarize()`. The `summarize()` function can be used in conjunction with the `group_by()` function or the `by`/`.by` arguments, which apply the functions on a group\-by\-group basis to create grouped summaries.
```
towny %>%
group_by(csd_type) %>%
dplyr::summarize(
area_mean = mean(land_area_km2),
area_median = median(land_area_km2)
)
```
```
## # A tibble: 5 × 3
## csd_type area_mean area_median
## <chr> <dbl> <dbl>
## 1 city 498. 198.
## 2 municipality 607. 488.
## 3 town 183. 129.
## 4 township 363. 301.
## 5 village 23.0 3.3
```
We use a similar setup to summarize data in {srvyr}:
```
apistrat_des %>%
group_by(stype) %>%
summarize(
api00_mean = survey_mean(api00),
api00_median = survey_median(api00)
)
```
```
## # A tibble: 3 × 5
## stype api00_mean api00_mean_se api00_median api00_median_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 E 674. 12.5 671 20.7
## 2 H 626. 15.5 635 21.6
## 3 M 637. 16.6 648 24.1
```
An alternative way to do grouped analysis on the `towny` data would be with the `.by` argument:
```
towny %>%
dplyr::summarize(
area_mean = mean(land_area_km2),
area_median = median(land_area_km2),
.by = csd_type
)
```
```
## # A tibble: 5 × 3
## csd_type area_mean area_median
## <chr> <dbl> <dbl>
## 1 township 363. 301.
## 2 town 183. 129.
## 3 municipality 607. 488.
## 4 city 498. 198.
## 5 village 23.0 3.3
```
The `.by` syntax is similarly implemented in {srvyr} for grouped analysis:
```
apistrat_des %>%
summarize(
api00_mean = survey_mean(api00),
api00_median = survey_median(api00),
.by = stype
)
```
```
## # A tibble: 3 × 5
## stype api00_mean api00_mean_se api00_median api00_median_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 E 674. 12.5 671 20.7
## 2 H 626. 15.5 635 21.6
## 3 M 637. 16.6 648 24.1
```
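As noted earlier, `srvyr::survey_count()` and `srvyr::survey_tally()` are the exceptions that are called directly on the design object rather than inside `summarize()`. A minimal sketch with `apistrat_des`:
```
# Weighted counts (with standard errors) of schools by school type
apistrat_des %>%
  survey_count(stype)
```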
As mentioned above, {srvyr} functions are meant for `tbl_svy` objects. Attempting to manipulate data on non\-`tbl_svy` objects, like the `towny` example shown below, results in an error. Running the code lets us know what the issue is: `Survey context not set`.
```
towny %>%
summarize(area_mean = survey_mean(land_area_km2))
```
```
## Error in `summarize()`:
## ℹ In argument: `area_mean = survey_mean(land_area_km2)`.
## Caused by error in `cur_svy()`:
## ! Survey context not set
```
A few functions in {srvyr} have counterparts in {dplyr}, such as `srvyr::summarize()` and `srvyr::group_by()`. Unlike the {srvyr}\-specific verbs, these parallel functions also work when applied to a non\-survey object. Instead of causing an error, the package provides the equivalent output from {dplyr}:
```
towny %>%
srvyr::summarize(area_mean = mean(land_area_km2))
```
```
## # A tibble: 1 × 1
## area_mean
## <dbl>
## 1 373.
```
Because this book focuses on survey analysis, most of our pipes stem from a survey object. When we load the {dplyr} and {srvyr} packages, the functions automatically detect the class of the data and use the appropriate implementation from {dplyr} or {srvyr}. Therefore, we do not need to include the namespace for each function (e.g., `srvyr::summarize()`).
4\.1 Introduction
-----------------
This chapter provides an overview of the packages, data, and design objects we use frequently throughout this book. As mentioned in Chapter [2](c02-overview-surveys.html#c02-overview-surveys), understanding how a survey was conducted helps us make sense of the results and interpret findings. Therefore, we provide background on the datasets used in examples and exercises. Next, we walk through how to create the survey design objects necessary to begin an analysis. Finally, we provide an overview of the {srvyr} package and the steps needed for analysis. Please report any bugs and issues encountered while going through the book to the book’s [GitHub repository](https://github.com/tidy-survey-r/tidy-survey-book).
4\.2 Setup
----------
This section provides details on the required packages and data, as well as the steps for preparing survey design objects. For a streamlined learning experience, we recommend taking the time to walk through the code provided here and making sure everything is properly set up.
### 4\.2\.1 Packages
We use several packages throughout the book, but let’s install and load specific ones for this chapter. Many functions in the examples and exercises are from three packages: {tidyverse}, {survey}, and {srvyr} ([Wickham et al. 2019](#ref-tidyverse2019); [Lumley 2010](#ref-lumley2010complex); [Freedman Ellis and Schneider 2024](#ref-R-srvyr)). The packages can be installed from the Comprehensive R Archive Network (CRAN) using the code below:
```
install.packages(c("tidyverse", "survey", "srvyr"))
```
We bundled the datasets used in the book in an R package, {srvyrexploR} ([Zimmer, Powell, and Velásquez 2024](#ref-R-srvyrexploR)). To install it from GitHub, use the {pak} package ([Csárdi and Hester 2024](#ref-R-pak)):
```
install.packages("pak")
pak::pak("tidy-survey-r/srvyrexploR")
```
After installing these packages, load them using the `library()` function:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
```
The packages {broom}, {gt}, and {gtsummary} play a role in displaying output and creating formatted tables ([Iannone et al. 2024](#ref-R-gt); [Robinson, Hayes, and Couch 2023](#ref-R-broom); [Sjoberg et al. 2021](#ref-gtsummarysjo)). Install them with the provided code[2](#fn2):
```
install.packages(c("gt", "gtsummary"))
```
After installing these packages, load them using the `library()` function:
```
library(broom)
library(gt)
library(gtsummary)
```
Install and load the {censusapi} package to access the Current Population Survey (CPS), which we use to ensure accurate weighting of a key dataset in the book ([Recht 2024](#ref-R-censusapi)). Run the code below to install {censusapi}:
```
install.packages("censusapi")
```
After installing this package, load it using the `library()` function:
```
library(censusapi)
```
Note that the {censusapi} package requires a Census API key, available for free from the [U.S. Census Bureau website](https://api.census.gov/data/key_signup.html) (refer to the package documentation for more information). We recommend storing the Census API key in the R environment instead of directly in the code. To do this, run the `Sys.setenv()` script below, substituting the API key where it says `YOUR_API_KEY_HERE`.
```
Sys.setenv(CENSUS_KEY = "YOUR_API_KEY_HERE")
```
Then, restart the R session. Once the Census API key is stored, we can retrieve it in our R code with `Sys.getenv("CENSUS_KEY")`.
There are a few other packages used in the book in limited frequency. We list them in the Prerequisite boxes at the beginning of each chapter. As we work through the book, make sure to check the Prerequisite box and install any missing packages before proceeding.
### 4\.2\.2 Data
The {srvyrexploR} package contains the datasets used in the book. Once installed and loaded, explore the documentation using the `help()` function. Read the descriptions of the datasets to understand what they contain:
```
help(package = "srvyrexploR")
```
This book uses two main datasets: the American National Election Studies (ANES – [DeBell 2010](#ref-debell)) and the Residential Energy Consumption Survey (RECS – [U.S. Energy Information Administration 2023b](#ref-recs-2020-tech)), which are included as `anes_2020` and `recs_2020` in the {srvyrexploR} package, respectively.
#### American National Election Studies Data
American National Election Studies (ANES) collect data from election surveys dating back to 1948\. These surveys contain information on public opinion and voting behavior in U.S. presidential elections and some midterm elections[3](#fn3). They cover topics such as party affiliation, voting choice, and level of trust in the government. The 2020 survey (data used in this book) was fielded online, through live video interviews, or via computer\-assisted telephone interviews (CATI).
When working with new survey data, we should review the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) to understand the data collection methods. The original ANES data contains variables starting with `V20` ([DeBell 2010](#ref-debell)), so to assist with our analysis throughout the book, we created descriptive variable names. For example, the respondent’s age is now in a variable called `Age`, and gender is in a variable called `Gender`. These descriptive variables are included in the {srvyrexploR} package. A complete overview of all variables can be found in Appendix [B](anes-cb.html#anes-cb).
Before beginning an analysis, it is useful to view the data to understand the available variables. The `dplyr::glimpse()` function produces a list of all variables, their types (e.g., function, double), and a few example values. Below, we remove variables containing a “V” followed by numbers with `select(-matches("^V\\d"))` before using `glimpse()` to get a quick overview of the data with descriptive variable names:
```
anes_2020 %>%
select(-matches("^V\\d")) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 21
## $ CaseID <dbl> 200015, 200022, 200039, 200046, 200053…
## $ InterviewMode <fct> Web, Web, Web, Web, Web, Web, Web, Web…
## $ Weight <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658…
## $ VarUnit <fct> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2,…
## $ Stratum <fct> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, …
## $ CampaignInterest <fct> Somewhat interested, Not much interest…
## $ EarlyVote2020 <fct> NA, NA, NA, NA, NA, NA, NA, NA, Yes, N…
## $ VotedPres2016 <fct> Yes, Yes, Yes, Yes, Yes, No, Yes, No, …
## $ VotedPres2016_selection <fct> Trump, Other, Clinton, Clinton, Trump,…
## $ PartyID <fct> Strong republican, Independent, Indepe…
## $ TrustGovernment <fct> Never, Never, Some of the time, About …
## $ TrustPeople <fct> About half the time, Some of the time,…
## $ Age <dbl> 46, 37, 40, 41, 72, 71, 37, 45, 70, 43…
## $ AgeGroup <fct> 40-49, 30-39, 40-49, 40-49, 70 or olde…
## $ Education <fct> Bachelor's, Post HS, High school, Post…
## $ RaceEth <fct> "Hispanic", "Asian, NH/PI", "White", "…
## $ Gender <fct> Male, Female, Female, Male, Male, Fema…
## $ Income <fct> "$175,000-249,999", "$70,000-74,999", …
## $ Income7 <fct> $125k or more, $60k to < 80k, $100k to…
## $ VotedPres2020 <fct> NA, Yes, Yes, Yes, Yes, Yes, Yes, NA, …
## $ VotedPres2020_selection <fct> NA, Other, Biden, Biden, Trump, Biden,…
```
From the output, we can see there are 7,453 rows and 21 variables in the ANES data. This output also indicates that most of the variables are factors (e.g., `InterviewMode`), while a few variables are in double (numeric) format (e.g., `Age`).
#### Residential Energy Consumption Survey Data
Residential Energy Consumption Survey (RECS) is a study that measures energy consumption and expenditure in American households. Funded by the Energy Information Administration, RECS data are collected through interviews with household members and energy suppliers. These interviews take place in person, over the phone, via mail, and on the web, with modes changing over time. The survey has been fielded 14 times between 1950 and 2020\. It includes questions about appliances, electronics, heating, air conditioning (A/C), temperatures, water heating, lighting, energy bills, respondent demographics, and energy assistance.
We should read the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) to understand how the data were collected and implemented. An overview of all variables can be found in Appendix [C](recs-cb.html#recs-cb).
Before starting an analysis, we recommend viewing the data to understand the types of data and variables that are included. The `dplyr::glimpse()` function produces a list of all variables, the type of the variable (e.g., function, double), and a few example values. Below, we remove the weight variables with `select(-matches("^NWEIGHT"))` before using `glimpse()` to get a quick overview of the data:
```
recs_2020 %>%
select(-matches("^NWEIGHT")) %>%
glimpse()
```
```
## Rows: 18,496
## Columns: 39
## $ DOEID <dbl> 1e+05, 1e+05, 1e+05, 1e+05, 1e+05, 1e+05, 1e+…
## $ ClimateRegion_BA <fct> Mixed-Dry, Mixed-Humid, Mixed-Dry, Mixed-Humi…
## $ Urbanicity <fct> Urban Area, Urban Area, Urban Area, Urban Are…
## $ Region <fct> West, South, West, South, Northeast, South, S…
## $ REGIONC <chr> "WEST", "SOUTH", "WEST", "SOUTH", "NORTHEAST"…
## $ Division <fct> Mountain South, West South Central, Mountain …
## $ STATE_FIPS <chr> "35", "05", "35", "45", "34", "48", "40", "28…
## $ state_postal <fct> NM, AR, NM, SC, NJ, TX, OK, MS, DC, AZ, CA, T…
## $ state_name <fct> New Mexico, Arkansas, New Mexico, South Carol…
## $ HDD65 <dbl> 3844, 3766, 3819, 2614, 4219, 901, 3148, 1825…
## $ CDD65 <dbl> 1679, 1458, 1696, 1718, 1363, 3558, 2128, 237…
## $ HDD30YR <dbl> 4451, 4429, 4500, 3229, 4896, 1150, 3564, 266…
## $ CDD30YR <dbl> 1027, 1305, 1010, 1653, 1059, 3588, 2043, 216…
## $ HousingUnitType <fct> Single-family detached, Apartment: 5 or more …
## $ YearMade <ord> 1970-1979, 1980-1989, 1960-1969, 1980-1989, 1…
## $ TOTSQFT_EN <dbl> 2100, 590, 900, 2100, 800, 4520, 2100, 900, 7…
## $ TOTHSQFT <dbl> 2100, 590, 900, 2100, 800, 3010, 1200, 900, 7…
## $ TOTCSQFT <dbl> 2100, 590, 900, 2100, 800, 3010, 1200, 0, 500…
## $ SpaceHeatingUsed <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRU…
## $ ACUsed <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FAL…
## $ HeatingBehavior <fct> Set one temp and leave it, Turn on or off as …
## $ WinterTempDay <dbl> 70, 70, 69, 68, 68, 76, 74, 70, 68, 70, 72, 7…
## $ WinterTempAway <dbl> 70, 65, 68, 68, 68, 76, 65, 70, 60, 70, 70, 7…
## $ WinterTempNight <dbl> 68, 65, 67, 68, 68, 68, 74, 68, 62, 68, 72, 7…
## $ ACBehavior <fct> Set one temp and leave it, Turn on or off as …
## $ SummerTempDay <dbl> 71, 68, 70, 72, 72, 69, 68, NA, 72, 74, 77, 7…
## $ SummerTempAway <dbl> 71, 68, 68, 72, 72, 74, 70, NA, 76, 74, 77, 7…
## $ SummerTempNight <dbl> 71, 68, 68, 72, 72, 68, 70, NA, 68, 72, 77, 7…
## $ BTUEL <dbl> 42723, 17889, 8147, 31647, 20027, 48968, 4940…
## $ DOLLAREL <dbl> 1955.06, 713.27, 334.51, 1424.86, 1087.00, 18…
## $ BTUNG <dbl> 101924.4, 10145.3, 22603.1, 55118.7, 39099.5,…
## $ DOLLARNG <dbl> 701.83, 261.73, 188.14, 636.91, 376.04, 439.4…
## $ BTULP <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 178…
## $ DOLLARLP <dbl> 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, …
## $ BTUFO <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 681…
## $ DOLLARFO <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 187…
## $ BTUWOOD <dbl> 0, 0, 0, 0, 0, 3000, 0, 0, 0, 0, 0, 0, 0, 0, …
## $ TOTALBTU <dbl> 144648, 28035, 30750, 86765, 59127, 85401, 13…
## $ TOTALDOL <dbl> 2656.9, 975.0, 522.6, 2061.8, 1463.0, 2335.1,…
```
From the output, we can see that the RECS data has 18,496 rows and 39 non\-weight variables. This output also indicates that most of the variables are in double (numeric) format (e.g., `TOTSQFT_EN`), with some factor (e.g., `Region`), Boolean (e.g., `ACUsed`), character (e.g., `REGIONC`), and ordinal (e.g., `YearMade`) variables.
### 4\.2\.3 Design objects
The design object is the backbone for survey analysis. It is where we specify the sampling design, weights, and other necessary information to ensure we account for errors in the data. Before creating the design object, we should carefully review the survey documentation to understand how to create the design object for accurate analysis.
In this section, we provide details on how to code the design object for the ANES and RECS data used in the book. However, we only provide a high\-level overview to get readers started. For a deeper understanding of creating design objects for a variety of sampling designs, see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
While we recommend conducting exploratory data analysis on the original data before diving into complex survey analysis (see Chapter [12](c12-recommendations.html#c12-recommendations)), the actual survey analysis and inference should be performed with the survey design objects instead of the original survey data. For example, the ANES data is called `anes_2020`. If we create a survey design object called `anes_des`, our survey analyses should begin with `anes_des` and not `anes_2020`. Using the survey design object ensures that our calculations appropriately account for the details of the survey design.
#### American National Election Studies Design Object
The ANES documentation ([DeBell 2010](#ref-debell)) details the sampling and weighting implications for analyzing the survey data. From this documentation and as noted in Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation), the 2020 ANES data are weighted to the sample, not the population. To make generalizations about the population, we need to weigh the data against the full population count. The ANES methodology recommends using the Current Population Survey (CPS) to determine the number of non\-institutional U.S. citizens aged 18 or older living in the 50 U.S. states or D.C. in March 2020\.
We can use the {censusapi} package to obtain the information needed for the survey design object. The `getCensus()` function allows us to retrieve the CPS data for March (`cps/basic/mar`) in 2020 (`vintage = 2020`). Additionally, we extract several variables from the CPS:
* month (`HRMONTH`) and year (`HRYEAR4`) of the interview: to confirm the correct time period
* age (`PRTAGE`) of the respondent: to narrow the population to 18 and older (eligible age to vote)
* citizenship status (`PRCITSHP`) of the respondent: to narrow the population to only those eligible to vote
* final person\-level weight (`PWSSWGT`)
Detailed information for these variables can be found in the [CPS data dictionary](https://www2.census.gov/programs-surveys/cps/datasets/2020/basic/2020_Basic_CPS_Public_Use_Record_Layout_plus_IO_Code_list.txt).
```
cps_state_in <- getCensus(
name = "cps/basic/mar",
vintage = 2020,
region = "state",
vars = c(
"HRMONTH", "HRYEAR4",
"PRTAGE", "PRCITSHP", "PWSSWGT"
),
key = Sys.getenv("CENSUS_KEY")
)
cps_state <- cps_state_in %>%
as_tibble() %>%
mutate(across(
.cols = everything(),
.fns = as.numeric
))
```
In the code above, we include `region = "state"`. The default region type for the CPS data is at the state level. While not required, including the region can be helpful for understanding the geographical context of the data.
In `getCensus()`, we filtered the dataset by specifying the month (`HRMONTH == 3`) and year (`HRYEAR4 == 2020`) of our request. Therefore, we expect that all interviews within our output were conducted during that particular month and year. We can confirm that the data are from March 2020 by running the code below:
```
cps_state %>%
distinct(HRMONTH, HRYEAR4)
```
```
## # A tibble: 1 × 2
## HRMONTH HRYEAR4
## <dbl> <dbl>
## 1 3 2020
```
We can narrow down the dataset using the age and citizenship variables to include only individuals who are 18 years or older (`PRTAGE >= 18`) and have U.S. citizenship (`PRCITSHIP %in% c(1:4)`):
```
cps_narrow_resp <- cps_state %>%
filter(
PRTAGE >= 18,
PRCITSHP %in% c(1:4)
)
```
To calculate the U.S. population from the filtered data, we sum the person weights (`PWSSWGT`):
```
targetpop <- cps_narrow_resp %>%
pull(PWSSWGT) %>%
sum()
scales::comma(targetpop)
```
```
## [1] "231,034,125"
```
The population of interest in 2020 is 231,034,125\. This result gives us what we need to create the survey design object for estimating population statistics. Using the `anes_2020` data, we adjust the weighting variable (`V200010b`) using the population of interest we just calculated (`targetpop`). We determine the proportion of the total weight for each individual weight (`V200010b / sum(V200010b)`) and then multiply that proportion by the calculated population of interest.
```
anes_adjwgt <- anes_2020 %>%
mutate(Weight = V200010b / sum(V200010b) * targetpop)
```
Once we have the adjusted weights, we can refer to the rest of the documentation to create the survey design. The documentation indicates that the study uses a stratified cluster sampling design. Therefore, we need to specify variables for `strata` and `ids` (cluster) and fill in the `nest` argument. The documentation provides guidance on which strata and cluster variables to use depending on whether we are analyzing pre\- or post\-election data. In this book, we analyze post\-election data, so we need to use the post\-election weight `V200010b`, strata variable `V200010d`, and Primary Sampling Unit (PSU)/cluster variable `V200010c`. Additionally, we set `nest=TRUE` to ensure the clusters are nested within the strata.
```
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = V200010d,
ids = V200010c,
nest = TRUE
)
anes_des
```
```
## Stratified 1 - level Cluster Sampling design (with replacement)
## With (101) clusters.
## Called via srvyr
## Sampling variables:
## - ids: V200010c
## - strata: V200010d
## - weights: Weight
## Data variables:
## - V200001 (dbl), CaseID (dbl), V200002 (dbl+lbl), InterviewMode
## (fct), V200010b (dbl), Weight (dbl), V200010c (dbl), VarUnit (fct),
## V200010d (dbl), Stratum (fct), V201006 (dbl+lbl), CampaignInterest
## (fct), V201023 (dbl+lbl), EarlyVote2020 (fct), V201024 (dbl+lbl),
## V201025x (dbl+lbl), V201028 (dbl+lbl), V201029 (dbl+lbl), V201101
## (dbl+lbl), V201102 (dbl+lbl), VotedPres2016 (fct), V201103
## (dbl+lbl), VotedPres2016_selection (fct), V201228 (dbl+lbl),
## V201229 (dbl+lbl), V201230 (dbl+lbl), V201231x (dbl+lbl), PartyID
## (fct), V201233 (dbl+lbl), TrustGovernment (fct), V201237 (dbl+lbl),
## TrustPeople (fct), V201507x (dbl+lbl), Age (dbl), AgeGroup (fct),
## V201510 (dbl+lbl), Education (fct), V201546 (dbl+lbl), V201547a
## (dbl+lbl), V201547b (dbl+lbl), V201547c (dbl+lbl), V201547d
## (dbl+lbl), V201547e (dbl+lbl), V201547z (dbl+lbl), V201549x
## (dbl+lbl), RaceEth (fct), V201600 (dbl+lbl), Gender (fct), V201607
## (dbl+lbl), V201610 (dbl+lbl), V201611 (dbl+lbl), V201613 (dbl+lbl),
## V201615 (dbl+lbl), V201616 (dbl+lbl), V201617x (dbl+lbl), Income
## (fct), Income7 (fct), V202051 (dbl+lbl), V202066 (dbl+lbl), V202072
## (dbl+lbl), VotedPres2020 (fct), V202073 (dbl+lbl), V202109x
## (dbl+lbl), V202110x (dbl+lbl), VotedPres2020_selection (fct)
```
We can examine this new object to learn more about the survey design, such that the ANES is a “Stratified 1 \- level Cluster Sampling design (with replacement) With (101\) clusters.” Additionally, the output displays the sampling variables and then lists the remaining variables in the dataset. This design object is used throughout this book to conduct survey analysis.
### 4\.2\.1 Packages
We use several packages throughout the book, but let’s install and load specific ones for this chapter. Many functions in the examples and exercises are from three packages: {tidyverse}, {survey}, and {srvyr} ([Wickham et al. 2019](#ref-tidyverse2019); [Lumley 2010](#ref-lumley2010complex); [Freedman Ellis and Schneider 2024](#ref-R-srvyr)). The packages can be installed from the Comprehensive R Archive Network (CRAN) using the code below:
```
install.packages(c("tidyverse", "survey", "srvyr"))
```
We bundled the datasets used in the book in an R package, {srvyrexploR} ([Zimmer, Powell, and Velásquez 2024](#ref-R-srvyrexploR)). To install it from GitHub, use the {pak} package ([Csárdi and Hester 2024](#ref-R-pak)):
```
install.packages("pak")
pak::pak("tidy-survey-r/srvyrexploR")
```
After installing these packages, load them using the `library()` function:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
```
The packages {broom}, {gt}, and {gtsummary} play a role in displaying output and creating formatted tables ([Iannone et al. 2024](#ref-R-gt); [Robinson, Hayes, and Couch 2023](#ref-R-broom); [Sjoberg et al. 2021](#ref-gtsummarysjo)). Install them with the provided code[2](#fn2):
```
install.packages(c("gt", "gtsummary"))
```
After installing these packages, load them using the `library()` function:
```
library(broom)
library(gt)
library(gtsummary)
```
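For instance, here is a minimal sketch (using a built\-in R dataset rather than data from the book) of how `gt()` renders a data frame as a formatted table:
```
# Illustrative only: format the first rows of a built-in dataset as a gt table
mtcars %>%
  head(3) %>%
  gt()
```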
Install and load the {censusapi} package to access the Current Population Survey (CPS), which we use to ensure accurate weighting of a key dataset in the book ([Recht 2024](#ref-R-censusapi)). Run the code below to install {censusapi}:
```
install.packages("censusapi")
```
After installing this package, load it using the `library()` function:
```
library(censusapi)
```
Note that the {censusapi} package requires a Census API key, available for free from the [U.S. Census Bureau website](https://api.census.gov/data/key_signup.html) (refer to the package documentation for more information). We recommend storing the Census API key in the R environment instead of directly in the code. To do this, run the `Sys.setenv()` script below, substituting the API key where it says `YOUR_API_KEY_HERE`.
```
Sys.setenv(CENSUS_KEY = "YOUR_API_KEY_HERE")
```
Then, restart the R session. Once the Census API key is stored, we can retrieve it in our R code with `Sys.getenv("CENSUS_KEY")`.
There are a few other packages used in the book in limited frequency. We list them in the Prerequisite boxes at the beginning of each chapter. As we work through the book, make sure to check the Prerequisite box and install any missing packages before proceeding.
### 4\.2\.2 Data
The {srvyrexploR} package contains the datasets used in the book. Once installed and loaded, explore the documentation using the `help()` function. Read the descriptions of the datasets to understand what they contain:
```
help(package = "srvyrexploR")
```
This book uses two main datasets: the American National Election Studies (ANES – [DeBell 2010](#ref-debell)) and the Residential Energy Consumption Survey (RECS – [U.S. Energy Information Administration 2023b](#ref-recs-2020-tech)), which are included as `anes_2020` and `recs_2020` in the {srvyrexploR} package, respectively.
#### American National Election Studies Data
The American National Election Studies (ANES) has collected data from election surveys dating back to 1948\. These surveys contain information on public opinion and voting behavior in U.S. presidential elections and some midterm elections[3](#fn3). They cover topics such as party affiliation, voting choice, and level of trust in the government. The 2020 survey (the data used in this book) was fielded online, through live video interviews, or via computer\-assisted telephone interviews (CATI).
When working with new survey data, we should review the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) to understand the data collection methods. The original ANES data contains variables starting with `V20` ([DeBell 2010](#ref-debell)), so to assist with our analysis throughout the book, we created descriptive variable names. For example, the respondent’s age is now in a variable called `Age`, and gender is in a variable called `Gender`. These descriptive variables are included in the {srvyrexploR} package. A complete overview of all variables can be found in Appendix [B](anes-cb.html#anes-cb).
Before beginning an analysis, it is useful to view the data to understand the available variables. The `dplyr::glimpse()` function produces a list of all variables, their types (e.g., factor, double), and a few example values. Below, we remove variables containing a “V” followed by numbers with `select(-matches("^V\\d"))` before using `glimpse()` to get a quick overview of the data with descriptive variable names:
```
anes_2020 %>%
select(-matches("^V\\d")) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 21
## $ CaseID <dbl> 200015, 200022, 200039, 200046, 200053…
## $ InterviewMode <fct> Web, Web, Web, Web, Web, Web, Web, Web…
## $ Weight <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658…
## $ VarUnit <fct> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2,…
## $ Stratum <fct> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, …
## $ CampaignInterest <fct> Somewhat interested, Not much interest…
## $ EarlyVote2020 <fct> NA, NA, NA, NA, NA, NA, NA, NA, Yes, N…
## $ VotedPres2016 <fct> Yes, Yes, Yes, Yes, Yes, No, Yes, No, …
## $ VotedPres2016_selection <fct> Trump, Other, Clinton, Clinton, Trump,…
## $ PartyID <fct> Strong republican, Independent, Indepe…
## $ TrustGovernment <fct> Never, Never, Some of the time, About …
## $ TrustPeople <fct> About half the time, Some of the time,…
## $ Age <dbl> 46, 37, 40, 41, 72, 71, 37, 45, 70, 43…
## $ AgeGroup <fct> 40-49, 30-39, 40-49, 40-49, 70 or olde…
## $ Education <fct> Bachelor's, Post HS, High school, Post…
## $ RaceEth <fct> "Hispanic", "Asian, NH/PI", "White", "…
## $ Gender <fct> Male, Female, Female, Male, Male, Fema…
## $ Income <fct> "$175,000-249,999", "$70,000-74,999", …
## $ Income7 <fct> $125k or more, $60k to < 80k, $100k to…
## $ VotedPres2020 <fct> NA, Yes, Yes, Yes, Yes, Yes, Yes, NA, …
## $ VotedPres2020_selection <fct> NA, Other, Biden, Biden, Trump, Biden,…
```
From the output, we can see there are 7,453 rows and 21 variables in the ANES data. This output also indicates that most of the variables are factors (e.g., `InterviewMode`), while a few variables are in double (numeric) format (e.g., `Age`).
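If we want to confirm this programmatically, a quick sketch (not part of the book's workflow) is to tabulate the class of each remaining column:
```
# Illustrative only: count how many columns are factors versus numeric
anes_2020 %>%
  select(-matches("^V\\d")) %>%
  map_chr(~ class(.x)[1]) %>%
  table()
```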
#### Residential Energy Consumption Survey Data
The Residential Energy Consumption Survey (RECS) is a study that measures energy consumption and expenditure in American households. Funded by the Energy Information Administration, RECS data are collected through interviews with household members and energy suppliers. These interviews take place in person, over the phone, via mail, and on the web, with modes changing over time. The survey has been fielded 15 times between 1978 and 2020\. It includes questions about appliances, electronics, heating, air conditioning (A/C), temperatures, water heating, lighting, energy bills, respondent demographics, and energy assistance.
We should read the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) to understand how the data were collected and implemented. An overview of all variables can be found in Appendix [C](recs-cb.html#recs-cb).
Before starting an analysis, we recommend viewing the data to understand the types of data and variables that are included. The `dplyr::glimpse()` function produces a list of all variables, the type of each variable (e.g., factor, double), and a few example values. Below, we remove the weight variables with `select(-matches("^NWEIGHT"))` before using `glimpse()` to get a quick overview of the data:
```
recs_2020 %>%
select(-matches("^NWEIGHT")) %>%
glimpse()
```
```
## Rows: 18,496
## Columns: 39
## $ DOEID <dbl> 1e+05, 1e+05, 1e+05, 1e+05, 1e+05, 1e+05, 1e+…
## $ ClimateRegion_BA <fct> Mixed-Dry, Mixed-Humid, Mixed-Dry, Mixed-Humi…
## $ Urbanicity <fct> Urban Area, Urban Area, Urban Area, Urban Are…
## $ Region <fct> West, South, West, South, Northeast, South, S…
## $ REGIONC <chr> "WEST", "SOUTH", "WEST", "SOUTH", "NORTHEAST"…
## $ Division <fct> Mountain South, West South Central, Mountain …
## $ STATE_FIPS <chr> "35", "05", "35", "45", "34", "48", "40", "28…
## $ state_postal <fct> NM, AR, NM, SC, NJ, TX, OK, MS, DC, AZ, CA, T…
## $ state_name <fct> New Mexico, Arkansas, New Mexico, South Carol…
## $ HDD65 <dbl> 3844, 3766, 3819, 2614, 4219, 901, 3148, 1825…
## $ CDD65 <dbl> 1679, 1458, 1696, 1718, 1363, 3558, 2128, 237…
## $ HDD30YR <dbl> 4451, 4429, 4500, 3229, 4896, 1150, 3564, 266…
## $ CDD30YR <dbl> 1027, 1305, 1010, 1653, 1059, 3588, 2043, 216…
## $ HousingUnitType <fct> Single-family detached, Apartment: 5 or more …
## $ YearMade <ord> 1970-1979, 1980-1989, 1960-1969, 1980-1989, 1…
## $ TOTSQFT_EN <dbl> 2100, 590, 900, 2100, 800, 4520, 2100, 900, 7…
## $ TOTHSQFT <dbl> 2100, 590, 900, 2100, 800, 3010, 1200, 900, 7…
## $ TOTCSQFT <dbl> 2100, 590, 900, 2100, 800, 3010, 1200, 0, 500…
## $ SpaceHeatingUsed <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRU…
## $ ACUsed <lgl> TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, TRUE, FAL…
## $ HeatingBehavior <fct> Set one temp and leave it, Turn on or off as …
## $ WinterTempDay <dbl> 70, 70, 69, 68, 68, 76, 74, 70, 68, 70, 72, 7…
## $ WinterTempAway <dbl> 70, 65, 68, 68, 68, 76, 65, 70, 60, 70, 70, 7…
## $ WinterTempNight <dbl> 68, 65, 67, 68, 68, 68, 74, 68, 62, 68, 72, 7…
## $ ACBehavior <fct> Set one temp and leave it, Turn on or off as …
## $ SummerTempDay <dbl> 71, 68, 70, 72, 72, 69, 68, NA, 72, 74, 77, 7…
## $ SummerTempAway <dbl> 71, 68, 68, 72, 72, 74, 70, NA, 76, 74, 77, 7…
## $ SummerTempNight <dbl> 71, 68, 68, 72, 72, 68, 70, NA, 68, 72, 77, 7…
## $ BTUEL <dbl> 42723, 17889, 8147, 31647, 20027, 48968, 4940…
## $ DOLLAREL <dbl> 1955.06, 713.27, 334.51, 1424.86, 1087.00, 18…
## $ BTUNG <dbl> 101924.4, 10145.3, 22603.1, 55118.7, 39099.5,…
## $ DOLLARNG <dbl> 701.83, 261.73, 188.14, 636.91, 376.04, 439.4…
## $ BTULP <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 178…
## $ DOLLARLP <dbl> 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, …
## $ BTUFO <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 681…
## $ DOLLARFO <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 187…
## $ BTUWOOD <dbl> 0, 0, 0, 0, 0, 3000, 0, 0, 0, 0, 0, 0, 0, 0, …
## $ TOTALBTU <dbl> 144648, 28035, 30750, 86765, 59127, 85401, 13…
## $ TOTALDOL <dbl> 2656.9, 975.0, 522.6, 2061.8, 1463.0, 2335.1,…
```
From the output, we can see that the RECS data has 18,496 rows and 39 non\-weight variables. This output also indicates that most of the variables are in double (numeric) format (e.g., `TOTSQFT_EN`), with some factor (e.g., `Region`), Boolean (e.g., `ACUsed`), character (e.g., `REGIONC`), and ordinal (e.g., `YearMade`) variables.
### 4\.2\.3 Design objects
The design object is the backbone for survey analysis. It is where we specify the sampling design, weights, and other necessary information to ensure we account for errors in the data. Before creating the design object, we should carefully review the survey documentation to understand how to create the design object for accurate analysis.
In this section, we provide details on how to code the design object for the ANES and RECS data used in the book. However, we only provide a high\-level overview to get readers started. For a deeper understanding of creating design objects for a variety of sampling designs, see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
While we recommend conducting exploratory data analysis on the original data before diving into complex survey analysis (see Chapter [12](c12-recommendations.html#c12-recommendations)), the actual survey analysis and inference should be performed with the survey design objects instead of the original survey data. For example, the ANES data is called `anes_2020`. If we create a survey design object called `anes_des`, our survey analyses should begin with `anes_des` and not `anes_2020`. Using the survey design object ensures that our calculations appropriately account for the details of the survey design.
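For example, once `anes_des` exists (it is created in the next subsection), a weighted estimate comes from the design object rather than the raw data. The following is an illustrative sketch, not an analysis from the book:
```
# Weighted estimate that accounts for the survey design
anes_des %>%
  summarize(age_mean = survey_mean(Age, na.rm = TRUE))

# By contrast, an unweighted mean on the raw data ignores the survey design
anes_2020 %>%
  summarize(age_mean = mean(Age, na.rm = TRUE))
```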
#### American National Election Studies Design Object
The ANES documentation ([DeBell 2010](#ref-debell)) details the sampling and weighting implications for analyzing the survey data. From this documentation and as noted in Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation), the 2020 ANES data are weighted to the sample, not the population. To make generalizations about the population, we need to weight the data to the full population count. The ANES methodology recommends using the Current Population Survey (CPS) to determine the number of non\-institutional U.S. citizens aged 18 or older living in the 50 U.S. states or D.C. in March 2020\.
We can use the {censusapi} package to obtain the information needed for the survey design object. The `getCensus()` function allows us to retrieve the CPS data for March (`cps/basic/mar`) in 2020 (`vintage = 2020`). Additionally, we extract several variables from the CPS:
* month (`HRMONTH`) and year (`HRYEAR4`) of the interview: to confirm the correct time period
* age (`PRTAGE`) of the respondent: to narrow the population to 18 and older (eligible age to vote)
* citizenship status (`PRCITSHP`) of the respondent: to narrow the population to only those eligible to vote
* final person\-level weight (`PWSSWGT`)
Detailed information for these variables can be found in the [CPS data dictionary](https://www2.census.gov/programs-surveys/cps/datasets/2020/basic/2020_Basic_CPS_Public_Use_Record_Layout_plus_IO_Code_list.txt).
```
cps_state_in <- getCensus(
name = "cps/basic/mar",
vintage = 2020,
region = "state",
vars = c(
"HRMONTH", "HRYEAR4",
"PRTAGE", "PRCITSHP", "PWSSWGT"
),
key = Sys.getenv("CENSUS_KEY")
)
cps_state <- cps_state_in %>%
as_tibble() %>%
mutate(across(
.cols = everything(),
.fns = as.numeric
))
```
In the code above, we include `region = "state"`. The default region type for the CPS data is at the state level. While not required, including the region can be helpful for understanding the geographical context of the data.
In `getCensus()`, we specified the month (March, via the `cps/basic/mar` dataset name) and year (`vintage = 2020`) of our request. Therefore, we expect that all interviews within our output were conducted during that particular month and year. We can confirm that the data are from March 2020 by checking the `HRMONTH` and `HRYEAR4` variables:
```
cps_state %>%
distinct(HRMONTH, HRYEAR4)
```
```
## # A tibble: 1 × 2
## HRMONTH HRYEAR4
## <dbl> <dbl>
## 1 3 2020
```
We can narrow down the dataset using the age and citizenship variables to include only individuals who are 18 years or older (`PRTAGE >= 18`) and have U.S. citizenship (`PRCITSHP %in% c(1:4)`):
```
cps_narrow_resp <- cps_state %>%
filter(
PRTAGE >= 18,
PRCITSHP %in% c(1:4)
)
```
To calculate the U.S. population from the filtered data, we sum the person weights (`PWSSWGT`):
```
targetpop <- cps_narrow_resp %>%
pull(PWSSWGT) %>%
sum()
scales::comma(targetpop)
```
```
## [1] "231,034,125"
```
The population of interest in 2020 is 231,034,125\. This result gives us what we need to create the survey design object for estimating population statistics. Using the `anes_2020` data, we adjust the weighting variable (`V200010b`) using the population of interest we just calculated (`targetpop`). We determine the proportion of the total weight for each individual weight (`V200010b / sum(V200010b)`) and then multiply that proportion by the calculated population of interest.
```
anes_adjwgt <- anes_2020 %>%
mutate(Weight = V200010b / sum(V200010b) * targetpop)
```
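As a quick check (an illustrative sketch, not a step from the ANES documentation), the adjusted weights should now sum to the population of interest:
```
# The adjusted weights should sum (up to rounding) to targetpop
anes_adjwgt %>%
  summarize(weight_sum = sum(Weight))
```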
Once we have the adjusted weights, we can refer to the rest of the documentation to create the survey design. The documentation indicates that the study uses a stratified cluster sampling design. Therefore, we need to specify variables for `strata` and `ids` (cluster) and fill in the `nest` argument. The documentation provides guidance on which strata and cluster variables to use depending on whether we are analyzing pre\- or post\-election data. In this book, we analyze post\-election data, so we need to use the post\-election weight `V200010b`, strata variable `V200010d`, and Primary Sampling Unit (PSU)/cluster variable `V200010c`. Additionally, we set `nest=TRUE` to ensure the clusters are nested within the strata.
```
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = V200010d,
ids = V200010c,
nest = TRUE
)
anes_des
```
```
## Stratified 1 - level Cluster Sampling design (with replacement)
## With (101) clusters.
## Called via srvyr
## Sampling variables:
## - ids: V200010c
## - strata: V200010d
## - weights: Weight
## Data variables:
## - V200001 (dbl), CaseID (dbl), V200002 (dbl+lbl), InterviewMode
## (fct), V200010b (dbl), Weight (dbl), V200010c (dbl), VarUnit (fct),
## V200010d (dbl), Stratum (fct), V201006 (dbl+lbl), CampaignInterest
## (fct), V201023 (dbl+lbl), EarlyVote2020 (fct), V201024 (dbl+lbl),
## V201025x (dbl+lbl), V201028 (dbl+lbl), V201029 (dbl+lbl), V201101
## (dbl+lbl), V201102 (dbl+lbl), VotedPres2016 (fct), V201103
## (dbl+lbl), VotedPres2016_selection (fct), V201228 (dbl+lbl),
## V201229 (dbl+lbl), V201230 (dbl+lbl), V201231x (dbl+lbl), PartyID
## (fct), V201233 (dbl+lbl), TrustGovernment (fct), V201237 (dbl+lbl),
## TrustPeople (fct), V201507x (dbl+lbl), Age (dbl), AgeGroup (fct),
## V201510 (dbl+lbl), Education (fct), V201546 (dbl+lbl), V201547a
## (dbl+lbl), V201547b (dbl+lbl), V201547c (dbl+lbl), V201547d
## (dbl+lbl), V201547e (dbl+lbl), V201547z (dbl+lbl), V201549x
## (dbl+lbl), RaceEth (fct), V201600 (dbl+lbl), Gender (fct), V201607
## (dbl+lbl), V201610 (dbl+lbl), V201611 (dbl+lbl), V201613 (dbl+lbl),
## V201615 (dbl+lbl), V201616 (dbl+lbl), V201617x (dbl+lbl), Income
## (fct), Income7 (fct), V202051 (dbl+lbl), V202066 (dbl+lbl), V202072
## (dbl+lbl), VotedPres2020 (fct), V202073 (dbl+lbl), V202109x
## (dbl+lbl), V202110x (dbl+lbl), VotedPres2020_selection (fct)
```
We can examine this new object to learn more about the survey design; the output shows that the ANES is a “Stratified 1 \- level Cluster Sampling design (with replacement) With (101\) clusters.” It then displays the sampling variables and lists the remaining variables in the dataset. This design object is used throughout this book to conduct survey analysis.
#### Residential Energy Consumption Survey Design Object
The RECS documentation ([U.S. Energy Information Administration 2023b](#ref-recs-2020-tech)) provides information on the survey’s sampling and weighting implications for analysis. The documentation shows the 2020 RECS uses Jackknife weights, where the main analytic weight is `NWEIGHT`, and the Jackknife weights are `NWEIGHT1`\-`NWEIGHT60`. We can specify these in the `weights` and `repweights` arguments in the survey design object code, respectively.
With Jackknife weights, additional information is required: `type`, `scale`, and `mse`. Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) discusses in depth each of these arguments; but to quickly get started, the RECS documentation lets us know that `type=JK1`, `scale=59/60`, and `mse = TRUE`. We can use the following code to create the survey design object:
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE
)
recs_des
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), ClimateRegion_BA (fct), Urbanicity (fct), Region
## (fct), REGIONC (chr), Division (fct), STATE_FIPS (chr),
## state_postal (fct), state_name (fct), HDD65 (dbl), CDD65 (dbl),
## HDD30YR (dbl), CDD30YR (dbl), HousingUnitType (fct), YearMade
## (ord), TOTSQFT_EN (dbl), TOTHSQFT (dbl), TOTCSQFT (dbl),
## SpaceHeatingUsed (lgl), ACUsed (lgl), HeatingBehavior (fct),
## WinterTempDay (dbl), WinterTempAway (dbl), WinterTempNight (dbl),
## ACBehavior (fct), SummerTempDay (dbl), SummerTempAway (dbl),
## SummerTempNight (dbl), NWEIGHT (dbl), NWEIGHT1 (dbl), NWEIGHT2
## (dbl), NWEIGHT3 (dbl), NWEIGHT4 (dbl), NWEIGHT5 (dbl), NWEIGHT6
## (dbl), NWEIGHT7 (dbl), NWEIGHT8 (dbl), NWEIGHT9 (dbl), NWEIGHT10
## (dbl), NWEIGHT11 (dbl), NWEIGHT12 (dbl), NWEIGHT13 (dbl), NWEIGHT14
## (dbl), NWEIGHT15 (dbl), NWEIGHT16 (dbl), NWEIGHT17 (dbl), NWEIGHT18
## (dbl), NWEIGHT19 (dbl), NWEIGHT20 (dbl), NWEIGHT21 (dbl), NWEIGHT22
## (dbl), NWEIGHT23 (dbl), NWEIGHT24 (dbl), NWEIGHT25 (dbl), NWEIGHT26
## (dbl), NWEIGHT27 (dbl), NWEIGHT28 (dbl), NWEIGHT29 (dbl), NWEIGHT30
## (dbl), NWEIGHT31 (dbl), NWEIGHT32 (dbl), NWEIGHT33 (dbl), NWEIGHT34
## (dbl), NWEIGHT35 (dbl), NWEIGHT36 (dbl), NWEIGHT37 (dbl), NWEIGHT38
## (dbl), NWEIGHT39 (dbl), NWEIGHT40 (dbl), NWEIGHT41 (dbl), NWEIGHT42
## (dbl), NWEIGHT43 (dbl), NWEIGHT44 (dbl), NWEIGHT45 (dbl), NWEIGHT46
## (dbl), NWEIGHT47 (dbl), NWEIGHT48 (dbl), NWEIGHT49 (dbl), NWEIGHT50
## (dbl), NWEIGHT51 (dbl), NWEIGHT52 (dbl), NWEIGHT53 (dbl), NWEIGHT54
## (dbl), NWEIGHT55 (dbl), NWEIGHT56 (dbl), NWEIGHT57 (dbl), NWEIGHT58
## (dbl), NWEIGHT59 (dbl), NWEIGHT60 (dbl), BTUEL (dbl), DOLLAREL
## (dbl), BTUNG (dbl), DOLLARNG (dbl), BTULP (dbl), DOLLARLP (dbl),
## BTUFO (dbl), DOLLARFO (dbl), BTUWOOD (dbl), TOTALBTU (dbl),
## TOTALDOL (dbl)
```
Viewing this new object provides information about the survey design; the output shows that RECS is an “Unstratified cluster jacknife (JK1\) with 60 replicates and MSE variances.” It then shows the sampling variables (`NWEIGHT1`\-`NWEIGHT60`) and lists the remaining variables in the dataset. This design object is used throughout this book to conduct survey analysis.
4\.3 Survey analysis process
----------------------------
There is a general process for analyzing data to create estimates with the {srvyr} package:
1. Create a `tbl_svy` object (a survey object) using `as_survey_design()` or `as_survey_rep()`
2. Subset data (if needed) using `filter()` (to create subpopulations)
3. Specify domains of analysis using `group_by()`
4. Within `summarize()`, specify variables to calculate, including means, totals, proportions, quantiles, and more
In Section [4\.2\.3](c04-getting-started.html#setup-des-obj), we follow Step 1 to create the survey design objects for the ANES and RECS data featured in this book. Additional details on how to create design objects can be found in Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights). Then, once we have the design object, we can filter the data to any subpopulation of interest (if needed). It is important to filter the data after creating the design object. This ensures that we are accurately accounting for the survey design in our calculations. Finally, we can use `group_by()`, `summarize()`, and other functions from the {survey} and {srvyr} packages to analyze the survey data by estimating means, totals, and so on.
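For example, a minimal sketch of all four steps using the RECS design object might look like the following (the variable choices here are illustrative and not one of the book's worked analyses):
```
recs_des %>%                 # Step 1: start from the survey design object
  filter(ACUsed) %>%         # Step 2: subset to homes that use air conditioning
  group_by(Region) %>%       # Step 3: define domains of analysis
  summarize(                 # Step 4: calculate survey-weighted estimates
    summer_temp = survey_mean(SummerTempDay, na.rm = TRUE)
  )
```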
4\.4 Similarities between {dplyr} and {srvyr} functions
-------------------------------------------------------
The {dplyr} package from the tidyverse offers flexible and intuitive functions for data wrangling ([Wickham et al. 2023](#ref-R-dplyr)). One of the major advantages of using {srvyr} is that it applies {dplyr}\-like syntax to the {survey} package ([Freedman Ellis and Schneider 2024](#ref-R-srvyr)). We can use pipes, such as `%>%` from the {magrittr} package, to specify a survey design object, apply a function, and then feed that output into the next function’s first argument ([Bache and Wickham 2022](#ref-R-magrittr)). Functions follow the ‘tidy’ convention of snake\_case function names.
To help explain the similarities between {dplyr} functions and {srvyr} functions, we use the `towny` dataset from the {gt} package and the `apistrat` data that come with the {survey} package. The `towny` dataset provides population data for municipalities in Ontario, Canada for census years between 1996 and 2021\. Taking a look at `towny` with `dplyr::glimpse()`, we can see the dataset has 25 columns with a mix of character and numeric data.
```
towny %>%
glimpse()
```
```
## Rows: 414
## Columns: 25
## $ name <chr> "Addington Highlands", "Adelaide Metc…
## $ website <chr> "https://addingtonhighlands.ca", "htt…
## $ status <chr> "lower-tier", "lower-tier", "lower-ti…
## $ csd_type <chr> "township", "township", "township", "…
## $ census_div <chr> "Lennox and Addington", "Middlesex", …
## $ latitude <dbl> 45.00, 42.95, 44.13, 45.53, 43.86, 48…
## $ longitude <dbl> -77.25, -81.70, -79.93, -76.90, -79.0…
## $ land_area_km2 <dbl> 1293.99, 331.11, 371.53, 519.59, 66.6…
## $ population_1996 <int> 2429, 3128, 9359, 2837, 64430, 1027, …
## $ population_2001 <int> 2402, 3149, 10082, 2824, 73753, 956, …
## $ population_2006 <int> 2512, 3135, 10695, 2716, 90167, 958, …
## $ population_2011 <int> 2517, 3028, 10603, 2844, 109600, 864,…
## $ population_2016 <int> 2318, 2990, 10975, 2935, 119677, 969,…
## $ population_2021 <int> 2534, 3011, 10989, 2995, 126666, 954,…
## $ density_1996 <dbl> 1.88, 9.45, 25.19, 5.46, 966.84, 8.81…
## $ density_2001 <dbl> 1.86, 9.51, 27.14, 5.44, 1106.74, 8.2…
## $ density_2006 <dbl> 1.94, 9.47, 28.79, 5.23, 1353.05, 8.2…
## $ density_2011 <dbl> 1.95, 9.14, 28.54, 5.47, 1644.66, 7.4…
## $ density_2016 <dbl> 1.79, 9.03, 29.54, 5.65, 1795.87, 8.3…
## $ density_2021 <dbl> 1.96, 9.09, 29.58, 5.76, 1900.75, 8.1…
## $ pop_change_1996_2001_pct <dbl> -0.0111, 0.0067, 0.0773, -0.0046, 0.1…
## $ pop_change_2001_2006_pct <dbl> 0.0458, -0.0044, 0.0608, -0.0382, 0.2…
## $ pop_change_2006_2011_pct <dbl> 0.0020, -0.0341, -0.0086, 0.0471, 0.2…
## $ pop_change_2011_2016_pct <dbl> -0.0791, -0.0125, 0.0351, 0.0320, 0.0…
## $ pop_change_2016_2021_pct <dbl> 0.0932, 0.0070, 0.0013, 0.0204, 0.058…
```
Let’s examine the `towny` object’s class. We verify that it is a tibble, as indicated by `"tbl_df"`, by running the code below:
```
class(towny)
```
```
## [1] "tbl_df" "tbl" "data.frame"
```
All tibbles are data.frames, but not all data.frames are tibbles. Compared to data.frames, tibbles have some advantages, with their more readable printing behavior being the most noticeable. When working with tidyverse\-style code, we recommend making all your datasets tibbles for ease of analysis.
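For example, a plain data.frame can be converted with `as_tibble()` (a quick sketch using a built\-in dataset):
```
# Illustrative only: convert a base R data.frame to a tibble
mtcars_tbl <- as_tibble(mtcars)
class(mtcars_tbl)
```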
The {survey} package contains datasets related to the California Academic Performance Index, which measures student performance in schools with at least 100 students in California. We can access these datasets by loading the {survey} package and running `data(api)`.
Let’s work with the `apistrat` dataset, which is a stratified random sample, stratified by school type (`stype`) with three levels: `E` for elementary school, `M` for middle school, and `H` for high school. We first create the survey design object (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information). The sample is stratified by the `stype` variable and the sampling weights are found in the `pw` variable. We can use this information to construct the design object, `apistrat_des`.
```
data(api)
apistrat_des <- apistrat %>%
as_survey_design(
strata = stype,
weights = pw
)
```
When we check the class of `apistrat_des`, it is not a typical `data.frame`. Applying the `as_survey_design()` function transforms the data into a `tbl_svy`, a special class specifically for survey design objects. The {srvyr} package is designed to work with the `tbl_svy` class of objects.
```
class(apistrat_des)
```
```
## [1] "tbl_svy" "survey.design2" "survey.design"
```
Let’s look at how {dplyr} works with regular data frames. The example below calculates the mean and median for the `land_area_km2` variable in the `towny` dataset.
```
towny %>%
summarize(
area_mean = mean(land_area_km2),
area_median = median(land_area_km2)
)
```
```
## # A tibble: 1 × 2
## area_mean area_median
## <dbl> <dbl>
## 1 373. 273.
```
In the code below, we calculate the mean and median of the variable `api00` using `apistrat_des`. Note the similarity in the syntax; however, {srvyr} also calculates the standard error of each statistic in addition to the statistic itself.
```
apistrat_des %>%
summarize(
api00_mean = survey_mean(api00),
api00_med = survey_median(api00)
)
```
```
## # A tibble: 1 × 4
## api00_mean api00_mean_se api00_med api00_med_se
## <dbl> <dbl> <dbl> <dbl>
## 1 662. 9.54 668 13.7
```
The functions in {srvyr} also play nicely with other tidyverse functions. For example, if we wanted to select columns with shared characteristics, we can use {tidyselect} functions such as `starts_with()`, `num_range()`, etc. ([Henry and Wickham 2024](#ref-R-tidyselect)). In the examples below, we use a combination of `across()` and `starts_with()` to calculate the mean of variables starting with “population” in the `towny` data frame and those beginning with `api` in the `apistrat_des` survey object.
```
towny %>%
summarize(across(
starts_with("population"),
~ mean(.x, na.rm = TRUE)
))
```
```
## # A tibble: 1 × 6
## population_1996 population_2001 population_2006 population_2011
## <dbl> <dbl> <dbl> <dbl>
## 1 25866. 27538. 29173. 30838.
## # ℹ 2 more variables: population_2016 <dbl>, population_2021 <dbl>
```
```
apistrat_des %>%
summarize(across(
starts_with("api"),
survey_mean
))
```
```
## # A tibble: 1 × 6
## api00 api00_se api99 api99_se api.stu api.stu_se
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 662. 9.54 629. 10.1 498. 16.4
```
We have the flexibility to use {dplyr} verbs such as `mutate()`, `filter()`, and `select()` on our survey design object. As mentioned in Section [4\.3](c04-getting-started.html#survey-analysis-process), these steps should be performed on the survey design object. This ensures our survey design is properly considered in all our calculations.
```
apistrat_des_mod <- apistrat_des %>%
mutate(api_diff = api00 - api99) %>%
filter(stype == "E") %>%
select(stype, api99, api00, api_diff, api_students = api.stu)
apistrat_des_mod
```
```
## Stratified Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - weights: pw
## Data variables:
## - stype (fct), api99 (int), api00 (int), api_diff (int), api_students
## (int)
```
```
apistrat_des
```
```
## Stratified Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - weights: pw
## Data variables:
## - cds (chr), stype (fct), name (chr), sname (chr), snum (dbl), dname
## (chr), dnum (int), cname (chr), cnum (int), flag (int), pcttest
## (int), api00 (int), api99 (int), target (int), growth (int),
## sch.wide (fct), comp.imp (fct), both (fct), awards (fct), meals
## (int), ell (int), yr.rnd (fct), mobility (int), acs.k3 (int),
## acs.46 (int), acs.core (int), pct.resp (int), not.hsg (int), hsg
## (int), some.col (int), col.grad (int), grad.sch (int), avg.ed
## (dbl), full (int), emer (int), enroll (int), api.stu (int), pw
## (dbl), fpc (dbl)
```
Several functions in {srvyr} must be called within `srvyr::summarize()`, with the exception of `srvyr::survey_count()` and `srvyr::survey_tally()`. This is similar to how `dplyr::count()` and `dplyr::tally()` are not called within `dplyr::summarize()`. The `summarize()` function can be used in conjunction with the `group_by()` function or `by/.by` arguments, which applies the functions on a group\-by\-group basis to create grouped summaries.
```
towny %>%
group_by(csd_type) %>%
dplyr::summarize(
area_mean = mean(land_area_km2),
area_median = median(land_area_km2)
)
```
```
## # A tibble: 5 × 3
## csd_type area_mean area_median
## <chr> <dbl> <dbl>
## 1 city 498. 198.
## 2 municipality 607. 488.
## 3 town 183. 129.
## 4 township 363. 301.
## 5 village 23.0 3.3
```
We use a similar setup to summarize data in {srvyr}:
```
apistrat_des %>%
group_by(stype) %>%
summarize(
api00_mean = survey_mean(api00),
api00_median = survey_median(api00)
)
```
```
## # A tibble: 3 × 5
## stype api00_mean api00_mean_se api00_median api00_median_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 E 674. 12.5 671 20.7
## 2 H 626. 15.5 635 21.6
## 3 M 637. 16.6 648 24.1
```
An alternative way to do grouped analysis on the `towny` data would be with the `.by` argument:
```
towny %>%
dplyr::summarize(
area_mean = mean(land_area_km2),
area_median = median(land_area_km2),
.by = csd_type
)
```
```
## # A tibble: 5 × 3
## csd_type area_mean area_median
## <chr> <dbl> <dbl>
## 1 township 363. 301.
## 2 town 183. 129.
## 3 municipality 607. 488.
## 4 city 498. 198.
## 5 village 23.0 3.3
```
The `.by` syntax is similarly implemented in {srvyr} for grouped analysis:
```
apistrat_des %>%
summarize(
api00_mean = survey_mean(api00),
api00_median = survey_median(api00),
.by = stype
)
```
```
## # A tibble: 3 × 5
## stype api00_mean api00_mean_se api00_median api00_median_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 E 674. 12.5 671 20.7
## 2 H 626. 15.5 635 21.6
## 3 M 637. 16.6 648 24.1
```
As mentioned above, {srvyr} functions are meant for `tbl_svy` objects. Attempting to manipulate data on non\-`tbl_svy` objects, like the `towny` example shown below, results in an error. Running the code lets us know what the issue is: `Survey context not set`.
```
towny %>%
summarize(area_mean = survey_mean(land_area_km2))
```
```
## Error in `summarize()`:
## ℹ In argument: `area_mean = survey_mean(land_area_km2)`.
## Caused by error in `cur_svy()`:
## ! Survey context not set
```
A few functions in {srvyr} have counterparts in {dplyr}, such as `srvyr::summarize()` and `srvyr::group_by()`. Unlike {srvyr}\-specific verbs, {srvyr} recognizes these parallel functions if applied to a non\-survey object. Instead of causing an error, the package provides the equivalent output from {dplyr}:
```
towny %>%
srvyr::summarize(area_mean = mean(land_area_km2))
```
```
## # A tibble: 1 × 1
## area_mean
## <dbl>
## 1 373.
```
Because this book focuses on survey analysis, most of our pipes stem from a survey object. When we load the {dplyr} and {srvyr} packages, the functions automatically figure out the class of data and use the appropriate one from {dplyr} or {srvyr}. Therefore, we do not need to include the namespace for each function (e.g., `srvyr::summarize()`).
Chapter 5 Descriptive analyses
==============================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(srvyr)
library(srvyrexploR)
library(broom)
```
We are using data from ANES and RECS described in Chapter [4](c04-getting-started.html#c04-getting-started). As a reminder, here is the code to create the design objects for each to use throughout this chapter. For ANES, we need to adjust the weight so it sums to the population instead of the sample (see the ANES documentation and Chapter [4](c04-getting-started.html#c04-getting-started) for more information).
```
targetpop <- 231592693
anes_adjwgt <- anes_2020 %>%
mutate(Weight = Weight / sum(Weight) * targetpop)
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = Stratum,
ids = VarUnit,
nest = TRUE
)
```
For RECS, details are included in the RECS documentation and Chapters [4](c04-getting-started.html#c04-getting-started) and [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE
)
```
5\.1 Introduction
-----------------
Descriptive analyses, such as basic counts, cross\-tabulations, or means, are among the first steps in making sense of our survey results. During descriptive analyses, we calculate point estimates of unknown population parameters, such as population mean, and uncertainty estimates, such as confidence intervals. By reviewing the findings, we can glean insight into the data, the underlying population, and any unique aspects of the data or population. For example, if only 10% of survey respondents are male, it could indicate a unique population, a potential error or bias, an intentional survey sampling method, or other factors. Additionally, descriptive analyses provide summaries of distribution and other measures. These analyses lay the groundwork for the next steps of running statistical tests or developing models.
We discuss many different types of descriptive analyses in this chapter. However, it is important to know what type of data we are working with and which statistics are appropriate. In survey data, we typically consider data as one of four main types:
* Categorical/nominal data: variables with levels or descriptions that cannot be ordered, such as the region of the country (North, South, East, and West)
* Ordinal data: variables that can be ordered, such as those from a Likert scale (strongly disagree, disagree, agree, and strongly agree)
* Discrete data: variables that are counted or measured, such as number of children
* Continuous data: variables that are measured and whose values can lie anywhere on an interval, such as income
This chapter discusses how to analyze measures of distribution (e.g., cross\-tabulations), central tendency (e.g., means), relationship (e.g., ratios), and dispersion (e.g., standard deviation) using functions from the {srvyr} package ([Freedman Ellis and Schneider 2024](#ref-R-srvyr)).
Measures of distribution describe how often an event or response occurs. These measures include counts and totals. We cover the following functions:
* Count of observations (`survey_count()` and `survey_tally()`)
* Summation of variables (`survey_total()`)
Measures of central tendency find the central (or average) responses. These measures include means and medians. We cover the following functions:
* Means and proportions (`survey_mean()` and `survey_prop()`)
* Quantiles and medians (`survey_quantile()` and `survey_median()`)
Measures of relationship describe how variables relate to each other. These measures include correlations and ratios. We cover the following functions:
* Correlations (`survey_corr()`)
* Ratios (`survey_ratio()`)
Measures of dispersion describe how data spread around the central tendency for continuous variables. These measures include standard deviations and variances. We cover the following functions:
* Variances and standard deviations (`survey_var()` and `survey_sd()`)
To incorporate each of these survey functions, recall the general process for survey estimation from Chapter [4](c04-getting-started.html#c04-getting-started):
1. Create a `tbl_svy` object using `srvyr::as_survey_design()` or `srvyr::as_survey_rep()`.
2. Subset the data for subpopulations using `srvyr::filter()`, if needed.
3. Specify domains of analysis using `srvyr::group_by()`, if needed.
4. Analyze the data with survey\-specific functions.
This chapter walks through how to apply the survey functions in Step 4\. Note that unless otherwise specified, our estimates are weighted as a result of setting up the survey design object.
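Before walking through each function, here is a minimal sketch of how the four steps fit together in a single pipeline. It uses the `recs_des` object created in the Prerequisites along with variables (`ACUsed`, `Region`, `DOLLAREL`) that are formally introduced later in this chapter; the particular estimate is illustrative only (output omitted):
```
recs_des %>%                                    # Step 1: start from the tbl_svy object
  filter(ACUsed) %>%                            # Step 2: subset to housing units that use A/C
  group_by(Region) %>%                          # Step 3: specify domains of analysis
  summarize(elec_bill = survey_mean(DOLLAREL))  # Step 4: survey-specific estimation
```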
To look at the data by different subgroups, we can choose to filter and/or group the data. It is very important that we filter and group the data only after creating the design object. This ensures that the results accurately reflect the survey design. If we filter or group data before creating the survey design object, the data for those cases are not included in the survey design information and estimations of the variance, leading to inaccurate results.
For the sake of simplicity, we’ve removed cases with missing values in the examples below. For a more detailed explanation of how to handle missing data, please refer to Chapter [11](c11-missing-data.html#c11-missing-data).
5\.2 Counts and cross\-tabulations
----------------------------------
Using `survey_count()` and `survey_tally()`, we can calculate the estimated population counts for a given variable or combination of variables. These summaries, often referred to as cross\-tabulations or cross\-tabs, are applied to categorical data. They help in estimating counts of the population size for different groups based on the survey data.
### 5\.2\.1 Syntax
The syntax for `survey_count()` is similar to the `dplyr::count()` syntax, as mentioned in Chapter [4](c04-getting-started.html#c04-getting-started). However, as noted above, this function can only be called on `tbl_svy` objects. Let’s explore the syntax:
```
survey_count(
x,
...,
wt = NULL,
sort = FALSE,
name = "n",
.drop = dplyr::group_by_drop_default(x),
vartype = c("se", "ci", "var", "cv")
)
```
The arguments are:
* `x`: a `tbl_svy` object created by `as_survey`
* `...`: variables to group by, passed to `group_by`
* `wt`: a variable to weight on in addition to the survey weights, defaults to `NULL`
* `sort`: how to sort the variables, defaults to `FALSE`
* `name`: the name of the count variable, defaults to `n`
* `.drop`: whether to drop empty groups
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
To generate a count or cross\-tabs by different variables, we include them in the (`...`) argument. This argument can take any number of variables and breaks down the counts by all combinations of the provided variables. This is similar to `dplyr::count()`. To obtain an estimate of the overall population, we can exclude any variables from the (`...`) argument or use the `survey_tally()` function. While the `survey_tally()` function has a similar syntax to the `survey_count()` function, it does not include the (`...`) or the `.drop` arguments:
```
survey_tally(
x,
wt,
sort = FALSE,
name = "n",
vartype = c("se", "ci", "var", "cv")
)
```
Both functions include the `vartype` argument with four different values:
* `se`: standard error
+ The estimated standard deviation of the estimate
+ Output has a column with the variable name specified in the `name` argument with a suffix of “\_se”
* `ci`: confidence interval
+ The lower and upper limits of a confidence interval
+ Output has two columns with the variable name specified in the `name` argument with a suffix of “\_low” and “\_upp”
+ By default, this is a 95% confidence interval but can be changed by using the argument level and specifying a number between 0 and 1\. For example, `level=0.8` would produce an 80% confidence interval.
* `var`: variance
+ The estimated variance of the estimate
+ Output has a column with the variable name specified in the `name` argument with a suffix of “\_var”
* `cv`: coefficient of variation
+ A ratio of the standard error and the estimate
+ Output has a column with the variable name specified in the `name` argument with a suffix of “\_cv”
The confidence intervals are always calculated using a symmetric t\-distribution based method, given by the formula:
\\\[ \\text{estimate} \\pm t^\*\_{df}\\times SE\\]
where \\(t^\*\_{df}\\) is the critical value from a t\-distribution based on the confidence level and the degrees of freedom. By default, the degrees of freedom are based on the design or number of replicates, but they can be specified using the `df` argument. For survey design objects, the degrees of freedom are calculated as the number of primary sampling units (PSUs or clusters) minus the number of strata (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on PSUs, strata, and sample designs). For replicate\-based objects, the degrees of freedom are calculated as one less than the rank of the matrix of replicate weights, which is typically the number of replicates. Note that specifying `df = Inf` is equivalent to using a normal (z\-based) confidence interval; this is the default in {survey}. These variability types are the same for most of the survey functions, and we provide examples using different variability types throughout this chapter.
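As a quick illustration of how `vartype` changes the output, the sketch below requests both the standard error and the confidence interval from `survey_count()`; following the naming conventions above, the result gains `n_se`, `n_low`, and `n_upp` columns (output omitted):
```
# Sketch: request two variability types at once for the estimated counts
recs_des %>%
  survey_count(Region, vartype = c("se", "ci"))
```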
### 5\.2\.2 Examples
#### Example 1: Estimated population count
If we want to obtain the estimated number of households in the U.S. (the population of interest) using the Residential Energy Consumption Survey (RECS) data, we can use `survey_count()`. If we do not specify any variables in the `survey_count()` function, it outputs the estimated population count (`n`) and its corresponding standard error (`n_se`).
```
recs_des %>%
survey_count()
```
```
## # A tibble: 1 × 2
## n n_se
## <dbl> <dbl>
## 1 123529025. 0.148
```
Based on this calculation, the estimated number of households in the U.S. is 123,529,025\.
Alternatively, we could also use the `survey_tally()` function. The example below yields the same results as `survey_count()`.
```
recs_des %>%
survey_tally()
```
```
## # A tibble: 1 × 2
## n n_se
## <dbl> <dbl>
## 1 123529025. 0.148
```
#### Example 2: Estimated counts by subgroups (cross\-tabs)
To calculate the estimated number of observations for specific subgroups, such as Region and Division, we can include the variables of interest in the `survey_count()` function. In the example below, we calculate the estimated number of housing units by region and division. The argument `name =` in `survey_count()` allows us to change the name of the count variable in the output from the default `n` to `N`.
```
recs_des %>%
survey_count(Region, Division, name = "N")
```
```
## # A tibble: 10 × 4
## Region Division N N_se
## <fct> <fct> <dbl> <dbl>
## 1 Northeast New England 5876166 0.0000000137
## 2 Northeast Middle Atlantic 16043503 0.0000000487
## 3 Midwest East North Central 18546912 0.000000437
## 4 Midwest West North Central 8495815 0.0000000177
## 5 South South Atlantic 24843261 0.0000000418
## 6 South East South Central 7380717. 0.114
## 7 South West South Central 14619094 0.000488
## 8 West Mountain North 4615844 0.119
## 9 West Mountain South 4602070 0.0000000492
## 10 West Pacific 18505643. 0.00000295
```
When we run the cross\-tab, we see that there are an estimated 5,876,166 housing units in the New England Division.
The code results in an error if we try to use the `survey_count()` syntax with `survey_tally()`:
```
recs_des %>%
survey_tally(Region, Division, name = "N")
```
```
## Error in `dplyr::summarise()`:
## ℹ In argument: `N = survey_total(Region, vartype = vartype,
## na.rm = TRUE)`.
## Caused by error:
## ! Factor not allowed in survey functions, should be used as a grouping variable.
```
Use a `group_by()` function prior to using `survey_tally()` to successfully run the cross\-tab:
```
recs_des %>%
group_by(Region, Division) %>%
survey_tally(name = "N")
```
```
## # A tibble: 10 × 4
## # Groups: Region [4]
## Region Division N N_se
## <fct> <fct> <dbl> <dbl>
## 1 Northeast New England 5876166 0.0000000137
## 2 Northeast Middle Atlantic 16043503 0.0000000487
## 3 Midwest East North Central 18546912 0.000000437
## 4 Midwest West North Central 8495815 0.0000000177
## 5 South South Atlantic 24843261 0.0000000418
## 6 South East South Central 7380717. 0.114
## 7 South West South Central 14619094 0.000488
## 8 West Mountain North 4615844 0.119
## 9 West Mountain South 4602070 0.0000000492
## 10 West Pacific 18505643. 0.00000295
```
5\.3 Totals and sums
--------------------
The `survey_total()` function is analogous to `sum`. It can be applied to continuous variables to obtain the estimated total quantity in a population. Starting from this point in the chapter, all the introduced functions must be called within `summarize()`.
### 5\.3\.1 Syntax
Here is the syntax:
```
survey_total(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
deff = FALSE,
df = NULL
)
```
The arguments are:
* `x`: a variable, expression, or empty
* `na.rm`: an indicator of whether missing values should be dropped, defaults to `FALSE`
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: a number or a vector indicating the confidence level, defaults to 0\.95
* `deff`: a logical value stating whether the design effect should be returned, defaults to FALSE (this is described in more detail in Section [5\.9\.3](c05-descriptive-analysis.html#desc-deff))
* `df`: (for `vartype = 'ci'`), a numeric value indicating degrees of freedom for the t\-distribution
### 5\.3\.2 Examples
#### Example 1: Estimated population count
To calculate a population count estimate with `survey_total()`, we leave the argument `x` empty, as shown in the example below:
```
recs_des %>%
summarize(Tot = survey_total())
```
```
## # A tibble: 1 × 2
## Tot Tot_se
## <dbl> <dbl>
## 1 123529025. 0.148
```
The estimated number of households in the U.S. is 123,529,025\. Note that this result obtained from `survey_total()` is equivalent to the ones from the `survey_count()` and `survey_tally()` functions. However, the `survey_total()` function is called within `summarize()`, whereas `survey_count()` and `survey_tally()` are not.
#### Example 2: Overall summation of continuous variables
The distinction between `survey_total()` and `survey_count()` becomes more evident when working with continuous variables. Let’s compute the total cost of electricity in whole dollars from variable `DOLLAREL`[4](#fn4).
```
recs_des %>%
summarize(elec_bill = survey_total(DOLLAREL))
```
```
## # A tibble: 1 × 2
## elec_bill elec_bill_se
## <dbl> <dbl>
## 1 170473527909. 664893504.
```
It is estimated that American residential households spent a total of $170,473,527,909 on electricity in 2020, and the estimate has a standard error of $664,893,504\.
#### Example 3: Summation by groups
Since we are using the {srvyr} package, we can use `group_by()` to calculate the cost of electricity for different groups. Let’s examine the variations in the cost of electricity in whole dollars across regions and display the confidence interval instead of the default standard error.
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_total(DOLLAREL,
vartype = "ci"
))
```
```
## # A tibble: 4 × 4
## Region elec_bill elec_bill_low elec_bill_upp
## <fct> <dbl> <dbl> <dbl>
## 1 Northeast 29430369947. 28788987554. 30071752341.
## 2 Midwest 34972544751. 34339576041. 35605513460.
## 3 South 72496840204. 71534780902. 73458899506.
## 4 West 33573773008. 32909111702. 34238434313.
```
The survey results estimate that households in the Northeast spent $29,430,369,947 with a confidence interval of ($28,788,987,554, $30,071,752,341\) on electricity in 2020, while households in the South spent an estimated $72,496,840,204 with a confidence interval of ($71,534,780,902, $73,458,899,506\).
As we calculate these numbers, we may notice that the confidence interval of the South is larger than those of other regions. This implies that we have less certainty about the true value of electricity spending in the South. A larger confidence interval could be due to a variety of factors, such as a wider range of electricity spending in the South. We could try to analyze smaller regions within the South to identify areas that are contributing to more variability. Descriptive analyses serve as a valuable starting point for more in\-depth exploration and analysis.
5\.4 Means and proportions
--------------------------
Means and proportions form the foundation of many research studies. These estimates are often the first things we look for when reviewing research on a given topic. The `survey_mean()` and `survey_prop()` functions calculate means and proportions while taking into account the survey design elements. The `survey_mean()` function should be used on continuous variables of survey data, while the `survey_prop()` function should be used on categorical variables.
### 5\.4\.1 Syntax
The syntax for both means and proportions is very similar:
```
survey_mean(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
proportion = FALSE,
prop_method = c("logit", "likelihood", "asin", "beta", "mean"),
deff = FALSE,
df = NULL
)
survey_prop(
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
proportion = TRUE,
prop_method =
c("logit", "likelihood", "asin", "beta", "mean", "xlogit"),
deff = FALSE,
df = NULL
)
```
Both functions have the following arguments and defaults:
* `na.rm`: an indicator of whether missing values should be dropped, defaults to `FALSE`
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: a number or a vector indicating the confidence level, defaults to 0\.95
* `prop_method`: method used to calculate the confidence interval for proportions
* `deff`: a logical value stating whether the design effect should be returned, defaults to FALSE (this is described in more detail in Section [5\.9\.3](c05-descriptive-analysis.html#desc-deff))
* `df`: (for `vartype = 'ci'`), a numeric value indicating degrees of freedom for the t\-distribution
There are two main differences in the syntax. The `survey_mean()` function includes the first argument `x`, representing the variable or expression on which the mean should be calculated. The `survey_prop()` function does not have an argument to specify the variables directly. Instead, prior to `summarize()`, we must use the `group_by()` function to specify the variables of interest for `survey_prop()`. For `survey_mean()`, including a `group_by()` function allows us to obtain the means by different groups.
The other main difference is with the `proportion` argument. The `survey_mean()` function can be used to calculate both means and proportions. Its `proportion` argument defaults to `FALSE`, indicating it is used for calculating means. If we wish to calculate a proportion using `survey_mean()`, we need to set the `proportion` argument to `TRUE`. In the `survey_prop()` function, the `proportion` argument defaults to `TRUE` because the function is specifically designed for calculating proportions.
In Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax), we provide an overview of different variability types. The confidence interval used for most measures, such as means and counts, is referred to as a Wald\-type interval. However, for proportions, a Wald\-type interval with a symmetric t\-based confidence interval may not provide accurate coverage, especially when dealing with small sample sizes or proportions “near” 0 or 1\. We can use other methods to calculate confidence intervals, which we specify using the `prop_method` option in `survey_prop()`. The options include:
* `logit`: fits a logistic regression model and computes a Wald\-type interval on the log\-odds scale, which is then transformed to the probability scale. This is the default method.
* `likelihood`: uses the (Rao\-Scott) scaled chi\-squared distribution for the log\-likelihood from a binomial distribution.
* `asin`: uses the variance\-stabilizing transformation for the binomial distribution, the arcsine square root, and then back\-transforms the interval to the probability scale.
* `beta`: uses the incomplete beta function with an effective sample size based on the estimated variance of the proportion.
* `mean`: the Wald\-type interval (\\(\\pm t\_{df}^\*\\times SE\\)).
* `xlogit`: uses a logit transformation of the proportion, calculates a Wald\-type interval, and then back\-transforms to the probability scale. This method is the same as those used by default in SUDAAN and SPSS.
Each option yields slightly different confidence interval bounds when dealing with proportions. Please note that when working with `survey_mean()`, we do not need to specify a method unless the `proportion` argument is `TRUE`. If `proportion` is `FALSE`, it calculates a symmetric `mean` type of confidence interval.
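As a brief sketch of these options (output omitted), the first call below requests a confidence interval for a proportion using the logit method, and the second uses `survey_mean()` on a logical variable with `proportion = TRUE` so that a proportion\-appropriate interval is used; `ACUsed` is a logical RECS variable that appears again later in this chapter.
```
# Proportion of housing units in each level of ACUsed, with a logit-based CI
recs_des %>%
  group_by(ACUsed) %>%
  summarize(p = survey_prop(vartype = "ci", prop_method = "logit"))

# Proportion of housing units with A/C (the TRUE level) via survey_mean();
# proportion = TRUE requests a proportion-style confidence interval
recs_des %>%
  summarize(p_ac = survey_mean(ACUsed, proportion = TRUE, vartype = "ci"))
```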
### 5\.4\.2 Examples
#### Example 1: One variable proportion
If we are interested in obtaining the proportion of people in each region in the RECS data, we can use `group_by()` and `survey_prop()` as shown below:
```
recs_des %>%
group_by(Region) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 4 × 3
## Region p p_se
## <fct> <dbl> <dbl>
## 1 Northeast 0.177 0.000000000212
## 2 Midwest 0.219 0.000000000262
## 3 South 0.379 0.000000000740
## 4 West 0.224 0.000000000816
```
17\.7% of the households are in the Northeast, 21\.9% are in the Midwest, and so on. Note that the proportions in column `p` add up to one.
The `survey_prop()` function is essentially the same as using `survey_mean()` with a categorical variable and without specifying a numeric variable in the `x` argument. The following code gives us the same results as above:
```
recs_des %>%
group_by(Region) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 4 × 3
## Region p p_se
## <fct> <dbl> <dbl>
## 1 Northeast 0.177 0.000000000212
## 2 Midwest 0.219 0.000000000262
## 3 South 0.379 0.000000000740
## 4 West 0.224 0.000000000816
```
#### Example 2: Conditional proportions
We can also obtain proportions by more than one variable. In the following example, we look at the proportion of housing units by Region and whether air conditioning (A/C) is used (`ACUsed`)[5](#fn5).
```
recs_des %>%
group_by(Region, ACUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 8 × 4
## # Groups: Region [4]
## Region ACUsed p p_se
## <fct> <lgl> <dbl> <dbl>
## 1 Northeast FALSE 0.110 0.00590
## 2 Northeast TRUE 0.890 0.00590
## 3 Midwest FALSE 0.0666 0.00508
## 4 Midwest TRUE 0.933 0.00508
## 5 South FALSE 0.0581 0.00278
## 6 South TRUE 0.942 0.00278
## 7 West FALSE 0.255 0.00759
## 8 West TRUE 0.745 0.00759
```
When specifying multiple variables, the proportions are conditional. In the results above, notice that the proportions sum to 1 within each region. This can be interpreted as the proportion of housing units with A/C within each region. For example, in the Northeast region, approximately 11\.0% of housing units don’t have A/C, while around 89\.0% have A/C.
#### Example 3: Joint proportions
If we’re interested in a joint proportion, we use the `interact()` function. In the example below, we apply the `interact()` function to `Region` and `ACUsed`:
```
recs_des %>%
group_by(interact(Region, ACUsed)) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 8 × 4
## Region ACUsed p p_se
## <fct> <lgl> <dbl> <dbl>
## 1 Northeast FALSE 0.0196 0.00105
## 2 Northeast TRUE 0.158 0.00105
## 3 Midwest FALSE 0.0146 0.00111
## 4 Midwest TRUE 0.204 0.00111
## 5 South FALSE 0.0220 0.00106
## 6 South TRUE 0.357 0.00106
## 7 West FALSE 0.0573 0.00170
## 8 West TRUE 0.167 0.00170
```
In this case, all proportions sum to 1, not just within regions. This means that 15\.8% of the population lives in the Northeast and has A/C. As noted earlier, we can use both the `survey_prop()` and `survey_mean()` functions, and they produce the same results.
#### Example 4: Overall mean
Below, we calculate the estimated average cost of electricity in the U.S. using `survey_mean()`. To include both the standard error and the confidence interval, we can include them in the `vartype` argument:
```
recs_des %>%
summarize(elec_bill = survey_mean(DOLLAREL,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## elec_bill elec_bill_se elec_bill_low elec_bill_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 1380. 5.38 1369. 1391.
```
Nationally, the average household spent $1,380 in 2020\.
#### Example 5: Means by subgroup
We can also calculate the estimated average cost of electricity in the U.S. by each region. To do this, we include a `group_by()` function with the variable of interest before the `summarize()` function:
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_mean(DOLLAREL))
```
```
## # A tibble: 4 × 3
## Region elec_bill elec_bill_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
```
Households from the West spent approximately $1,211, while in the South, the average spending was $1,548\.
5\.5 Quantiles and medians
--------------------------
To better understand the distribution of a continuous variable like income, we can calculate quantiles at specific points. For example, computing estimates of the quartiles (25%, 50%, 75%) helps us understand how income is spread across the population. We use the `survey_quantile()` function to calculate quantiles in survey data.
Medians are useful for finding the midpoint of a continuous distribution when the data are skewed, as medians are less affected by outliers compared to means. The median is the same as the 50th percentile, meaning the value where 50% of the data are higher and 50% are lower. Because medians are a special, common case of quantiles, we have a dedicated function called `survey_median()` for calculating the median in survey data. Alternatively, we can use the `survey_quantile()` function with the `quantiles` argument set to `0.5` to achieve the same result.
### 5\.5\.1 Syntax
The syntax for `survey_quantile()` and `survey_median()` are nearly identical:
```
survey_quantile(
x,
quantiles,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
interval_type =
c("mean", "beta", "xlogit", "asin", "score", "quantile"),
qrule = c("math", "school", "shahvaish", "hf1", "hf2", "hf3",
"hf4", "hf5", "hf6", "hf7", "hf8", "hf9"),
df = NULL
)
survey_median(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
interval_type =
c("mean", "beta", "xlogit", "asin", "score", "quantile"),
qrule = c("math", "school", "shahvaish", "hf1", "hf2", "hf3",
"hf4", "hf5", "hf6", "hf7", "hf8", "hf9"),
df = NULL
)
```
The arguments available in both functions are:
* `x`: a variable, expression, or empty
* `na.rm`: an indicator of whether missing values should be dropped, defaults to `FALSE`
* `vartype`: type(s) of variation estimate to calculate, defaults to `se` (standard error)
* `level`: a number or a vector indicating the confidence level, defaults to 0\.95
* `interval_type`: method for calculating a confidence interval
* `qrule`: rule for defining quantiles. The default is the lower end of the quantile interval (“math”). The midpoint of the quantile interval is the “school” rule. “hf1” to “hf9” are weighted analogs to type\=1 to 9 in `quantile()`. “shahvaish” corresponds to a rule proposed by Shah and Vaish ([2006](#ref-shahvaish)). See `vignette("qrule", package="survey")` for more information.
* `df`: (for `vartype = 'ci'`), a numeric value indicating degrees of freedom for the t\-distribution
The only difference between `survey_quantile()` and `survey_median()` is the inclusion of the `quantiles` argument in the `survey_quantile()` function. This argument takes a vector with values between 0 and 1 to indicate which quantiles to calculate. For example, if we wanted the quartiles of a variable, we would provide `quantiles = c(0.25, 0.5, 0.75)`. While we can specify quantiles of 0 and 1, which represent the minimum and maximum, this is not recommended. It only returns the minimum and maximum of the respondents and cannot be extrapolated to the population, as there is no valid definition of standard error.
In Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax), we provide an overview of the different variability types. The interval used in confidence intervals for most measures, such as means and counts, is referred to as a Wald\-type interval. However, this is not always the most accurate interval for quantiles. Similar to confidence intervals for proportions, quantiles have various interval types, including asin, beta, mean, and xlogit (see Section [5\.4\.1](c05-descriptive-analysis.html#desc-meanprop-syntax)). Quantiles also have two more methods available:
* `score`: the Francisco and Fuller confidence interval based on inverting a score test (only available for design\-based survey objects and not replicate\-based objects)
* `quantile`: based on the replicates of the quantile. This is not valid for jackknife\-type replicates but is available for bootstrap and BRR replicates.
One note with the `score` method is that when there are numerous ties in the data, this method may produce confidence intervals that do not contain the estimate. When dealing with a high propensity for ties (e.g., many respondents are the same age), it is recommended to use another method. SUDAAN, for example, uses the `score` method but adds noise to the values to prevent issues. The documentation in the {survey} package indicates, in general, that the `score` method may have poorer performance compared to the beta and logit intervals ([Lumley 2010](#ref-lumley2010complex)).
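As a short sketch (output omitted), we can request a confidence interval for a quantile and specify the `interval_type`. Because `recs_des` is a replicate\-based design, the `score` method is not available, so here we assume the Wald\-type (`"mean"`) interval:
```
# Median electricity cost with a confidence interval, using the
# Wald-type ("mean") interval for quantiles
recs_des %>%
  summarize(elec_med = survey_median(DOLLAREL,
    vartype = "ci",
    interval_type = "mean"
  ))
```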
### 5\.5\.2 Examples
#### Example 1: Overall quartiles
Quantiles provide insights into the distribution of a variable. Let’s look into the quartiles, specifically, the first quartile (p\=0\.25\), the median (p\=0\.5\), and the third quartile (p\=0\.75\) of electric bills.
```
recs_des %>%
summarize(elec_bill = survey_quantile(DOLLAREL,
quantiles = c(0.25, .5, 0.75)
))
```
```
## # A tibble: 1 × 6
## elec_bill_q25 elec_bill_q50 elec_bill_q75 elec_bill_q25_se
## <dbl> <dbl> <dbl> <dbl>
## 1 795. 1215. 1770. 5.69
## elec_bill_q50_se elec_bill_q75_se
## <dbl> <dbl>
## 1 6.33 9.99
```
The output above shows the values for the three quartiles of electric bill costs and their respective standard errors: the 25th percentile is $795 with a standard error of $5\.69, the 50th percentile (median) is $1,215 with a standard error of $6\.33, and the 75th percentile is $1,770 with a standard error of $9\.99\.
#### Example 2: Quartiles by subgroup
We can estimate the quantiles of electric bills by region by using the `group_by()` function:
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_quantile(DOLLAREL,
quantiles = c(0.25, .5, 0.75)
))
```
```
## # A tibble: 4 × 7
## Region elec_bill_q25 elec_bill_q50 elec_bill_q75 elec_bill_q25_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Northeast 740. 1148. 1712. 13.7
## 2 Midwest 769. 1149. 1632. 8.88
## 3 South 968. 1402. 1945. 10.6
## 4 West 623. 1028. 1568. 10.8
## elec_bill_q50_se elec_bill_q75_se
## <dbl> <dbl>
## 1 16.6 25.8
## 2 11.6 18.6
## 3 9.17 13.9
## 4 14.3 20.5
```
The 25th percentile for the Northeast region is $740, while it is $968 for the South.
#### Example 3: Minimum and maximum
As mentioned in the syntax section, we can specify quantiles of `0` (minimum) and `1` (maximum), and R calculates these values. However, these are only the minimum and maximum values in the data, and there is not enough information to determine their standard errors:
```
recs_des %>%
summarize(elec_bill = survey_quantile(DOLLAREL,
quantiles = c(0, 1)
))
```
```
## # A tibble: 1 × 4
## elec_bill_q00 elec_bill_q100 elec_bill_q00_se elec_bill_q100_se
## <dbl> <dbl> <dbl> <dbl>
## 1 -889. 15680. NaN 0
```
The minimum cost of electricity in the dataset is –$889, while the maximum is $15,680, but the standard error is shown as `NaN` and `0`, respectively. Notice that the minimum cost is a negative number. This may be surprising, but some housing units with solar power sell their energy back to the grid and earn money, which is recorded as a negative expenditure.
#### Example 4: Overall median
We can calculate the estimated median cost of electricity in the U.S. using the `survey_median()` function:
```
recs_des %>%
summarize(elec_bill = survey_median(DOLLAREL))
```
```
## # A tibble: 1 × 2
## elec_bill elec_bill_se
## <dbl> <dbl>
## 1 1215. 6.33
```
Nationally, the median household spent $1,215 in 2020\. This is the same result as we obtained using the `survey_quantile()` function. Interestingly, the average electric bill for households that we calculated in Section [5\.4](c05-descriptive-analysis.html#desc-meanprop) is $1,380, but the estimated median electric bill is $1,215, indicating the distribution is likely right\-skewed.
#### Example 5: Medians by subgroup
We can calculate the estimated median cost of electricity in the U.S. by region using the `group_by()` function with the variable(s) of interest before the `summarize()` function, similar to when we found the mean by region.
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_median(DOLLAREL))
```
```
## # A tibble: 4 × 3
## Region elec_bill elec_bill_se
## <fct> <dbl> <dbl>
## 1 Northeast 1148. 16.6
## 2 Midwest 1149. 11.6
## 3 South 1402. 9.17
## 4 West 1028. 14.3
```
We estimate that households in the Northeast spent a median of $1,148 on electricity, and in the South, they spent a median of $1,402\.
5\.6 Ratios
-----------
A ratio is a measure of the ratio of the sum of two variables, specifically in the form of:
\\\[ \\frac{\\sum x\_i}{\\sum y\_i}.\\]
Note that the ratio is not the same as calculating the following:
\\\[ \\frac{1}{N} \\sum \\frac{x\_i}{y\_i} \\]
which can be calculated with `survey_mean()` by creating a derived variable \\(z\=x/y\\) and then calculating the mean of \\(z\\).
Say we wanted to assess the energy efficiency of homes in a standardized way, where we can compare homes of different sizes. We can calculate the ratio of energy consumption to the square footage of a home. This helps us meaningfully compare homes of different sizes by identifying how much energy is being used per unit of space. To calculate this ratio, we would run `survey_ratio(Energy Consumption in BTUs, Square Footage of Home)`. If, instead, we used `survey_mean(Energy Consumption in BTUs/Square Footage of Home)`, we would estimate the average energy consumption per square foot of all surveyed homes. While helpful in understanding general energy use, this statistic does not account for differences in home sizes.
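As a sketch of this distinction (output omitted), and assuming the RECS data contain a total energy consumption variable, for which we use the hypothetical name `TOTALBTU`, alongside the square footage variable `TOTSQFT_EN` used later in this chapter:
```
# TOTALBTU is a hypothetical variable name for total energy consumption
recs_des %>%
  summarize(
    # Ratio of total consumption to total square footage (population-level rate)
    btu_per_sqft_ratio = survey_ratio(TOTALBTU, TOTSQFT_EN),
    # Average of each home's consumption-per-square-foot value
    btu_per_sqft_mean = survey_mean(TOTALBTU / TOTSQFT_EN)
  )
```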
### 5\.6\.1 Syntax
The syntax for `survey_ratio()` is as follows:
```
survey_ratio(
numerator,
denominator,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
deff = FALSE,
df = NULL
)
```
The arguments are:
* `numerator`: The numerator of the ratio
* `denominator`: The denominator of the ratio
* `na.rm`: A logical value to indicate whether missing values should be dropped
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: A single number or vector of numbers indicating the confidence level
* `deff`: A logical value to indicate whether the design effect should be returned (this is described in more detail in Section [5\.9\.3](c05-descriptive-analysis.html#desc-deff))
* `df`: (For vartype \= “ci” only) A numeric value indicating the degrees of freedom for t\-distribution
### 5\.6\.2 Examples
#### Example 1: Overall ratios
Suppose we wanted to find the ratio of dollars spent on liquid propane per unit (in British thermal unit \[Btu]) nationally[6](#fn6). To find the average cost to a household, we can use `survey_mean()`. However, to find the national unit rate, we can use `survey_ratio()`. In the following example, we show both methods and discuss the interpretation of each:
```
recs_des %>%
summarize(
DOLLARLP_Tot = survey_total(DOLLARLP, vartype = NULL),
BTULP_Tot = survey_total(BTULP, vartype = NULL),
DOL_BTU_Rat = survey_ratio(DOLLARLP, BTULP),
DOL_BTU_Avg = survey_mean(DOLLARLP / BTULP, na.rm = TRUE)
)
```
```
## # A tibble: 1 × 6
## DOLLARLP_Tot BTULP_Tot DOL_BTU_Rat DOL_BTU_Rat_se DOL_BTU_Avg
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 8122911173. 391425311586. 0.0208 0.000240 0.0240
## DOL_BTU_Avg_se
## <dbl>
## 1 0.000223
```
The ratio of the total spent on liquid propane to the total consumption was 0\.0208, but the average rate was 0\.024\. With a bit of calculation, we can show that the ratio is the ratio of the totals `DOLLARLP_Tot`/`BTULP_Tot`\=8,122,911,173/391,425,311,586\=0\.0208\. Although the estimated ratio can be calculated manually in this manner, the standard error requires the use of the `survey_ratio()` function. The average can be interpreted as the average rate paid by a household.
#### Example 2: Ratios by subgroup
As previously done with other estimates, we can use `group_by()` to examine whether this ratio varies by region.
```
recs_des %>%
group_by(Region) %>%
summarize(DOL_BTU_Rat = survey_ratio(DOLLARLP, BTULP)) %>%
arrange(DOL_BTU_Rat)
```
```
## # A tibble: 4 × 3
## Region DOL_BTU_Rat DOL_BTU_Rat_se
## <fct> <dbl> <dbl>
## 1 Midwest 0.0158 0.000240
## 2 South 0.0245 0.000388
## 3 West 0.0246 0.000875
## 4 Northeast 0.0247 0.000488
```
Although not a formal statistical test, it appears that the cost ratios for liquid propane are the lowest in the Midwest (0\.0158\).
5\.7 Correlations
-----------------
The correlation is a measure of the linear relationship between two continuous variables, which ranges between –1 and 1\. The most commonly used method is Pearson’s correlation (referred to as correlation henceforth). A sample correlation for a simple random sample is calculated as follows:
\\\[\\frac{\\sum (x\_i\-\\bar{x})(y\_i\-\\bar{y})}{\\sqrt{\\sum (x\_i\-\\bar{x})^2} \\sqrt{\\sum(y\_i\-\\bar{y})^2}} \\]
When using `survey_corr()` for designs other than a simple random sample, the weights are applied when estimating the correlation.
### 5\.7\.1 Syntax
The syntax for `survey_corr()` is as follows:
```
survey_corr(
x,
y,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
df = NULL
)
```
The arguments are:
* `x`: A variable or expression
* `y`: A variable or expression
* `na.rm`: A logical value to indicate whether missing values should be dropped
* `vartype`: Type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: (For vartype \= “ci” only) A single number or vector of numbers indicating the confidence level
* `df`: (For vartype \= “ci” only) A numeric value indicating the degrees of freedom for t\-distribution
### 5\.7\.2 Examples
#### Example 1: Overall correlation
We can calculate the correlation between the total square footage of homes (`TOTSQFT_EN`)[7](#fn7) and electricity consumption (`BTUEL`)[8](#fn8).
```
recs_des %>%
summarize(SQFT_Elec_Corr = survey_corr(TOTSQFT_EN, BTUEL))
```
```
## # A tibble: 1 × 2
## SQFT_Elec_Corr SQFT_Elec_Corr_se
## <dbl> <dbl>
## 1 0.417 0.00689
```
The correlation between the total square footage of homes and electricity consumption is 0\.417, indicating a moderate positive relationship.
#### Example 2: Correlations by subgroup
We can explore the correlation between total square footage and electricity expenditure (`DOLLAREL`) based on subgroups, such as whether A/C is used (`ACUsed`).
```
recs_des %>%
group_by(ACUsed) %>%
summarize(SQFT_Elec_Corr = survey_corr(TOTSQFT_EN, DOLLAREL))
```
```
## # A tibble: 2 × 3
## ACUsed SQFT_Elec_Corr SQFT_Elec_Corr_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.290 0.0240
## 2 TRUE 0.401 0.00808
```
For homes without A/C, there is a small positive correlation between total square footage and electricity expenditure (0\.29\). For homes with A/C, the correlation of 0\.401 indicates a stronger positive relationship between total square footage and electricity expenditure.
5\.8 Standard deviation and variance
------------------------------------
All survey functions produce an estimate of the variability of a given estimate, so no additional function is needed to obtain it. However, if we are specifically interested in the population variance and standard deviation, we can use the `survey_var()` and `survey_sd()` functions. In our experience, it is not common practice to use these functions. They can be used when designing a future study to gauge population variability and inform sampling precision.
### 5\.8\.1 Syntax
As with non\-survey data, the standard deviation estimate is the square root of the variance estimate. Therefore, the `survey_var()` and `survey_sd()` functions share the same arguments, except the standard deviation does not allow the usage of `vartype`.
```
survey_var(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var"),
level = 0.95,
df = NULL
)
survey_sd(
x,
na.rm = FALSE
)
```
The arguments are:
* `x`: A variable or expression, or empty
* `na.rm`: A logical value to indicate whether missing values should be dropped
* `vartype`: Type(s) of variation estimate to calculate including any of `c("se", "ci", "var")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: (For vartype \= “ci” only) A single number or vector of numbers indicating the confidence level
* `df`: (For vartype \= “ci” only) A numeric value indicating the degrees of freedom for t\-distribution
### 5\.8\.2 Examples
#### Example 1: Overall variability
Let’s return to electricity bills and explore the variability in electricity expenditure.
```
recs_des %>%
summarize(
var_elbill = survey_var(DOLLAREL),
sd_elbill = survey_sd(DOLLAREL)
)
```
```
## # A tibble: 1 × 3
## var_elbill var_elbill_se sd_elbill
## <dbl> <dbl> <dbl>
## 1 704906. 13926. 840.
```
We may encounter a warning related to deprecated underlying calculations performed by the `survey_var()` function. This warning is a result of changes in the way R handles recycling in vectorized operations. The results are still valid. They give an estimate of the population variance of electricity bills (`var_elbill`), the standard error of that variance (`var_elbill_se`), and the estimated population standard deviation of electricity bills (`sd_elbill`). Note that no standard error is associated with the standard deviation; this is the only estimate that does not include a standard error.
#### Example 2: Variability by subgroup
To find out if the variability in electricity expenditure is similar across regions, we can calculate the variance by region using `group_by()`:
```
recs_des %>%
group_by(Region) %>%
summarize(
var_elbill = survey_var(DOLLAREL),
sd_elbill = survey_sd(DOLLAREL)
)
```
```
## # A tibble: 4 × 4
## Region var_elbill var_elbill_se sd_elbill
## <fct> <dbl> <dbl> <dbl>
## 1 Northeast 775450. 38843. 881.
## 2 Midwest 552423. 25252. 743.
## 3 South 702521. 30641. 838.
## 4 West 717886. 30597. 847.
```
5\.9 Additional topics
----------------------
### 5\.9\.1 Unweighted analysis
Sometimes, it is helpful to calculate an unweighted estimate of a given variable. For this, we use the `unweighted()` function in the `summarize()` function. The `unweighted()` function calculates unweighted summaries from a `tbl_svy` object, providing the summary among the respondents without extrapolating to a population estimate. The `unweighted()` function can be used in conjunction with any {dplyr} functions. Here is an example looking at the average household electricity cost:
```
recs_des %>%
summarize(
elec_bill = survey_mean(DOLLAREL),
elec_unweight = unweighted(mean(DOLLAREL))
)
```
```
## # A tibble: 1 × 3
## elec_bill elec_bill_se elec_unweight
## <dbl> <dbl> <dbl>
## 1 1380. 5.38 1425.
```
It is estimated that American residential households spent an average of $1,380 on electricity in 2020, and the estimate has a standard error of $5\.38\. The `unweighted()` function calculates the unweighted average and represents the average amount of money spent on electricity in 2020 by the respondents, which was $1,425\.
### 5\.9\.2 Subpopulation analysis
We mentioned using `filter()` to subset a survey object for analysis. This operation should be done after creating the survey design object. Subsetting data before creating the object can lead to incorrect variability estimates, if subsetting removes an entire Primary Sampling Unit (PSU; see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on PSUs and sample designs).
Suppose we want estimates of the average amount spent on natural gas among housing units using natural gas (based on the variable `BTUNG`)[9](#fn9). We first filter records to only include records where `BTUNG > 0` and then find the average amount spent.
```
recs_des %>%
filter(BTUNG > 0) %>%
summarize(NG_mean = survey_mean(DOLLARNG,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## NG_mean NG_mean_se NG_mean_low NG_mean_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 631. 4.64 621. 640.
```
The estimated average amount spent on natural gas among households that use natural gas is $631\. Let’s compare this to the mean when we do not filter.
```
recs_des %>%
summarize(NG_mean = survey_mean(DOLLARNG,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## NG_mean NG_mean_se NG_mean_low NG_mean_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 382. 3.41 375. 389.
```
Based on this calculation, the estimated average amount spent on natural gas is $382\. Note that applying the filter to include only housing units that use natural gas yields a higher mean than when not applying the filter. This is because including housing units that do not use natural gas introduces many $0 amounts, impacting the mean calculation.
### 5\.9\.3 Design effects
The design effect measures how the precision of an estimate is influenced by the sampling design. In other words, it measures how much more or less statistically efficient the survey design is compared to a simple random sample (SRS). It is computed by taking the ratio of the estimate’s variance under the design at hand to the estimate’s variance under a simple random sample without replacement. A design effect less than 1 indicates that the design is more statistically efficient than an SRS design, which is rare but possible in a stratified sampling design where the outcome correlates with the stratification variable(s). A design effect greater than 1 indicates that the design is less statistically efficient than an SRS design. From a design effect, we can calculate the effective sample size as follows:
\\\[n\_{eff}\=\\frac{n}{D\_{eff}} \\]
where \\(n\\) is the nominal sample size (the number of survey responses) and \\(D\_{eff}\\) is the estimated design effect. We can interpret the effective sample size \\(n\_{eff}\\) as the hypothetical sample size that a survey using an SRS design would need to achieve the same precision as the design at hand. Design effects are specific to each outcome: outcomes that are less clustered in the population have smaller design effects than outcomes that are more clustered.
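As a quick illustrative calculation with hypothetical numbers (not estimates from RECS or ANES), a design effect of 1\.25 on a nominal sample of 5,000 responses corresponds to an effective sample size of 4,000:
```
n <- 5000      # nominal sample size (hypothetical)
deff <- 1.25   # estimated design effect (hypothetical)
n / deff       # effective sample size: 4000
```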
In the {srvyr} package, design effects can be calculated for totals, proportions, means, and ratio estimates by setting the `deff` argument to `TRUE` in the corresponding functions. In the example below, we calculate the design effects for the average consumption of electricity (`BTUEL`), natural gas (`BTUNG`), liquid propane (`BTULP`), fuel oil (`BTUFO`), and wood (`BTUWOOD`) by setting `deff = TRUE`:
```
recs_des %>%
summarize(across(
c(BTUEL, BTUNG, BTULP, BTUFO, BTUWOOD),
~ survey_mean(.x, deff = TRUE, vartype = NULL)
)) %>%
select(ends_with("deff"))
```
```
## # A tibble: 1 × 5
## BTUEL_deff BTUNG_deff BTULP_deff BTUFO_deff BTUWOOD_deff
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.597 0.938 1.21 0.720 1.10
```
For the values less than 1 (`BTUEL_deff` and `BTUFO_deff`), the results suggest that the survey design is more efficient than a simple random sample. For the values greater than 1 (`BTUNG_deff`, `BTULP_deff`, and `BTUWOOD_deff`), the results indicate that the survey design is less efficient than a simple random sample.
### 5\.9\.4 Creating summary rows
When using `group_by()` in analysis, the results are returned with a row for each group or combination of groups. Often, we want both breakdowns by group and a summary row for the estimate representing the entire population. For example, we may want the average electricity consumption by region and nationally. The {srvyr} package has the convenient `cascade()` function, which adds summary rows for the total of a group. It is used instead of `summarize()` and has similar functionalities along with some additional features.
#### Syntax
The syntax is as follows:
```
cascade(
.data,
...,
.fill = NA,
.fill_level_top = FALSE,
.groupings = NULL
)
```
where the arguments are:
* `.data`: A `tbl_svy` object
* `...`: Name\-value pairs of summary functions (same as the `summarize()` function)
* `.fill`: Value to fill in for group summaries (defaults to `NA`)
* `.fill_level_top`: When filling factor variables, whether to put the value ‘.fill’ in the first position (defaults to FALSE, placing it at the bottom)
#### Example
First, let’s look at an example where we calculate the average household electricity cost. Then, we build on it to examine the features of the `cascade()` function. In the first example below, we calculate the average household energy cost `DOLLAREL_mn` using `survey_mean()` without modifying any of the argument defaults in the function:
```
recs_des %>%
cascade(DOLLAREL_mn = survey_mean(DOLLAREL))
```
```
## # A tibble: 1 × 2
## DOLLAREL_mn DOLLAREL_mn_se
## <dbl> <dbl>
## 1 1380. 5.38
```
Next, let’s group the results by region by adding `group_by()` before the `cascade()` function:
```
recs_des %>%
group_by(Region) %>%
cascade(DOLLAREL_mn = survey_mean(DOLLAREL))
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
## 5 <NA> 1380. 5.38
```
We can see the estimated average electricity bills by region: $1,343 for the Northeast, $1,548 for the South, and so on. The last row, where `Region = NA`, is the national average electricity bill, $1,380\. However, naming the national “region” as `NA` is not very informative. We can give it a better name using the `.fill` argument.
```
recs_des %>%
group_by(Region) %>%
cascade(
DOLLAREL_mn = survey_mean(DOLLAREL),
.fill = "National"
)
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
## 5 National 1380. 5.38
```
We can move the summary row to the first row by adding `.fill_level_top = TRUE` to `cascade()`:
```
recs_des %>%
group_by(Region) %>%
cascade(
DOLLAREL_mn = survey_mean(DOLLAREL),
.fill = "National",
.fill_level_top = TRUE
)
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 National 1380. 5.38
## 2 Northeast 1343. 14.6
## 3 Midwest 1293. 11.7
## 4 South 1548. 10.3
## 5 West 1211. 12.0
```
While the results remain the same, the table is now easier to interpret.
### 5\.9\.5 Calculating estimates for many outcomes
Often, we are interested in a summary statistic across many variables. Useful tools include the `across()` function in {dplyr}, shown a few times above, and the `map()` function in {purrr}.
The `across()` function applies the same function to multiple columns within `summarize()`. This works well with all functions shown above, except for `survey_prop()`. In a later example, we tackle summarizing multiple proportions.
#### Example 1: `across()`
Suppose we want to calculate the total and average consumption, along with coefficients of variation (CV), for each fuel type. These include the reported consumption of electricity (`BTUEL`), natural gas (`BTUNG`), liquid propane (`BTULP`), fuel oil (`BTUFO`), and wood (`BTUWOOD`), as mentioned in the section on design effects. We can take advantage of the fact that these are the only variables that start with “BTU” by selecting them with `starts_with("BTU")` in the `across()` function. To each selected column (referred to as `.x`), `across()` applies a list of two functions: `survey_total()` to calculate the total and `survey_mean()` to calculate the mean, each requesting the CV (`vartype = "cv"`). Finally, `.unpack = "{outer}.{inner}"` specifies that the resulting column names concatenate the outer name (the variable name followed by Total or Mean) with the inner name (“coef” or “\_cv”).
```
consumption_ests <- recs_des %>%
summarize(across(
starts_with("BTU"),
list(
Total = ~ survey_total(.x, vartype = "cv"),
Mean = ~ survey_mean(.x, vartype = "cv")
),
.unpack = "{outer}.{inner}"
))
consumption_ests
```
```
## # A tibble: 1 × 20
## BTUEL_Total.coef BTUEL_Total._cv BTUEL_Mean.coef BTUEL_Mean._cv
## <dbl> <dbl> <dbl> <dbl>
## 1 4453284510065 0.00377 36051. 0.00377
## # ℹ 16 more variables: BTUNG_Total.coef <dbl>, BTUNG_Total._cv <dbl>,
## # BTUNG_Mean.coef <dbl>, BTUNG_Mean._cv <dbl>,
## # BTULP_Total.coef <dbl>, BTULP_Total._cv <dbl>,
## # BTULP_Mean.coef <dbl>, BTULP_Mean._cv <dbl>,
## # BTUFO_Total.coef <dbl>, BTUFO_Total._cv <dbl>,
## # BTUFO_Mean.coef <dbl>, BTUFO_Mean._cv <dbl>,
## # BTUWOOD_Total.coef <dbl>, BTUWOOD_Total._cv <dbl>, …
```
The estimated total consumption of electricity (`BTUEL`) is 4,453,284,510,065 (`BTUEL_Total.coef`), the estimated average consumption is 36,051 (`BTUEL_Mean.coef`), and the CV is 0\.0038\.
In the example above, the table was quite wide. We may prefer a row for each fuel type. Using the `pivot_longer()` and `pivot_wider()` functions from {tidyr} can help us achieve this. First, we use `pivot_longer()` to make each variable a column, changing the data to a “long” format. We use the `names_to` argument to specify new column names: `FuelType`, `Stat`, and `Type`. Then, the `names_pattern` argument extracts the names in the original column names based on the regular expression pattern `BTU(.*)_(.*)\\.(.*)`. They are saved in the column names defined in `names_to`.
```
consumption_ests_long <- consumption_ests %>%
pivot_longer(
cols = everything(),
names_to = c("FuelType", "Stat", "Type"),
names_pattern = "BTU(.*)_(.*)\\.(.*)"
)
consumption_ests_long
```
```
## # A tibble: 20 × 4
## FuelType Stat Type value
## <chr> <chr> <chr> <dbl>
## 1 EL Total coef 4453284510065
## 2 EL Total _cv 0.00377
## 3 EL Mean coef 36051.
## 4 EL Mean _cv 0.00377
## 5 NG Total coef 4240769382106.
## 6 NG Total _cv 0.00908
## 7 NG Mean coef 34330.
## 8 NG Mean _cv 0.00908
## 9 LP Total coef 391425311586.
## 10 LP Total _cv 0.0380
## 11 LP Mean coef 3169.
## 12 LP Mean _cv 0.0380
## 13 FO Total coef 395699976655.
## 14 FO Total _cv 0.0343
## 15 FO Mean coef 3203.
## 16 FO Mean _cv 0.0343
## 17 WOOD Total coef 345091088404.
## 18 WOOD Total _cv 0.0454
## 19 WOOD Mean coef 2794.
## 20 WOOD Mean _cv 0.0454
```
Then, we use `pivot_wider()` to create a table that is nearly ready for publication. Within the function, we can make the names for each element more descriptive and informative by gluing the `Stat` and `Type` together with `names_glue`. Further details on creating publication\-ready tables are covered in Chapter [8](c08-communicating-results.html#c08-communicating-results).
```
consumption_ests_long %>%
mutate(Type = case_when(
Type == "coef" ~ "",
Type == "_cv" ~ " (CV)"
)) %>%
pivot_wider(
id_cols = FuelType,
names_from = c(Stat, Type),
names_glue = "{Stat}{Type}",
values_from = value
)
```
```
## # A tibble: 5 × 5
## FuelType Total `Total (CV)` Mean `Mean (CV)`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 EL 4453284510065 0.00377 36051. 0.00377
## 2 NG 4240769382106. 0.00908 34330. 0.00908
## 3 LP 391425311586. 0.0380 3169. 0.0380
## 4 FO 395699976655. 0.0343 3203. 0.0343
## 5 WOOD 345091088404. 0.0454 2794. 0.0454
```
#### Example 2: Proportions with `across()`
As mentioned earlier, proportions do not work as well directly with the `across()` method. If we want the proportion of houses with A/C and the proportion of houses with heating, we require two separate `group_by()` statements as shown below:
```
recs_des %>%
group_by(ACUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 2 × 3
## ACUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.113 0.00306
## 2 TRUE 0.887 0.00306
```
```
recs_des %>%
group_by(SpaceHeatingUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 2 × 3
## SpaceHeatingUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.0469 0.00207
## 2 TRUE 0.953 0.00207
```
We estimate 88\.7% of households have A/C and 95\.3% have heating.
If we are only interested in the `TRUE` outcomes, that is, the proportion of households that have A/C and the proportion that have heating, we can simplify the code. Applying `survey_mean()` to a logical variable is the same as using `survey_prop()`, as shown below:
```
cool_heat_tab <- recs_des %>%
summarize(across(c(ACUsed, SpaceHeatingUsed), ~ survey_mean(.x),
.unpack = "{outer}.{inner}"
))
cool_heat_tab
```
```
## # A tibble: 1 × 4
## ACUsed.coef ACUsed._se SpaceHeatingUsed.coef SpaceHeatingUsed._se
## <dbl> <dbl> <dbl> <dbl>
## 1 0.887 0.00306 0.953 0.00207
```
Note that the estimates are the same as those obtained using the separate `group_by()` statements. As before, we can use `pivot_longer()` to structure the table in a more suitable format for distribution.
```
cool_heat_tab %>%
pivot_longer(everything(),
names_to = c("Comfort", ".value"),
names_pattern = "(.*)\\.(.*)"
) %>%
rename(
p = coef,
se = `_se`
)
```
```
## # A tibble: 2 × 3
## Comfort p se
## <chr> <dbl> <dbl>
## 1 ACUsed 0.887 0.00306
## 2 SpaceHeatingUsed 0.953 0.00207
```
#### Example 3: `purrr::map()`
Loops are a common tool when dealing with repetitive calculations. The {purrr} package provides the `map()` functions, which, like a loop, allow us to perform the same task across different elements ([Wickham and Henry 2023](#ref-R-purrr)). In our case, we may want to calculate proportions from the same design multiple times. A straightforward approach is to design the calculation for one variable, build a function based on that, and then apply it iteratively for the rest of the variables.
Suppose we want to create a table that shows the proportion of people who express trust in their government (`TrustGovernment`)[10](#fn10) as well as those that trust in people (`TrustPeople`)[11](#fn11) using data from the 2020 ANES.
First, we create a table for a single variable. The table includes the variable name as a column, the response, and the corresponding percentage with its standard error.
```
anes_des %>%
drop_na(TrustGovernment) %>%
group_by(TrustGovernment) %>%
summarize(p = survey_prop() * 100) %>%
mutate(Variable = "TrustGovernment") %>%
rename(Answer = TrustGovernment) %>%
select(Variable, everything())
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
```
We estimate that 1\.55% of people always trust the government, 13\.16% trust the government most of the time, and so on.
Now, we want to use the original series of steps as a template to create a general function `calcps()` that can apply the same steps to other variables. We replace `TrustGovernment` with an argument for a generic variable, `var`. Referring to `var` involves a bit of tidy evaluation, an advanced skill. To learn more, we recommend Wickham ([2019](#ref-wickham2019advanced)).
```
calcps <- function(var) {
anes_des %>%
drop_na(!!sym(var)) %>%
group_by(!!sym(var)) %>%
summarize(p = survey_prop() * 100) %>%
mutate(Variable = var) %>%
rename(Answer := !!sym(var)) %>%
select(Variable, everything())
}
```
We then apply this function to the two variables of interest, `TrustGovernment` and `TrustPeople`:
```
calcps("TrustGovernment")
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
```
```
calcps("TrustPeople")
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustPeople Always 0.809 0.164
## 2 TrustPeople Most of the time 41.4 0.857
## 3 TrustPeople About half the time 28.2 0.776
## 4 TrustPeople Some of the time 24.5 0.670
## 5 TrustPeople Never 5.05 0.422
```
Finally, we use `map()` to iterate over as many variables as needed. We feed our desired variables into `map()` along with our custom function, `calcps`. The output is a tibble with the variable names in the “Variable” column, the responses in the “Answer” column, along with the percentage and standard error. The `list_rbind()` function combines the rows into a single tibble. This example extends nicely when dealing with numerous variables for which we want percentage estimates.
```
c("TrustGovernment", "TrustPeople") %>%
map(calcps) %>%
list_rbind()
```
```
## # A tibble: 10 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
## 6 TrustPeople Always 0.809 0.164
## 7 TrustPeople Most of the time 41.4 0.857
## 8 TrustPeople About half the time 28.2 0.776
## 9 TrustPeople Some of the time 24.5 0.670
## 10 TrustPeople Never 5.05 0.422
```
In addition to the `TrustGovernment` results shown earlier, we can now see the output for `TrustPeople`: while we estimate that 1\.55% of people always trust the government, only 0\.81% always trust other people.
5\.10 Exercises
---------------
The exercises use the design objects `anes_des` and `recs_des` provided in the Prerequisites box at the beginning of the chapter.
1. How many females have a graduate degree? Hint: The variables `Gender` and `Education` will be useful.
2. What percentage of people identify as “Strong Democrat”? Hint: The variable `PartyID` indicates someone’s party affiliation.
3. What percentage of people who voted in the 2020 election identify as “Strong Republican”? Hint: The variable `VotedPres2020` indicates whether someone voted in 2020\.
4. What percentage of people voted in both the 2016 election and the 2020 election? Include the logit confidence interval. Hint: The variable `VotedPres2016` indicates whether someone voted in 2016\.
5. What is the design effect for the proportion of people who voted early? Hint: The variable `EarlyVote2020` indicates whether someone voted early in 2020\.
6. What is the median temperature people set their thermostats to at night during the winter? Hint: The variable `WinterTempNight` indicates the temperature that people set their thermostat to in the winter at night.
7. People sometimes set their temperature differently over different seasons and during the day. What median temperatures do people set their thermostats to in the summer and winter, both during the day and at night? Include confidence intervals. Hint: Use the variables `WinterTempDay`, `WinterTempNight`, `SummerTempDay`, and `SummerTempNight`.
8. What is the correlation between the temperatures that people set their thermostats to during the night and during the day in the summer?
9. What is the 1st, 2nd, and 3rd quartile of money spent on energy by Building America (BA) climate zone? Hint: `TOTALDOL` indicates the total amount spent on all fuel, and `ClimateRegion_BA` indicates the BA climate zones.
5\.1 Introduction
-----------------
Descriptive analyses, such as basic counts, cross\-tabulations, or means, are among the first steps in making sense of our survey results. During descriptive analyses, we calculate point estimates of unknown population parameters, such as population mean, and uncertainty estimates, such as confidence intervals. By reviewing the findings, we can glean insight into the data, the underlying population, and any unique aspects of the data or population. For example, if only 10% of survey respondents are male, it could indicate a unique population, a potential error or bias, an intentional survey sampling method, or other factors. Additionally, descriptive analyses provide summaries of distribution and other measures. These analyses lay the groundwork for the next steps of running statistical tests or developing models.
We discuss many different types of descriptive analyses in this chapter. However, it is important to know what type of data we are working with and which statistics are appropriate. In survey data, we typically consider data as one of four main types:
* Categorical/nominal data: variables with levels or descriptions that cannot be ordered, such as the region of the country (North, South, East, and West)
* Ordinal data: variables that can be ordered, such as those from a Likert scale (strongly disagree, disagree, agree, and strongly agree)
* Discrete data: variables that are counted or measured, such as number of children
* Continuous data: variables that are measured and whose values can lie anywhere on an interval, such as income
This chapter discusses how to analyze measures of distribution (e.g., cross\-tabulations), central tendency (e.g., means), relationship (e.g., ratios), and dispersion (e.g., standard deviation) using functions from the {srvyr} package ([Freedman Ellis and Schneider 2024](#ref-R-srvyr)).
Measures of distribution describe how often an event or response occurs. These measures include counts and totals. We cover the following functions:
* Count of observations (`survey_count()` and `survey_tally()`)
* Summation of variables (`survey_total()`)
Measures of central tendency find the central (or average) responses. These measures include means and medians. We cover the following functions:
* Means and proportions (`survey_mean()` and `survey_prop()`)
* Quantiles and medians (`survey_quantile()` and `survey_median()`)
Measures of relationship describe how variables relate to each other. These measures include correlations and ratios. We cover the following functions:
* Correlations (`survey_corr()`)
* Ratios (`survey_ratio()`)
Measures of dispersion describe how data spread around the central tendency for continuous variables. These measures include standard deviations and variances. We cover the following functions:
* Variances and standard deviations (`survey_var()` and `survey_sd()`)
To incorporate each of these survey functions, recall the general process for survey estimation from Chapter [4](c04-getting-started.html#c04-getting-started):
1. Create a `tbl_svy` object using `srvyr::as_survey_design()` or `srvyr::as_survey_rep()`.
2. Subset the data for subpopulations using `srvyr::filter()`, if needed.
3. Specify domains of analysis using `srvyr::group_by()`, if needed.
4. Analyze the data with survey\-specific functions.
This chapter walks through how to apply the survey functions in Step 4\. Note that unless otherwise specified, our estimates are weighted as a result of setting up the survey design object.
To look at the data by different subgroups, we can choose to filter and/or group the data. It is very important that we filter and group the data only after creating the design object. This ensures that the results accurately reflect the survey design. If we filter or group the data before creating the survey design object, those cases and groupings are not reflected in the survey design information used for variance estimation, leading to inaccurate results.
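To make this ordering concrete, here is a minimal sketch (using the `recs_des` design object from the Prerequisites and the `Region` variable used later in this chapter): the design object already exists, so any filtering happens on the design itself and the full design information is retained for variance estimation.
```
# A minimal sketch of the recommended order: the survey design object
# (recs_des) is created first, and we subset the *design*, not the raw data.
recs_des %>%
  filter(Region == "South") %>%
  survey_count()
```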
For the sake of simplicity, we’ve removed cases with missing values in the examples below. For a more detailed explanation of how to handle missing data, please refer to Chapter [11](c11-missing-data.html#c11-missing-data).
5\.2 Counts and cross\-tabulations
----------------------------------
Using `survey_count()` and `survey_tally()`, we can calculate the estimated population counts for a given variable or combination of variables. These summaries, often referred to as cross\-tabulations or cross\-tabs, are applied to categorical data. They help in estimating counts of the population size for different groups based on the survey data.
### 5\.2\.1 Syntax
The syntax for `survey_count()` is similar to the `dplyr::count()` syntax, as mentioned in Chapter [4](c04-getting-started.html#c04-getting-started). However, as noted above, this function can only be called on `tbl_svy` objects. Let’s explore the syntax:
```
survey_count(
x,
...,
wt = NULL,
sort = FALSE,
name = "n",
.drop = dplyr::group_by_drop_default(x),
vartype = c("se", "ci", "var", "cv")
)
```
The arguments are:
* `x`: a `tbl_svy` object created by `as_survey`
* `...`: variables to group by, passed to `group_by`
* `wt`: a variable to weight on in addition to the survey weights, defaults to `NULL`
* `sort`: whether to sort the output by the counts, defaults to `FALSE`
* `name`: the name of the count variable, defaults to `n`
* `.drop`: whether to drop empty groups
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
To generate a count or cross\-tabs by different variables, we include them in the (`...`) argument. This argument can take any number of variables and breaks down the counts by all combinations of the provided variables. This is similar to `dplyr::count()`. To obtain an estimate of the overall population, we can leave the (`...`) argument empty or use the `survey_tally()` function. While the `survey_tally()` function has a similar syntax to the `survey_count()` function, it does not include the (`...`) or the `.drop` arguments:
```
survey_tally(
x,
wt,
sort = FALSE,
name = "n",
vartype = c("se", "ci", "var", "cv")
)
```
Both functions include the `vartype` argument with four different values:
* `se`: standard error
+ The estimated standard deviation of the estimate
+ Output has a column with the variable name specified in the `name` argument with a suffix of “\_se”
* `ci`: confidence interval
+ The lower and upper limits of a confidence interval
+ Output has two columns with the variable name specified in the `name` argument with a suffix of “\_low” and “\_upp”
+ By default, this is a 95% confidence interval but it can be changed by using the `level` argument and specifying a number between 0 and 1\. For example, `level=0.8` would produce an 80% confidence interval.
* `var`: variance
+ The estimated variance of the estimate
+ Output has a column with the variable name specified in the `name` argument with a suffix of “\_var”
* `cv`: coefficient of variation
+ A ratio of the standard error and the estimate
+ Output has a column with the variable name specified in the `name` argument with a suffix of “\_cv”
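As a quick illustration of the options above, the hedged sketch below (output not shown) requests a confidence interval instead of the default standard error when counting housing units by `Region`; following the naming rules above, the output would contain `n_low` and `n_upp` columns rather than `n_se`.
```
# A hedged sketch (output not shown): counts by Region with a confidence
# interval ("ci") requested instead of the default standard error ("se").
recs_des %>%
  survey_count(Region, vartype = "ci")
```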
The confidence intervals are always calculated using a symmetric t\-distribution based method, given by the formula:
\\\[ \\text{estimate} \\pm t^\*\_{df}\\times SE\\]
where \\(t^\*\_{df}\\) is the critical value from a t\-distribution based on the confidence level and the degrees of freedom. By default, the degrees of freedom are based on the design or number of replicates, but they can be specified using the `df` argument. For survey design objects, the degrees of freedom are calculated as the number of primary sampling units (PSUs or clusters) minus the number of strata (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on PSUs, strata, and sample designs). For replicate\-based objects, the degrees of freedom are calculated as one less than the rank of the matrix of replicate weights, where the number of replicates is typically the rank. Note that specifying `df = Inf` is equivalent to using a normal (z\-based) confidence interval – this is the default in {survey}. These variability types are the same for most of the survey functions, and we provide examples using different variability types throughout this chapter.
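To see how the pieces of this formula fit together, here is a minimal base R sketch using made\-up values (not drawn from any survey in this book); `qt()` supplies the critical value \\(t^\*\_{df}\\).
```
# A minimal sketch of the t-based interval above, using made-up numbers.
estimate <- 1500   # hypothetical point estimate
se       <- 40     # hypothetical standard error
df       <- 50     # hypothetical design degrees of freedom
level    <- 0.95   # confidence level

t_star <- qt(1 - (1 - level) / 2, df = df)  # critical value t*_df
c(lower = estimate - t_star * se,
  upper = estimate + t_star * se)
```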
### 5\.2\.2 Examples
#### Example 1: Estimated population count
If we want to obtain the estimated number of households in the U.S. (the population of interest) using the Residential Energy Consumption Survey (RECS) data, we can use `survey_count()`. If we do not specify any variables in the `survey_count()` function, it outputs the estimated population count (`n`) and its corresponding standard error (`n_se`).
```
recs_des %>%
survey_count()
```
```
## # A tibble: 1 × 2
## n n_se
## <dbl> <dbl>
## 1 123529025. 0.148
```
Based on this calculation, the estimated number of households in the U.S. is 123,529,025\.
Alternatively, we could also use the `survey_tally()` function. The example below yields the same results as `survey_count()`.
```
recs_des %>%
survey_tally()
```
```
## # A tibble: 1 × 2
## n n_se
## <dbl> <dbl>
## 1 123529025. 0.148
```
#### Example 2: Estimated counts by subgroups (cross\-tabs)
To calculate the estimated number of observations for specific subgroups, such as Region and Division, we can include the variables of interest in the `survey_count()` function. In the example below, we calculate the estimated number of housing units by region and division. The argument `name =` in `survey_count()` allows us to change the name of the count variable in the output from the default `n` to `N`.
```
recs_des %>%
survey_count(Region, Division, name = "N")
```
```
## # A tibble: 10 × 4
## Region Division N N_se
## <fct> <fct> <dbl> <dbl>
## 1 Northeast New England 5876166 0.0000000137
## 2 Northeast Middle Atlantic 16043503 0.0000000487
## 3 Midwest East North Central 18546912 0.000000437
## 4 Midwest West North Central 8495815 0.0000000177
## 5 South South Atlantic 24843261 0.0000000418
## 6 South East South Central 7380717. 0.114
## 7 South West South Central 14619094 0.000488
## 8 West Mountain North 4615844 0.119
## 9 West Mountain South 4602070 0.0000000492
## 10 West Pacific 18505643. 0.00000295
```
When we run the cross\-tab, we see that there are an estimated 5,876,166 housing units in the New England Division.
The code results in an error if we try to use the `survey_count()` syntax with `survey_tally()`:
```
recs_des %>%
survey_tally(Region, Division, name = "N")
```
```
## Error in `dplyr::summarise()`:
## ℹ In argument: `N = survey_total(Region, vartype = vartype,
## na.rm = TRUE)`.
## Caused by error:
## ! Factor not allowed in survey functions, should be used as a grouping variable.
```
Use a `group_by()` function prior to using `survey_tally()` to successfully run the cross\-tab:
```
recs_des %>%
group_by(Region, Division) %>%
survey_tally(name = "N")
```
```
## # A tibble: 10 × 4
## # Groups: Region [4]
## Region Division N N_se
## <fct> <fct> <dbl> <dbl>
## 1 Northeast New England 5876166 0.0000000137
## 2 Northeast Middle Atlantic 16043503 0.0000000487
## 3 Midwest East North Central 18546912 0.000000437
## 4 Midwest West North Central 8495815 0.0000000177
## 5 South South Atlantic 24843261 0.0000000418
## 6 South East South Central 7380717. 0.114
## 7 South West South Central 14619094 0.000488
## 8 West Mountain North 4615844 0.119
## 9 West Mountain South 4602070 0.0000000492
## 10 West Pacific 18505643. 0.00000295
```
5\.3 Totals and sums
--------------------
The `survey_total()` function is analogous to `sum`. It can be applied to continuous variables to obtain the estimated total quantity in a population. Starting from this point in the chapter, all the introduced functions must be called within `summarize()`.
### 5\.3\.1 Syntax
Here is the syntax:
```
survey_total(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
deff = FALSE,
df = NULL
)
```
The arguments are:
* `x`: a variable, expression, or empty
* `na.rm`: an indicator of whether missing values should be dropped, defaults to `FALSE`
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: a number or a vector indicating the confidence level, defaults to 0\.95
* `deff`: a logical value stating whether the design effect should be returned, defaults to FALSE (this is described in more detail in Section [5\.9\.3](c05-descriptive-analysis.html#desc-deff))
* `df`: (for `vartype = 'ci'`), a numeric value indicating degrees of freedom for the t\-distribution
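As a hedged sketch of how these arguments combine (output not shown), the call below totals the electricity cost variable `DOLLAREL` (used in the examples that follow), requesting the coefficient of variation instead of the standard error and the design effect via `deff = TRUE`.
```
# A hedged sketch (output not shown): total electricity cost with a
# coefficient of variation ("cv") instead of the default standard error,
# plus the design effect requested with deff = TRUE.
recs_des %>%
  summarize(elec_bill = survey_total(DOLLAREL,
    vartype = "cv",
    deff = TRUE
  ))
```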
### 5\.3\.2 Examples
#### Example 1: Estimated population count
To calculate a population count estimate with `survey_total()`, we leave the argument `x` empty, as shown in the example below:
```
recs_des %>%
summarize(Tot = survey_total())
```
```
## # A tibble: 1 × 2
## Tot Tot_se
## <dbl> <dbl>
## 1 123529025. 0.148
```
The estimated number of households in the U.S. is 123,529,025\. Note that this result obtained from `survey_total()` is equivalent to the ones from the `survey_count()` and `survey_tally()` functions. However, the `survey_total()` function is called within `summarize()`, whereas `survey_count()` and `survey_tally()` are not.
#### Example 2: Overall summation of continuous variables
The distinction between `survey_total()` and `survey_count()` becomes more evident when working with continuous variables. Let’s compute the total cost of electricity in whole dollars from variable `DOLLAREL`[4](#fn4).
```
recs_des %>%
summarize(elec_bill = survey_total(DOLLAREL))
```
```
## # A tibble: 1 × 2
## elec_bill elec_bill_se
## <dbl> <dbl>
## 1 170473527909. 664893504.
```
It is estimated that American residential households spent a total of $170,473,527,909 on electricity in 2020, and the estimate has a standard error of $664,893,504\.
#### Example 3: Summation by groups
Since we are using the {srvyr} package, we can use `group_by()` to calculate the cost of electricity for different groups. Let’s examine the variations in the cost of electricity in whole dollars across regions and display the confidence interval instead of the default standard error.
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_total(DOLLAREL,
vartype = "ci"
))
```
```
## # A tibble: 4 × 4
## Region elec_bill elec_bill_low elec_bill_upp
## <fct> <dbl> <dbl> <dbl>
## 1 Northeast 29430369947. 28788987554. 30071752341.
## 2 Midwest 34972544751. 34339576041. 35605513460.
## 3 South 72496840204. 71534780902. 73458899506.
## 4 West 33573773008. 32909111702. 34238434313.
```
The survey results estimate that households in the Northeast spent $29,430,369,947 with a confidence interval of ($28,788,987,554, $30,071,752,341\) on electricity in 2020, while households in the South spent an estimated $72,496,840,204 with a confidence interval of ($71,534,780,902, $73,458,899,506\).
As we calculate these numbers, we may notice that the confidence interval of the South is larger than those of other regions. This implies that we have less certainty about the true value of electricity spending in the South. A larger confidence interval could be due to a variety of factors, such as a wider range of electricity spending in the South. We could try to analyze smaller regions within the South to identify areas that are contributing to more variability. Descriptive analyses serve as a valuable starting point for more in\-depth exploration and analysis.
5\.4 Means and proportions
--------------------------
Means and proportions form the foundation of many research studies. These estimates are often the first things we look for when reviewing research on a given topic. The `survey_mean()` and `survey_prop()` functions calculate means and proportions while taking into account the survey design elements. The `survey_mean()` function should be used on continuous variables of survey data, while the `survey_prop()` function should be used on categorical variables.
### 5\.4\.1 Syntax
The syntax for both means and proportions is very similar:
```
survey_mean(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
proportion = FALSE,
prop_method = c("logit", "likelihood", "asin", "beta", "mean"),
deff = FALSE,
df = NULL
)
survey_prop(
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
proportion = TRUE,
prop_method =
c("logit", "likelihood", "asin", "beta", "mean", "xlogit"),
deff = FALSE,
df = NULL
)
```
Both functions have the following arguments and defaults:
* `na.rm`: an indicator of whether missing values should be dropped, defaults to `FALSE`
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: a number or a vector indicating the confidence level, defaults to 0\.95
* `prop_method`: the method used to calculate the confidence interval for proportions
* `deff`: a logical value stating whether the design effect should be returned, defaults to FALSE (this is described in more detail in Section [5\.9\.3](c05-descriptive-analysis.html#desc-deff))
* `df`: (for `vartype = 'ci'`), a numeric value indicating degrees of freedom for the t\-distribution
There are two main differences in the syntax. The `survey_mean()` function includes the first argument `x`, representing the variable or expression on which the mean should be calculated. The `survey_prop()` function does not have an argument for specifying the variables directly. Instead, prior to `summarize()`, we must use the `group_by()` function to specify the variables of interest for `survey_prop()`. For `survey_mean()`, including a `group_by()` function allows us to obtain the means by different groups.
The other main difference is with the `proportion` argument. The `survey_mean()` function can be used to calculate both means and proportions. Its `proportion` argument defaults to `FALSE`, indicating it is used for calculating means. If we wish to calculate a proportion using `survey_mean()`, we need to set the `proportion` argument to `TRUE`. In the `survey_prop()` function, the `proportion` argument defaults to `TRUE` because the function is specifically designed for calculating proportions.
In Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax), we provide an overview of different variability types. The confidence interval used for most measures, such as means and counts, is referred to as a Wald\-type interval. However, for proportions, a symmetric t\-based Wald\-type interval may not provide accurate coverage, especially when dealing with small sample sizes or proportions “near” 0 or 1\. We can use other methods to calculate confidence intervals, which we specify using the `prop_method` option in `survey_prop()`. The options include:
* `logit`: fits a logistic regression model and computes a Wald\-type interval on the log\-odds scale, which is then transformed to the probability scale. This is the default method.
* `likelihood`: uses the (Rao\-Scott) scaled chi\-squared distribution for the log\-likelihood from a binomial distribution.
* `asin`: uses the variance\-stabilizing transformation for the binomial distribution, the arcsine square root, and then back\-transforms the interval to the probability scale.
* `beta`: uses the incomplete beta function with an effective sample size based on the estimated variance of the proportion.
* `mean`: the Wald\-type interval (\\(\\pm t\_{df}^\*\\times SE\\)).
* `xlogit`: uses a logit transformation of the proportion, calculates a Wald\-type interval, and then back\-transforms to the probability scale. This method is the same as the default method in SUDAAN and SPSS.
Each option yields slightly different confidence interval bounds when dealing with proportions. Please note that when working with `survey_mean()`, we do not need to specify a method unless the `proportion` argument is `TRUE`. If `proportion` is `FALSE`, it calculates a symmetric `mean` type of confidence interval.
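As a hedged sketch of switching methods (output not shown), the call below estimates the proportion of households with A/C (`ACUsed`, used again later in this chapter) and requests a confidence interval computed with the `beta` method rather than the default `logit` method.
```
# A hedged sketch (output not shown): proportion of households with A/C,
# with a confidence interval from the "beta" method instead of "logit".
recs_des %>%
  group_by(ACUsed) %>%
  summarize(p = survey_prop(vartype = "ci", prop_method = "beta"))
```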
### 5\.4\.2 Examples
#### Example 1: One variable proportion
If we are interested in obtaining the proportion of people in each region in the RECS data, we can use `group_by()` and `survey_prop()` as shown below:
```
recs_des %>%
group_by(Region) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 4 × 3
## Region p p_se
## <fct> <dbl> <dbl>
## 1 Northeast 0.177 0.000000000212
## 2 Midwest 0.219 0.000000000262
## 3 South 0.379 0.000000000740
## 4 West 0.224 0.000000000816
```
17\.7% of the households are in the Northeast, 21\.9% are in the Midwest, and so on. Note that the proportions in column `p` add up to one.
The `survey_prop()` function is essentially the same as using `survey_mean()` with a categorical variable and without specifying a numeric variable in the `x` argument. The following code gives us the same results as above:
```
recs_des %>%
group_by(Region) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 4 × 3
## Region p p_se
## <fct> <dbl> <dbl>
## 1 Northeast 0.177 0.000000000212
## 2 Midwest 0.219 0.000000000262
## 3 South 0.379 0.000000000740
## 4 West 0.224 0.000000000816
```
#### Example 2: Conditional proportions
We can also obtain proportions by more than one variable. In the following example, we look at the proportion of housing units by Region and whether air conditioning (A/C) is used (`ACUsed`)[5](#fn5).
```
recs_des %>%
group_by(Region, ACUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 8 × 4
## # Groups: Region [4]
## Region ACUsed p p_se
## <fct> <lgl> <dbl> <dbl>
## 1 Northeast FALSE 0.110 0.00590
## 2 Northeast TRUE 0.890 0.00590
## 3 Midwest FALSE 0.0666 0.00508
## 4 Midwest TRUE 0.933 0.00508
## 5 South FALSE 0.0581 0.00278
## 6 South TRUE 0.942 0.00278
## 7 West FALSE 0.255 0.00759
## 8 West TRUE 0.745 0.00759
```
When specifying multiple variables, the proportions are conditional. In the results above, notice that the proportions sum to 1 within each region. This can be interpreted as the proportion of housing units with A/C within each region. For example, in the Northeast region, approximately 11\.0% of housing units don’t have A/C, while around 89\.0% have A/C.
#### Example 3: Joint proportions
If we’re interested in a joint proportion, we use the `interact()` function. In the example below, we apply the `interact()` function to `Region` and `ACUsed`:
```
recs_des %>%
group_by(interact(Region, ACUsed)) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 8 × 4
## Region ACUsed p p_se
## <fct> <lgl> <dbl> <dbl>
## 1 Northeast FALSE 0.0196 0.00105
## 2 Northeast TRUE 0.158 0.00105
## 3 Midwest FALSE 0.0146 0.00111
## 4 Midwest TRUE 0.204 0.00111
## 5 South FALSE 0.0220 0.00106
## 6 South TRUE 0.357 0.00106
## 7 West FALSE 0.0573 0.00170
## 8 West TRUE 0.167 0.00170
```
In this case, all proportions sum to 1, not just within regions. This means that 15\.8% of the population lives in the Northeast and has A/C. As noted earlier, we can use both the `survey_prop()` and `survey_mean()` functions, and they produce the same results.
#### Example 4: Overall mean
Below, we calculate the estimated average cost of electricity in the U.S. using `survey_mean()`. To include both the standard error and the confidence interval, we can include them in the `vartype` argument:
```
recs_des %>%
summarize(elec_bill = survey_mean(DOLLAREL,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## elec_bill elec_bill_se elec_bill_low elec_bill_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 1380. 5.38 1369. 1391.
```
Nationally, the average household spent $1,380 in 2020\.
#### Example 5: Means by subgroup
We can also calculate the estimated average cost of electricity in the U.S. by each region. To do this, we include a `group_by()` function with the variable of interest before the `summarize()` function:
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_mean(DOLLAREL))
```
```
## # A tibble: 4 × 3
## Region elec_bill elec_bill_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
```
Households from the West spent approximately $1,211, while in the South, the average spending was $1,548\.
5\.5 Quantiles and medians
--------------------------
To better understand the distribution of a continuous variable like income, we can calculate quantiles at specific points. For example, computing estimates of the quartiles (25%, 50%, 75%) helps us understand how income is spread across the population. We use the `survey_quantile()` function to calculate quantiles in survey data.
Medians are useful for finding the midpoint of a continuous distribution when the data are skewed, as medians are less affected by outliers compared to means. The median is the same as the 50th percentile, meaning the value where 50% of the data are higher and 50% are lower. Because medians are a special, common case of quantiles, we have a dedicated function called `survey_median()` for calculating the median in survey data. Alternatively, we can use the `survey_quantile()` function with the `quantiles` argument set to `0.5` to achieve the same result.
### 5\.5\.1 Syntax
The syntax for `survey_quantile()` and `survey_median()` is nearly identical:
```
survey_quantile(
x,
quantiles,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
interval_type =
c("mean", "beta", "xlogit", "asin", "score", "quantile"),
qrule = c("math", "school", "shahvaish", "hf1", "hf2", "hf3",
"hf4", "hf5", "hf6", "hf7", "hf8", "hf9"),
df = NULL
)
survey_median(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
interval_type =
c("mean", "beta", "xlogit", "asin", "score", "quantile"),
qrule = c("math", "school", "shahvaish", "hf1", "hf2", "hf3",
"hf4", "hf5", "hf6", "hf7", "hf8", "hf9"),
df = NULL
)
```
The arguments available in both functions are:
* `x`: a variable, expression, or empty
* `na.rm`: an indicator of whether missing values should be dropped, defaults to `FALSE`
* `vartype`: type(s) of variation estimate to calculate, defaults to `se` (standard error)
* `level`: a number or a vector indicating the confidence level, defaults to 0\.95
* `interval_type`: method for calculating a confidence interval
* `qrule`: rule for defining quantiles. The default is the lower end of the quantile interval (“math”). The midpoint of the quantile interval is the “school” rule. “hf1” to “hf9” are weighted analogs to type\=1 to 9 in `quantile()`. “shahvaish” corresponds to a rule proposed by Shah and Vaish ([2006](#ref-shahvaish)). See `vignette("qrule", package="survey")` for more information.
* `df`: (for `vartype = 'ci'`), a numeric value indicating degrees of freedom for the t\-distribution
The only difference between `survey_quantile()` and `survey_median()` is the inclusion of the `quantiles` argument in the `survey_quantile()` function. This argument takes a vector with values between 0 and 1 to indicate which quantiles to calculate. For example, if we wanted the quartiles of a variable, we would provide `quantiles = c(0.25, 0.5, 0.75)`. While we can specify quantiles of 0 and 1, which represent the minimum and maximum, this is not recommended. It only returns the minimum and maximum of the respondents and cannot be extrapolated to the population, as there is no valid definition of standard error.
In Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax), we provide an overview of the different variability types. The interval used in confidence intervals for most measures, such as means and counts, is referred to as a Wald\-type interval. However, this is not always the most accurate interval for quantiles. Similar to confidence intervals for proportions, quantiles have various interval types, including asin, beta, mean, and xlogit (see Section [5\.4\.1](c05-descriptive-analysis.html#desc-meanprop-syntax)). Quantiles also have two more methods available:
* `score`: the Francisco and Fuller confidence interval based on inverting a score test (only available for design\-based survey objects and not replicate\-based objects)
* `quantile`: based on the replicates of the quantile. This is not valid for jackknife\-type replicates but is available for bootstrap and BRR replicates.
One note with the `score` method is that when there are numerous ties in the data, this method may produce confidence intervals that do not contain the estimate. When dealing with a high propensity for ties (e.g., many respondents are the same age), it is recommended to use another method. SUDAAN, for example, uses the `score` method but adds noise to the values to prevent issues. The documentation in the {survey} package indicates, in general, that the `score` method may have poorer performance compared to the beta and logit intervals ([Lumley 2010](#ref-lumley2010complex)).
### 5\.5\.2 Examples
#### Example 1: Overall quartiles
Quantiles provide insights into the distribution of a variable. Let’s look into the quartiles, specifically, the first quartile (p\=0\.25\), the median (p\=0\.5\), and the third quartile (p\=0\.75\) of electric bills.
```
recs_des %>%
summarize(elec_bill = survey_quantile(DOLLAREL,
quantiles = c(0.25, .5, 0.75)
))
```
```
## # A tibble: 1 × 6
## elec_bill_q25 elec_bill_q50 elec_bill_q75 elec_bill_q25_se
## <dbl> <dbl> <dbl> <dbl>
## 1 795. 1215. 1770. 5.69
## elec_bill_q50_se elec_bill_q75_se
## <dbl> <dbl>
## 1 6.33 9.99
```
The output above shows the values for the three quartiles of electric bill costs and their respective standard errors: the 25th percentile is $795 with a standard error of $5\.69, the 50th percentile (median) is $1,215 with a standard error of $6\.33, and the 75th percentile is $1,770 with a standard error of $9\.99\.
#### Example 2: Quartiles by subgroup
We can estimate the quantiles of electric bills by region by using the `group_by()` function:
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_quantile(DOLLAREL,
quantiles = c(0.25, .5, 0.75)
))
```
```
## # A tibble: 4 × 7
## Region elec_bill_q25 elec_bill_q50 elec_bill_q75 elec_bill_q25_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Northeast 740. 1148. 1712. 13.7
## 2 Midwest 769. 1149. 1632. 8.88
## 3 South 968. 1402. 1945. 10.6
## 4 West 623. 1028. 1568. 10.8
## elec_bill_q50_se elec_bill_q75_se
## <dbl> <dbl>
## 1 16.6 25.8
## 2 11.6 18.6
## 3 9.17 13.9
## 4 14.3 20.5
```
The 25th percentile for the Northeast region is $740, while it is $968 for the South.
#### Example 3: Minimum and maximum
As mentioned in the syntax section, we can specify quantiles of `0` (minimum) and `1` (maximum), and R calculates these values. However, these are only the minimum and maximum values in the data, and there is not enough information to determine their standard errors:
```
recs_des %>%
summarize(elec_bill = survey_quantile(DOLLAREL,
quantiles = c(0, 1)
))
```
```
## # A tibble: 1 × 4
## elec_bill_q00 elec_bill_q100 elec_bill_q00_se elec_bill_q100_se
## <dbl> <dbl> <dbl> <dbl>
## 1 -889. 15680. NaN 0
```
The minimum cost of electricity in the dataset is –$889, while the maximum is $15,680, but the standard errors are shown as `NaN` and `0`, respectively. Notice that the minimum cost is a negative number. This may be surprising, but some housing units with solar power sell their energy back to the grid and earn money, which is recorded as a negative expenditure.
#### Example 4: Overall median
We can calculate the estimated median cost of electricity in the U.S. using the `survey_median()` function:
```
recs_des %>%
summarize(elec_bill = survey_median(DOLLAREL))
```
```
## # A tibble: 1 × 2
## elec_bill elec_bill_se
## <dbl> <dbl>
## 1 1215. 6.33
```
Nationally, the median household spent $1,215 in 2020\. This is the same result as we obtained using the `survey_quantile()` function. Interestingly, the average electric bill for households that we calculated in Section [5\.4](c05-descriptive-analysis.html#desc-meanprop) is $1,380, but the estimated median electric bill is $1,215, indicating the distribution is likely right\-skewed.
#### Example 5: Medians by subgroup
We can calculate the estimated median cost of electricity in the U.S. by region using the `group_by()` function with the variable(s) of interest before the `summarize()` function, similar to when we found the mean by region.
```
recs_des %>%
group_by(Region) %>%
summarize(elec_bill = survey_median(DOLLAREL))
```
```
## # A tibble: 4 × 3
## Region elec_bill elec_bill_se
## <fct> <dbl> <dbl>
## 1 Northeast 1148. 16.6
## 2 Midwest 1149. 11.6
## 3 South 1402. 9.17
## 4 West 1028. 14.3
```
We estimate that households in the Northeast spent a median of $1,148 on electricity, and in the South, they spent a median of $1,402\.
5\.6 Ratios
-----------
A ratio is the sum of one variable divided by the sum of another, specifically:
\[ \frac{\sum x_i}{\sum y_i}. \]
Note that the ratio is not the same as calculating the following:
\[ \frac{1}{N} \sum \frac{x_i}{y_i} \]
which can be calculated with `survey_mean()` by creating a derived variable \(z = x/y\) and then calculating the mean of \(z\).
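The distinction is easy to see with a small toy example in plain R (the vectors below are made up purely for illustration and are not survey data):
```
# Hypothetical spending (x) and usage (y) for three units
x <- c(10, 20, 90)
y <- c(100, 100, 300)

sum(x) / sum(y) # ratio of totals: 120 / 500 = 0.24
mean(x / y)     # mean of unit-level ratios: (0.1 + 0.2 + 0.3) / 3 = 0.2
```
The two quantities generally differ, which is why the choice between `survey_ratio()` and `survey_mean()` depends on the estimand of interest.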
Say we wanted to assess the energy efficiency of homes in a standardized way, where we can compare homes of different sizes. We can calculate the ratio of energy consumption to the square footage of a home. This helps us meaningfully compare homes of different sizes by identifying how much energy is being used per unit of space. To calculate this ratio, we would run `survey_ratio(Energy Consumption in BTUs, Square Footage of Home)`. If, instead, we used `survey_mean(Energy Consumption in BTUs/Square Footage of Home)`, we would estimate the average energy consumption per square foot of all surveyed homes. While helpful in understanding general energy use, this statistic does not account for differences in home sizes.
### 5\.6\.1 Syntax
The syntax for `survey_ratio()` is as follows:
```
survey_ratio(
numerator,
denominator,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
deff = FALSE,
df = NULL
)
```
The arguments are:
* `numerator`: The numerator of the ratio
* `denominator`: The denominator of the ratio
* `na.rm`: A logical value to indicate whether missing values should be dropped
* `vartype`: type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: A single number or vector of numbers indicating the confidence level
* `deff`: A logical value to indicate whether the design effect should be returned (this is described in more detail in Section [5\.9\.3](c05-descriptive-analysis.html#desc-deff))
* `df`: (For vartype \= “ci” only) A numeric value indicating the degrees of freedom for t\-distribution
### 5\.6\.2 Examples
#### Example 1: Overall ratios
Suppose we wanted to find the ratio of dollars spent on liquid propane per unit of energy (in British thermal units \[Btu]) nationally[6](#fn6). To find the average cost to a household, we can use `survey_mean()`. However, to find the national unit rate, we can use `survey_ratio()`. In the following example, we show both methods and discuss the interpretation of each:
```
recs_des %>%
summarize(
DOLLARLP_Tot = survey_total(DOLLARLP, vartype = NULL),
BTULP_Tot = survey_total(BTULP, vartype = NULL),
DOL_BTU_Rat = survey_ratio(DOLLARLP, BTULP),
DOL_BTU_Avg = survey_mean(DOLLARLP / BTULP, na.rm = TRUE)
)
```
```
## # A tibble: 1 × 6
## DOLLARLP_Tot BTULP_Tot DOL_BTU_Rat DOL_BTU_Rat_se DOL_BTU_Avg
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 8122911173. 391425311586. 0.0208 0.000240 0.0240
## DOL_BTU_Avg_se
## <dbl>
## 1 0.000223
```
The ratio of the total spent on liquid propane to the total consumption was 0\.0208, while the average rate was 0\.024\. With a bit of calculation, we can show that this is simply the ratio of the two totals: `DOLLARLP_Tot`/`BTULP_Tot`\=8,122,911,173/391,425,311,586\=0\.0208\. Although the point estimate can be calculated manually in this manner, the standard error requires the use of the `survey_ratio()` function. The average can be interpreted as the average rate paid by a household.
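As a quick check, dividing the two estimated totals reproduces the point estimate of the ratio, though not its standard error (a sketch using the totals reported above):
```
# Ratio of the estimated totals shown in the output above (~0.0208);
# the standard error still requires survey_ratio()
8122911173 / 391425311586
```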
#### Example 2: Ratios by subgroup
As previously done with other estimates, we can use `group_by()` to examine whether this ratio varies by region.
```
recs_des %>%
group_by(Region) %>%
summarize(DOL_BTU_Rat = survey_ratio(DOLLARLP, BTULP)) %>%
arrange(DOL_BTU_Rat)
```
```
## # A tibble: 4 × 3
## Region DOL_BTU_Rat DOL_BTU_Rat_se
## <fct> <dbl> <dbl>
## 1 Midwest 0.0158 0.000240
## 2 South 0.0245 0.000388
## 3 West 0.0246 0.000875
## 4 Northeast 0.0247 0.000488
```
Although not a formal statistical test, it appears that the cost ratios for liquid propane are the lowest in the Midwest (0\.0158\).
5\.7 Correlations
-----------------
The correlation is a measure of the linear relationship between two continuous variables, which ranges between –1 and 1\. The most commonly used method is Pearson’s correlation (referred to as correlation henceforth). A sample correlation for a simple random sample is calculated as follows:
\[ \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum (x_i - \bar{x})^2} \sqrt{\sum (y_i - \bar{y})^2}} \]
When using `survey_corr()` for designs other than a simple random sample, the weights are applied when estimating the correlation.
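As a sanity check on the formula, we can compute the unweighted version by hand on toy vectors and compare it to base R's `cor()`. This ignores the survey weights entirely and is only meant to illustrate the formula; `survey_corr()` handles the weighting and the design\-based standard error:
```
# Toy vectors, not survey data
x <- c(1, 2, 4, 7, 9)
y <- c(2, 1, 5, 8, 12)

num <- sum((x - mean(x)) * (y - mean(y)))
den <- sqrt(sum((x - mean(x))^2)) * sqrt(sum((y - mean(y))^2))

num / den # Pearson correlation computed from the formula
cor(x, y) # matches the base R implementation
```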
### 5\.7\.1 Syntax
The syntax for `survey_corr()` is as follows:
```
survey_corr(
x,
y,
na.rm = FALSE,
vartype = c("se", "ci", "var", "cv"),
level = 0.95,
df = NULL
)
```
The arguments are:
* `x`: A variable or expression
* `y`: A variable or expression
* `na.rm`: A logical value to indicate whether missing values should be dropped
* `vartype`: Type(s) of variation estimate to calculate including any of `c("se", "ci", "var", "cv")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: (For vartype \= “ci” only) A single number or vector of numbers indicating the confidence level
* `df`: (For vartype \= “ci” only) A numeric value indicating the degrees of freedom for t\-distribution
### 5\.7\.2 Examples
#### Example 1: Overall correlation
We can calculate the correlation between the total square footage of homes (`TOTSQFT_EN`)[7](#fn7) and electricity consumption (`BTUEL`)[8](#fn8).
```
recs_des %>%
summarize(SQFT_Elec_Corr = survey_corr(TOTSQFT_EN, BTUEL))
```
```
## # A tibble: 1 × 2
## SQFT_Elec_Corr SQFT_Elec_Corr_se
## <dbl> <dbl>
## 1 0.417 0.00689
```
The correlation between the total square footage of homes and electricity consumption is 0\.417, indicating a moderate positive relationship.
#### Example 2: Correlations by subgroup
We can explore the correlation between total square footage and electricity cost (`DOLLAREL`) by subgroup, such as whether A/C is used (`ACUsed`).
```
recs_des %>%
group_by(ACUsed) %>%
summarize(SQFT_Elec_Corr = survey_corr(TOTSQFT_EN, DOLLAREL))
```
```
## # A tibble: 2 × 3
## ACUsed SQFT_Elec_Corr SQFT_Elec_Corr_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.290 0.0240
## 2 TRUE 0.401 0.00808
```
For homes without A/C, there is a small positive correlation between total square footage and electricity cost (0\.29\). For homes with A/C, the correlation of 0\.401 indicates a stronger positive relationship between total square footage and electricity cost.
5\.8 Standard deviation and variance
------------------------------------
All survey functions produce an estimate of the variability of a given estimate, so no additional function is needed to obtain standard errors. However, if we are specifically interested in the population variance or the population standard deviation of a variable, we can use the `survey_var()` and `survey_sd()` functions. In our experience, it is not common practice to use these functions. They can be used when designing a future study to gauge population variability and inform sampling precision.
### 5\.8\.1 Syntax
As with non\-survey data, the standard deviation estimate is the square root of the variance estimate. Therefore, the `survey_var()` and `survey_sd()` functions share the same arguments, except that `survey_sd()` does not have a `vartype` argument.
```
survey_var(
x,
na.rm = FALSE,
vartype = c("se", "ci", "var"),
level = 0.95,
df = NULL
)
survey_sd(
x,
na.rm = FALSE
)
```
The arguments are:
* `x`: A variable or expression, or empty
* `na.rm`: A logical value to indicate whether missing values should be dropped
* `vartype`: Type(s) of variation estimate to calculate including any of `c("se", "ci", "var")`, defaults to `se` (standard error) (see Section [5\.2\.1](c05-descriptive-analysis.html#desc-count-syntax) for more information)
* `level`: (For vartype \= “ci” only) A single number or vector of numbers indicating the confidence level
* `df`: (For vartype \= “ci” only) A numeric value indicating the degrees of freedom for t\-distribution
### 5\.8\.2 Examples
#### Example 1: Overall variability
Let’s return to electricity bills and explore the variability in electricity expenditure.
```
recs_des %>%
summarize(
var_elbill = survey_var(DOLLAREL),
sd_elbill = survey_sd(DOLLAREL)
)
```
```
## # A tibble: 1 × 3
## var_elbill var_elbill_se sd_elbill
## <dbl> <dbl> <dbl>
## 1 704906. 13926. 840.
```
We may encounter a warning related to deprecated underlying calculations performed by the `survey_var()` function. This warning is a result of changes in the way R handles recycling in vectorized operations. The results are still valid. They give an estimate of the population variance of electricity bills (`var_elbill`), the standard error of that variance (`var_elbill_se`), and the estimated population standard deviation of electricity bills (`sd_elbill`). Note that no standard error is associated with the standard deviation; this is the only estimate that does not include a standard error.
#### Example 2: Variability by subgroup
To find out if the variability in electricity expenditure is similar across regions, we can calculate the variance by region using `group_by()`:
```
recs_des %>%
group_by(Region) %>%
summarize(
var_elbill = survey_var(DOLLAREL),
sd_elbill = survey_sd(DOLLAREL)
)
```
```
## # A tibble: 4 × 4
## Region var_elbill var_elbill_se sd_elbill
## <fct> <dbl> <dbl> <dbl>
## 1 Northeast 775450. 38843. 881.
## 2 Midwest 552423. 25252. 743.
## 3 South 702521. 30641. 838.
## 4 West 717886. 30597. 847.
```
5\.9 Additional topics
----------------------
### 5\.9\.1 Unweighted analysis
Sometimes, it is helpful to calculate an unweighted estimate of a given variable. For this, we use the `unweighted()` function in the `summarize()` function. The `unweighted()` function calculates unweighted summaries from a `tbl_svy` object, providing the summary among the respondents without extrapolating to a population estimate. The `unweighted()` function can be used in conjunction with any {dplyr} functions. Here is an example looking at the average household electricity cost:
```
recs_des %>%
summarize(
elec_bill = survey_mean(DOLLAREL),
elec_unweight = unweighted(mean(DOLLAREL))
)
```
```
## # A tibble: 1 × 3
## elec_bill elec_bill_se elec_unweight
## <dbl> <dbl> <dbl>
## 1 1380. 5.38 1425.
```
It is estimated that American residential households spent an average of $1,380 on electricity in 2020, and the estimate has a standard error of $5\.38\. The `unweighted()` function calculates the unweighted average and represents the average amount of money spent on electricity in 2020 by the respondents, which was $1,425\.
### 5\.9\.2 Subpopulation analysis
We mentioned using `filter()` to subset a survey object for analysis. This operation should be done after creating the survey design object. Subsetting the data before creating the object can lead to incorrect variability estimates if the subsetting removes an entire Primary Sampling Unit (PSU; see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on PSUs and sample designs).
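A minimal sketch of the distinction is below. The `recs` data frame and the `as_survey_design()` call are placeholders for however `recs_des` was actually constructed; the point is only the order of the operations:
```
# Recommended: create the design object first, then subset it
recs_des %>%
  filter(BTUNG > 0) %>%
  summarize(NG_mean = survey_mean(DOLLARNG))

# Not recommended: subsetting the raw data first can drop entire PSUs,
# so the resulting design no longer reflects the full sample structure
# recs %>%
#   filter(BTUNG > 0) %>%
#   as_survey_design(...) # placeholder for the actual design specification
```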
Suppose we want estimates of the average amount spent on natural gas among housing units using natural gas (based on the variable `BTUNG`)[9](#fn9). We first filter records to only include records where `BTUNG > 0` and then find the average amount spent.
```
recs_des %>%
filter(BTUNG > 0) %>%
summarize(NG_mean = survey_mean(DOLLARNG,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## NG_mean NG_mean_se NG_mean_low NG_mean_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 631. 4.64 621. 640.
```
The estimated average amount spent on natural gas among households that use natural gas is $631\. Let’s compare this to the mean when we do not filter.
```
recs_des %>%
summarize(NG_mean = survey_mean(DOLLARNG,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## NG_mean NG_mean_se NG_mean_low NG_mean_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 382. 3.41 375. 389.
```
Based on this calculation, the estimated average amount spent on natural gas is $382\. Note that applying the filter to include only housing units that use natural gas yields a higher mean than when not applying the filter. This is because including housing units that do not use natural gas introduces many $0 amounts, impacting the mean calculation.
### 5\.9\.3 Design effects
The design effect measures how the precision of an estimate is influenced by the sampling design. In other words, it measures how much more or less statistically efficient the survey design is compared to a simple random sample (SRS). It is computed by taking the ratio of the estimate’s variance under the design at hand to the estimate’s variance under a simple random sample without replacement. A design effect less than 1 indicates that the design is more statistically efficient than an SRS design, which is rare but possible in a stratified sampling design where the outcome correlates with the stratification variable(s). A design effect greater than 1 indicates that the design is less statistically efficient than an SRS design. From a design effect, we can calculate the effective sample size as follows:
\[ n_{eff} = \frac{n}{D_{eff}} \]
where \(n\) is the nominal sample size (the number of survey responses) and \(D_{eff}\) is the estimated design effect. We can interpret the effective sample size \(n_{eff}\) as the hypothetical sample size that a survey using an SRS design would need to achieve the same precision as the design at hand. Design effects are specific to each outcome; outcomes that are less clustered in the population have smaller design effects than outcomes that are clustered.
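As a small illustration of this calculation (the nominal sample size below is hypothetical, chosen only to show the arithmetic):
```
n    <- 10000                          # hypothetical number of survey responses
deff <- c(BTUEL = 0.597, BTULP = 1.21) # design effects like those estimated below

n / deff # effective sample sizes: ~16750 for BTUEL, ~8264 for BTULP
```
A design effect below 1 yields an effective sample size larger than the nominal one (an SRS would need more respondents to match the precision), while a design effect above 1 yields a smaller effective sample size.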
In the {srvyr} package, design effects can be calculated for totals, proportions, means, and ratio estimates by setting the `deff` argument to `TRUE` in the corresponding functions. In the example below, we calculate the design effects for the average consumption of electricity (`BTUEL`), natural gas (`BTUNG`), liquid propane (`BTULP`), fuel oil (`BTUFO`), and wood (`BTUWOOD`) by setting `deff = TRUE`:
```
recs_des %>%
summarize(across(
c(BTUEL, BTUNG, BTULP, BTUFO, BTUWOOD),
~ survey_mean(.x, deff = TRUE, vartype = NULL)
)) %>%
select(ends_with("deff"))
```
```
## # A tibble: 1 × 5
## BTUEL_deff BTUNG_deff BTULP_deff BTUFO_deff BTUWOOD_deff
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.597 0.938 1.21 0.720 1.10
```
For the values less than 1 (`BTUEL_deff` and `BTUFO_deff`), the results suggest that the survey design is more efficient than a simple random sample. For the values greater than 1 (`BTUNG_deff`, `BTULP_deff`, and `BTUWOOD_deff`), the results indicate that the survey design is less efficient than a simple random sample.
### 5\.9\.4 Creating summary rows
When using `group_by()` in analysis, the results are returned with a row for each group or combination of groups. Often, we want both breakdowns by group and a summary row for the estimate representing the entire population. For example, we may want the average electricity consumption by region and nationally. The {srvyr} package has the convenient `cascade()` function, which adds summary rows for the total of a group. It is used instead of `summarize()` and has similar functionalities along with some additional features.
#### Syntax
The syntax is as follows:
```
cascade(
.data,
...,
.fill = NA,
.fill_level_top = FALSE,
.groupings = NULL
)
```
where the arguments are:
* `.data`: A `tbl_svy` object
* `...`: Name\-value pairs of summary functions (same as the `summarize()` function)
* `.fill`: Value to fill in for group summaries (defaults to `NA`)
* `.fill_level_top`: When filling factor variables, whether to put the value ‘.fill’ in the first position (defaults to FALSE, placing it at the bottom)
* `.groupings`: (Experimental) A manual specification of the groupings to use instead of the default cascading behavior
#### Example
First, let’s look at an example where we calculate the average household electricity cost. Then, we build on it to examine the features of the `cascade()` function. In the first example below, we calculate the average household energy cost `DOLLAREL_mn` using `survey_mean()` without modifying any of the argument defaults in the function:
```
recs_des %>%
cascade(DOLLAREL_mn = survey_mean(DOLLAREL))
```
```
## # A tibble: 1 × 2
## DOLLAREL_mn DOLLAREL_mn_se
## <dbl> <dbl>
## 1 1380. 5.38
```
Next, let’s group the results by region by adding `group_by()` before the `cascade()` function:
```
recs_des %>%
group_by(Region) %>%
cascade(DOLLAREL_mn = survey_mean(DOLLAREL))
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
## 5 <NA> 1380. 5.38
```
We can see the estimated average electricity bills by region: $1,343 for the Northeast, $1,548 for the South, and so on. The last row, where `Region = NA`, is the national average electricity bill, $1,380\. However, naming the national “region” as `NA` is not very informative. We can give it a better name using the `.fill` argument.
```
recs_des %>%
group_by(Region) %>%
cascade(
DOLLAREL_mn = survey_mean(DOLLAREL),
.fill = "National"
)
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
## 5 National 1380. 5.38
```
We can move the summary row to the first row by adding `.fill_level_top = TRUE` to `cascade()`:
```
recs_des %>%
group_by(Region) %>%
cascade(
DOLLAREL_mn = survey_mean(DOLLAREL),
.fill = "National",
.fill_level_top = TRUE
)
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 National 1380. 5.38
## 2 Northeast 1343. 14.6
## 3 Midwest 1293. 11.7
## 4 South 1548. 10.3
## 5 West 1211. 12.0
```
While the results remain the same, the table is now easier to interpret.
### 5\.9\.5 Calculating estimates for many outcomes
Often, we are interested in a summary statistic across many variables. Useful tools include the `across()` function in {dplyr}, shown a few times above, and the `map()` function in {purrr}.
The `across()` function applies the same function to multiple columns within `summarize()`. This works well with all functions shown above, except for `survey_prop()`. In a later example, we tackle summarizing multiple proportions.
#### Example 1: `across()`
Suppose we want to calculate the total and average consumption, along with coefficients of variation (CV), for each fuel type. These include the reported consumption of electricity (`BTUEL`), natural gas (`BTUNG`), liquid propane (`BTULP`), fuel oil (`BTUFO`), and wood (`BTUWOOD`), as mentioned in the section on design effects. We can take advantage of the fact that these are the only variables that start with “BTU” by selecting them with `starts_with("BTU")` in the `across()` function. For each selected column (`.x`), `across()` creates a list of two functions to be applied: `survey_total()` to calculate the total and `survey_mean()` to calculate the mean, along with their CV (`vartype = "cv"`). Finally, `.unpack = "{outer}.{inner}"` specifies that the resulting column names are a concatenation of the variable name, followed by Total or Mean, and then “coef” or “cv.”
```
consumption_ests <- recs_des %>%
summarize(across(
starts_with("BTU"),
list(
Total = ~ survey_total(.x, vartype = "cv"),
Mean = ~ survey_mean(.x, vartype = "cv")
),
.unpack = "{outer}.{inner}"
))
consumption_ests
```
```
## # A tibble: 1 × 20
## BTUEL_Total.coef BTUEL_Total._cv BTUEL_Mean.coef BTUEL_Mean._cv
## <dbl> <dbl> <dbl> <dbl>
## 1 4453284510065 0.00377 36051. 0.00377
## # ℹ 16 more variables: BTUNG_Total.coef <dbl>, BTUNG_Total._cv <dbl>,
## # BTUNG_Mean.coef <dbl>, BTUNG_Mean._cv <dbl>,
## # BTULP_Total.coef <dbl>, BTULP_Total._cv <dbl>,
## # BTULP_Mean.coef <dbl>, BTULP_Mean._cv <dbl>,
## # BTUFO_Total.coef <dbl>, BTUFO_Total._cv <dbl>,
## # BTUFO_Mean.coef <dbl>, BTUFO_Mean._cv <dbl>,
## # BTUWOOD_Total.coef <dbl>, BTUWOOD_Total._cv <dbl>, …
```
The estimated total consumption of electricity (`BTUEL`) is 4,453,284,510,065 (`BTUEL_Total.coef`), the estimated average consumption is 36,051 (`BTUEL_Mean.coef`), and the CV is 0\.0038\.
In the example above, the table was quite wide. We may prefer a row for each fuel type. Using the `pivot_longer()` and `pivot_wider()` functions from {tidyr} can help us achieve this. First, we use `pivot_longer()` to make each variable a column, changing the data to a “long” format. We use the `names_to` argument to specify new column names: `FuelType`, `Stat`, and `Type`. Then, the `names_pattern` argument extracts the names in the original column names based on the regular expression pattern `BTU(.*)_(.*)\\.(.*)`. They are saved in the column names defined in `names_to`.
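To see what that regular expression does to a single column name, we can apply it directly with `stringr::str_match()` before pivoting (a small illustration using one of the column names created above):
```
library(stringr)

# The three capture groups become the FuelType, Stat, and Type columns
str_match("BTUEL_Total.coef", "BTU(.*)_(.*)\\.(.*)")
# full match           group 1  group 2  group 3
# "BTUEL_Total.coef"   "EL"     "Total"  "coef"
```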
```
consumption_ests_long <- consumption_ests %>%
pivot_longer(
cols = everything(),
names_to = c("FuelType", "Stat", "Type"),
names_pattern = "BTU(.*)_(.*)\\.(.*)"
)
consumption_ests_long
```
```
## # A tibble: 20 × 4
## FuelType Stat Type value
## <chr> <chr> <chr> <dbl>
## 1 EL Total coef 4453284510065
## 2 EL Total _cv 0.00377
## 3 EL Mean coef 36051.
## 4 EL Mean _cv 0.00377
## 5 NG Total coef 4240769382106.
## 6 NG Total _cv 0.00908
## 7 NG Mean coef 34330.
## 8 NG Mean _cv 0.00908
## 9 LP Total coef 391425311586.
## 10 LP Total _cv 0.0380
## 11 LP Mean coef 3169.
## 12 LP Mean _cv 0.0380
## 13 FO Total coef 395699976655.
## 14 FO Total _cv 0.0343
## 15 FO Mean coef 3203.
## 16 FO Mean _cv 0.0343
## 17 WOOD Total coef 345091088404.
## 18 WOOD Total _cv 0.0454
## 19 WOOD Mean coef 2794.
## 20 WOOD Mean _cv 0.0454
```
Then, we use `pivot_wider()` to create a table that is nearly ready for publication. Within the function, we can make the names for each element more descriptive and informative by gluing the `Stat` and `Type` together with `names_glue`. Further details on creating publication\-ready tables are covered in Chapter [8](c08-communicating-results.html#c08-communicating-results).
```
consumption_ests_long %>%
mutate(Type = case_when(
Type == "coef" ~ "",
Type == "_cv" ~ " (CV)"
)) %>%
pivot_wider(
id_cols = FuelType,
names_from = c(Stat, Type),
names_glue = "{Stat}{Type}",
values_from = value
)
```
```
## # A tibble: 5 × 5
## FuelType Total `Total (CV)` Mean `Mean (CV)`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 EL 4453284510065 0.00377 36051. 0.00377
## 2 NG 4240769382106. 0.00908 34330. 0.00908
## 3 LP 391425311586. 0.0380 3169. 0.0380
## 4 FO 395699976655. 0.0343 3203. 0.0343
## 5 WOOD 345091088404. 0.0454 2794. 0.0454
```
#### Example 2: Proportions with `across()`
As mentioned earlier, proportions do not work as well directly with the `across()` method. If we want the proportion of houses with A/C and the proportion of houses with heating, we require two separate `group_by()` statements as shown below:
```
recs_des %>%
group_by(ACUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 2 × 3
## ACUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.113 0.00306
## 2 TRUE 0.887 0.00306
```
```
recs_des %>%
group_by(SpaceHeatingUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 2 × 3
## SpaceHeatingUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.0469 0.00207
## 2 TRUE 0.953 0.00207
```
We estimate 88\.7% of households have A/C and 95\.3% have heating.
If we are only interested in the `TRUE` outcomes, that is, the proportion of households that have A/C and the proportion that have heating, we can simplify the code. Applying `survey_mean()` to a logical variable is the same as using `survey_prop()`, as shown below:
```
cool_heat_tab <- recs_des %>%
summarize(across(c(ACUsed, SpaceHeatingUsed), ~ survey_mean(.x),
.unpack = "{outer}.{inner}"
))
cool_heat_tab
```
```
## # A tibble: 1 × 4
## ACUsed.coef ACUsed._se SpaceHeatingUsed.coef SpaceHeatingUsed._se
## <dbl> <dbl> <dbl> <dbl>
## 1 0.887 0.00306 0.953 0.00207
```
Note that the estimates are the same as those obtained using the separate `group_by()` statements. As before, we can use `pivot_longer()` to structure the table in a more suitable format for distribution.
```
cool_heat_tab %>%
pivot_longer(everything(),
names_to = c("Comfort", ".value"),
names_pattern = "(.*)\\.(.*)"
) %>%
rename(
p = coef,
se = `_se`
)
```
```
## # A tibble: 2 × 3
## Comfort p se
## <chr> <dbl> <dbl>
## 1 ACUsed 0.887 0.00306
## 2 SpaceHeatingUsed 0.953 0.00207
```
#### Example 3: `purrr::map()`
Loops are a common tool when dealing with repetitive calculations. The {purrr} package provides the `map()` functions, which, like a loop, allow us to perform the same task across different elements ([Wickham and Henry 2023](#ref-R-purrr)). In our case, we may want to calculate proportions from the same design multiple times. A straightforward approach is to design the calculation for one variable, build a function based on that, and then apply it iteratively for the rest of the variables.
Suppose we want to create a table that shows the proportion of people who express trust in their government (`TrustGovernment`)[10](#fn10) as well as those that trust in people (`TrustPeople`)[11](#fn11) using data from the 2020 ANES.
First, we create a table for a single variable. The table includes the variable name as a column, the response, and the corresponding percentage with its standard error.
```
anes_des %>%
drop_na(TrustGovernment) %>%
group_by(TrustGovernment) %>%
summarize(p = survey_prop() * 100) %>%
mutate(Variable = "TrustGovernment") %>%
rename(Answer = TrustGovernment) %>%
select(Variable, everything())
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
```
We estimate that 1\.55% of people always trust the government, 13\.16% trust the government most of the time, and so on.
Now, we want to use the original series of steps as a template to create a general function `calcps()` that can apply the same steps to other variables. We replace `TrustGovernment` with an argument for a generic variable, `var`. Referring to `var` involves a bit of tidy evaluation, an advanced skill. To learn more, we recommend Wickham ([2019](#ref-wickham2019advanced)).
```
calcps <- function(var) {
anes_des %>%
drop_na(!!sym(var)) %>%
group_by(!!sym(var)) %>%
summarize(p = survey_prop() * 100) %>%
mutate(Variable = var) %>%
rename(Answer := !!sym(var)) %>%
select(Variable, everything())
}
```
We then apply this function to the two variables of interest, `TrustGovernment` and `TrustPeople`:
```
calcps("TrustGovernment")
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
```
```
calcps("TrustPeople")
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustPeople Always 0.809 0.164
## 2 TrustPeople Most of the time 41.4 0.857
## 3 TrustPeople About half the time 28.2 0.776
## 4 TrustPeople Some of the time 24.5 0.670
## 5 TrustPeople Never 5.05 0.422
```
Finally, we use `map()` to iterate over as many variables as needed. We feed our desired variables into `map()` along with our custom function, `calcps`. The output is a tibble with the variable names in the “Variable” column, the responses in the “Answer” column, along with the percentage and standard error. The `list_rbind()` function combines the rows into a single tibble. This example extends nicely when dealing with numerous variables for which we want percentage estimates.
```
c("TrustGovernment", "TrustPeople") %>%
map(calcps) %>%
list_rbind()
```
```
## # A tibble: 10 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
## 6 TrustPeople Always 0.809 0.164
## 7 TrustPeople Most of the time 41.4 0.857
## 8 TrustPeople About half the time 28.2 0.776
## 9 TrustPeople Some of the time 24.5 0.670
## 10 TrustPeople Never 5.05 0.422
```
In addition to our results above, we can also see the output for `TrustPeople`. While we estimate that 1\.55% of people always trust the government, 0\.81% always trust people.
### 5\.9\.1 Unweighted analysis
Sometimes, it is helpful to calculate an unweighted estimate of a given variable. For this, we use the `unweighted()` function within `summarize()`. The `unweighted()` function calculates unweighted summaries from a `tbl_svy` object, providing the summary among the respondents without extrapolating to a population estimate. It can be used in conjunction with any {dplyr} function. Here is an example looking at the average household electricity cost:
```
recs_des %>%
summarize(
elec_bill = survey_mean(DOLLAREL),
elec_unweight = unweighted(mean(DOLLAREL))
)
```
```
## # A tibble: 1 × 3
## elec_bill elec_bill_se elec_unweight
## <dbl> <dbl> <dbl>
## 1 1380. 5.38 1425.
```
It is estimated that American residential households spent an average of $1,380 on electricity in 2020, and the estimate has a standard error of $5\.38\. The unweighted average of $1,425 represents the average amount the respondents themselves spent on electricity in 2020, without any weighting to the population.
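As a quick consistency check (a sketch, assuming the `recs_2020` respondent data frame from {srvyrexploR} and the {tidyverse} are loaded as in the chapter setup), the unweighted estimate should match a plain mean computed directly on the respondent\-level data:
```
# Unweighted mean of electricity cost among the respondents only
recs_2020 %>%
  summarize(elec_unweight = mean(DOLLAREL))
```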
### 5\.9\.2 Subpopulation analysis
We mentioned using `filter()` to subset a survey object for analysis. This operation should be done after creating the survey design object. Subsetting the data before creating the object can lead to incorrect variability estimates if the subsetting removes an entire Primary Sampling Unit (PSU; see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) for more information on PSUs and sample designs).
Suppose we want estimates of the average amount spent on natural gas among housing units that use natural gas (based on the variable `BTUNG`)[9](#fn9). We first filter the data to include only records where `BTUNG > 0` and then find the average amount spent.
```
recs_des %>%
filter(BTUNG > 0) %>%
summarize(NG_mean = survey_mean(DOLLARNG,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## NG_mean NG_mean_se NG_mean_low NG_mean_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 631. 4.64 621. 640.
```
The estimated average amount spent on natural gas among households that use natural gas is $631\. Let’s compare this to the mean when we do not filter.
```
recs_des %>%
summarize(NG_mean = survey_mean(DOLLARNG,
vartype = c("se", "ci")
))
```
```
## # A tibble: 1 × 4
## NG_mean NG_mean_se NG_mean_low NG_mean_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 382. 3.41 375. 389.
```
Based on this calculation, the estimated average amount spent on natural gas is $382\. Note that applying the filter to include only housing units that use natural gas yields a higher mean than when not applying the filter. This is because including housing units that do not use natural gas introduces many $0 amounts, impacting the mean calculation.
### 5\.9\.3 Design effects
The design effect measures how the precision of an estimate is influenced by the sampling design. In other words, it measures how much more or less statistically efficient the survey design is compared to a simple random sample (SRS). It is computed by taking the ratio of the estimate’s variance under the design at hand to the estimate’s variance under a simple random sample without replacement. A design effect less than 1 indicates that the design is more statistically efficient than an SRS design, which is rare but possible in a stratified sampling design where the outcome correlates with the stratification variable(s). A design effect greater than 1 indicates that the design is less statistically efficient than an SRS design. From a design effect, we can calculate the effective sample size as follows:
\\\[n\_{eff}\=\\frac{n}{D\_{eff}} \\]
where \\(n\\) is the nominal sample size (the number of survey responses) and \\(D\_{eff}\\) is the estimated design effect. We can interpret the effective sample size \\(n\_{eff}\\) as the hypothetical sample size that a survey using an SRS design would need to achieve the same precision as the design at hand. Design effects are specific to each outcome; outcomes that are less clustered in the population have smaller design effects than outcomes that are more clustered.
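As a small numeric illustration of this formula (using made\-up values rather than estimates from RECS), the effective sample size can be computed directly:
```
# Hypothetical values for illustration only
n <- 5000    # nominal sample size (number of survey responses)
deff <- 1.25 # estimated design effect for an outcome

n_eff <- n / deff
n_eff # 5000 / 1.25 = 4000 "effective" SRS responses
```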
In the {srvyr} package, design effects can be calculated for totals, proportions, means, and ratio estimates by setting the `deff` argument to `TRUE` in the corresponding functions. In the example below, we calculate the design effects for the average consumption of electricity (`BTUEL`), natural gas (`BTUNG`), liquid propane (`BTULP`), fuel oil (`BTUFO`), and wood (`BTUWOOD`) by setting `deff = TRUE`:
```
recs_des %>%
summarize(across(
c(BTUEL, BTUNG, BTULP, BTUFO, BTUWOOD),
~ survey_mean(.x, deff = TRUE, vartype = NULL)
)) %>%
select(ends_with("deff"))
```
```
## # A tibble: 1 × 5
## BTUEL_deff BTUNG_deff BTULP_deff BTUFO_deff BTUWOOD_deff
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.597 0.938 1.21 0.720 1.10
```
For the values less than 1 (`BTUEL_deff` and `BTUFO_deff`), the results suggest that the survey design is more efficient than a simple random sample. For the values greater than 1 (`BTUNG_deff`, `BTULP_deff`, and `BTUWOOD_deff`), the results indicate that the survey design is less efficient than a simple random sample.
### 5\.9\.4 Creating summary rows
When using `group_by()` in analysis, the results are returned with a row for each group or combination of groups. Often, we want both breakdowns by group and a summary row for the estimate representing the entire population. For example, we may want the average electricity consumption by region and nationally. The {srvyr} package has the convenient `cascade()` function, which adds summary rows for the total of a group. It is used instead of `summarize()` and has similar functionalities along with some additional features.
#### Syntax
The syntax is as follows:
```
cascade(
.data,
...,
.fill = NA,
.fill_level_top = FALSE,
.groupings = NULL
)
```
where the arguments are:
* `.data`: A `tbl_svy` object
* `...`: Name\-value pairs of summary functions (same as the `summarize()` function)
* `.fill`: Value to fill in for group summaries (defaults to `NA`)
* `.fill_level_top`: When filling factor variables, whether to put the value of `.fill` in the first position (defaults to `FALSE`, which places it at the bottom)
#### Example
First, let’s look at an example where we calculate the average household electricity cost. Then, we build on it to examine the features of the `cascade()` function. In the first example below, we calculate the average household electricity cost `DOLLAREL_mn` using `survey_mean()` without modifying any of the argument defaults in the function:
```
recs_des %>%
cascade(DOLLAREL_mn = survey_mean(DOLLAREL))
```
```
## # A tibble: 1 × 2
## DOLLAREL_mn DOLLAREL_mn_se
## <dbl> <dbl>
## 1 1380. 5.38
```
Next, let’s group the results by region by adding `group_by()` before the `cascade()` function:
```
recs_des %>%
group_by(Region) %>%
cascade(DOLLAREL_mn = survey_mean(DOLLAREL))
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
## 5 <NA> 1380. 5.38
```
We can see the estimated average electricity bills by region: $1,343 for the Northeast, $1,548 for the South, and so on. The last row, where `Region = NA`, is the national average electricity bill, $1,380\. However, naming the national “region” as `NA` is not very informative. We can give it a better name using the `.fill` argument.
```
recs_des %>%
group_by(Region) %>%
cascade(
DOLLAREL_mn = survey_mean(DOLLAREL),
.fill = "National"
)
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 Northeast 1343. 14.6
## 2 Midwest 1293. 11.7
## 3 South 1548. 10.3
## 4 West 1211. 12.0
## 5 National 1380. 5.38
```
We can move the summary row to the first row by adding `.fill_level_top = TRUE` to `cascade()`:
```
recs_des %>%
group_by(Region) %>%
cascade(
DOLLAREL_mn = survey_mean(DOLLAREL),
.fill = "National",
.fill_level_top = TRUE
)
```
```
## # A tibble: 5 × 3
## Region DOLLAREL_mn DOLLAREL_mn_se
## <fct> <dbl> <dbl>
## 1 National 1380. 5.38
## 2 Northeast 1343. 14.6
## 3 Midwest 1293. 11.7
## 4 South 1548. 10.3
## 5 West 1211. 12.0
```
While the results remain the same, the table is now easier to interpret.
### 5\.9\.5 Calculating estimates for many outcomes
Often, we are interested in a summary statistic across many variables. Useful tools include the `across()` function in {dplyr}, shown a few times above, and the `map()` function in {purrr}.
The `across()` function applies the same function to multiple columns within `summarize()`. This works well with all functions shown above, except for `survey_prop()`. In a later example, we tackle summarizing multiple proportions.
#### Example 1: `across()`
Suppose we want to calculate the total and average consumption, along with coefficients of variation (CV), for each fuel type. These include the reported consumption of electricity (`BTUEL`), natural gas (`BTUNG`), liquid propane (`BTULP`), fuel oil (`BTUFO`), and wood (`BTUWOOD`), as mentioned in the section on design effects. We can take advantage of the fact that these are the only variables that start with “BTU” by selecting them with `starts_with("BTU")` in the `across()` function. To each selected column (`.x`), `across()` applies a list of two functions: `survey_total()` to calculate the total and `survey_mean()` to calculate the mean, each with its CV (`vartype = "cv"`). Finally, `.unpack = "{outer}.{inner}"` specifies that the resulting column names are a concatenation of the variable name, followed by Total or Mean, and then “coef” or “cv.”
```
consumption_ests <- recs_des %>%
summarize(across(
starts_with("BTU"),
list(
Total = ~ survey_total(.x, vartype = "cv"),
Mean = ~ survey_mean(.x, vartype = "cv")
),
.unpack = "{outer}.{inner}"
))
consumption_ests
```
```
## # A tibble: 1 × 20
## BTUEL_Total.coef BTUEL_Total._cv BTUEL_Mean.coef BTUEL_Mean._cv
## <dbl> <dbl> <dbl> <dbl>
## 1 4453284510065 0.00377 36051. 0.00377
## # ℹ 16 more variables: BTUNG_Total.coef <dbl>, BTUNG_Total._cv <dbl>,
## # BTUNG_Mean.coef <dbl>, BTUNG_Mean._cv <dbl>,
## # BTULP_Total.coef <dbl>, BTULP_Total._cv <dbl>,
## # BTULP_Mean.coef <dbl>, BTULP_Mean._cv <dbl>,
## # BTUFO_Total.coef <dbl>, BTUFO_Total._cv <dbl>,
## # BTUFO_Mean.coef <dbl>, BTUFO_Mean._cv <dbl>,
## # BTUWOOD_Total.coef <dbl>, BTUWOOD_Total._cv <dbl>, …
```
The estimated total consumption of electricity (`BTUEL`) is 4,453,284,510,065 (`BTUEL_Total.coef`), the estimated average consumption is 36,051 (`BTUEL_Mean.coef`), and the CV is 0\.0038\.
In the example above, the table was quite wide. We may prefer a row for each fuel type. The `pivot_longer()` and `pivot_wider()` functions from {tidyr} can help us achieve this. First, we use `pivot_longer()` to make each estimate its own row, changing the data to a “long” format. We use the `names_to` argument to specify the new column names: `FuelType`, `Stat`, and `Type`. Then, the `names_pattern` argument extracts the components of the original column names based on the regular expression pattern `BTU(.*)_(.*)\\.(.*)` and stores them in the columns defined by `names_to`.
```
consumption_ests_long <- consumption_ests %>%
pivot_longer(
cols = everything(),
names_to = c("FuelType", "Stat", "Type"),
names_pattern = "BTU(.*)_(.*)\\.(.*)"
)
consumption_ests_long
```
```
## # A tibble: 20 × 4
## FuelType Stat Type value
## <chr> <chr> <chr> <dbl>
## 1 EL Total coef 4453284510065
## 2 EL Total _cv 0.00377
## 3 EL Mean coef 36051.
## 4 EL Mean _cv 0.00377
## 5 NG Total coef 4240769382106.
## 6 NG Total _cv 0.00908
## 7 NG Mean coef 34330.
## 8 NG Mean _cv 0.00908
## 9 LP Total coef 391425311586.
## 10 LP Total _cv 0.0380
## 11 LP Mean coef 3169.
## 12 LP Mean _cv 0.0380
## 13 FO Total coef 395699976655.
## 14 FO Total _cv 0.0343
## 15 FO Mean coef 3203.
## 16 FO Mean _cv 0.0343
## 17 WOOD Total coef 345091088404.
## 18 WOOD Total _cv 0.0454
## 19 WOOD Mean coef 2794.
## 20 WOOD Mean _cv 0.0454
```
Then, we use `pivot_wider()` to create a table that is nearly ready for publication. Within the function, we can make the names for each element more descriptive and informative by gluing the `Stat` and `Type` together with `names_glue`. Further details on creating publication\-ready tables are covered in Chapter [8](c08-communicating-results.html#c08-communicating-results).
```
consumption_ests_long %>%
mutate(Type = case_when(
Type == "coef" ~ "",
Type == "_cv" ~ " (CV)"
)) %>%
pivot_wider(
id_cols = FuelType,
names_from = c(Stat, Type),
names_glue = "{Stat}{Type}",
values_from = value
)
```
```
## # A tibble: 5 × 5
## FuelType Total `Total (CV)` Mean `Mean (CV)`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 EL 4453284510065 0.00377 36051. 0.00377
## 2 NG 4240769382106. 0.00908 34330. 0.00908
## 3 LP 391425311586. 0.0380 3169. 0.0380
## 4 FO 395699976655. 0.0343 3203. 0.0343
## 5 WOOD 345091088404. 0.0454 2794. 0.0454
```
#### Example 2: Proportions with `across()`
As mentioned earlier, proportions do not work as well directly with the `across()` method. If we want the proportion of houses with A/C and the proportion of houses with heating, we require two separate `group_by()` statements as shown below:
```
recs_des %>%
group_by(ACUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 2 × 3
## ACUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.113 0.00306
## 2 TRUE 0.887 0.00306
```
```
recs_des %>%
group_by(SpaceHeatingUsed) %>%
summarize(p = survey_prop())
```
```
## # A tibble: 2 × 3
## SpaceHeatingUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.0469 0.00207
## 2 TRUE 0.953 0.00207
```
We estimate 88\.7% of households have A/C and 95\.3% have heating.
If we are only interested in the `TRUE` outcomes, that is, the proportion of households that have A/C and the proportion that have heating, we can simplify the code. Applying `survey_mean()` to a logical variable is the same as using `survey_prop()`, as shown below:
```
cool_heat_tab <- recs_des %>%
summarize(across(c(ACUsed, SpaceHeatingUsed), ~ survey_mean(.x),
.unpack = "{outer}.{inner}"
))
cool_heat_tab
```
```
## # A tibble: 1 × 4
## ACUsed.coef ACUsed._se SpaceHeatingUsed.coef SpaceHeatingUsed._se
## <dbl> <dbl> <dbl> <dbl>
## 1 0.887 0.00306 0.953 0.00207
```
Note that the estimates are the same as those obtained using the separate `group_by()` statements. As before, we can use `pivot_longer()` to structure the table in a more suitable format for distribution.
```
cool_heat_tab %>%
pivot_longer(everything(),
names_to = c("Comfort", ".value"),
names_pattern = "(.*)\\.(.*)"
) %>%
rename(
p = coef,
se = `_se`
)
```
```
## # A tibble: 2 × 3
## Comfort p se
## <chr> <dbl> <dbl>
## 1 ACUsed 0.887 0.00306
## 2 SpaceHeatingUsed 0.953 0.00207
```
#### Example 3: `purrr::map()`
Loops are a common tool when dealing with repetitive calculations. The {purrr} package provides the `map()` functions, which, like a loop, allow us to perform the same task across different elements ([Wickham and Henry 2023](#ref-R-purrr)). In our case, we may want to calculate proportions from the same design multiple times. A straightforward approach is to design the calculation for one variable, build a function based on that, and then apply it iteratively for the rest of the variables.
Suppose we want to create a table that shows the proportion of people who express trust in their government (`TrustGovernment`)[10](#fn10) as well as those that trust in people (`TrustPeople`)[11](#fn11) using data from the 2020 ANES.
First, we create a table for a single variable. The table includes the variable name as a column, the response, and the corresponding percentage with its standard error.
```
anes_des %>%
drop_na(TrustGovernment) %>%
group_by(TrustGovernment) %>%
summarize(p = survey_prop() * 100) %>%
mutate(Variable = "TrustGovernment") %>%
rename(Answer = TrustGovernment) %>%
select(Variable, everything())
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
```
We estimate that 1\.55% of people always trust the government, 13\.16% trust the government most of the time, and so on.
Now, we want to use the original series of steps as a template to create a general function `calcps()` that can apply the same steps to other variables. We replace `TrustGovernment` with an argument for a generic variable, `var`. Referring to `var` involves a bit of tidy evaluation, an advanced skill. To learn more, we recommend Wickham ([2019](#ref-wickham2019advanced)).
```
calcps <- function(var) {
anes_des %>%
drop_na(!!sym(var)) %>%
group_by(!!sym(var)) %>%
summarize(p = survey_prop() * 100) %>%
mutate(Variable = var) %>%
rename(Answer := !!sym(var)) %>%
select(Variable, everything())
}
```
We then apply this function to the two variables of interest, `TrustGovernment` and `TrustPeople`:
```
calcps("TrustGovernment")
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
```
```
calcps("TrustPeople")
```
```
## # A tibble: 5 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustPeople Always 0.809 0.164
## 2 TrustPeople Most of the time 41.4 0.857
## 3 TrustPeople About half the time 28.2 0.776
## 4 TrustPeople Some of the time 24.5 0.670
## 5 TrustPeople Never 5.05 0.422
```
Finally, we use `map()` to iterate over as many variables as needed. We feed our desired variables into `map()` along with our custom function, `calcps`. The output is a tibble with the variable names in the “Variable” column, the responses in the “Answer” column, along with the percentage and standard error. The `list_rbind()` function combines the rows into a single tibble. This example extends nicely when dealing with numerous variables for which we want percentage estimates.
```
c("TrustGovernment", "TrustPeople") %>%
map(calcps) %>%
list_rbind()
```
```
## # A tibble: 10 × 4
## Variable Answer p p_se
## <chr> <fct> <dbl> <dbl>
## 1 TrustGovernment Always 1.55 0.204
## 2 TrustGovernment Most of the time 13.2 0.553
## 3 TrustGovernment About half the time 30.9 0.829
## 4 TrustGovernment Some of the time 43.4 0.855
## 5 TrustGovernment Never 11.0 0.566
## 6 TrustPeople Always 0.809 0.164
## 7 TrustPeople Most of the time 41.4 0.857
## 8 TrustPeople About half the time 28.2 0.776
## 9 TrustPeople Some of the time 24.5 0.670
## 10 TrustPeople Never 5.05 0.422
```
In addition to our results above, we can also see the output for `TrustPeople`. While we estimate that 1\.55% of people always trust the government, 0\.81% always trust people.
Chapter 6 Statistical testing
=============================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
library(broom)
library(gt)
library(prettyunits)
```
We are using data from ANES and RECS described in Chapter [4](c04-getting-started.html#c04-getting-started). As a reminder, here is the code to create the design objects for each to use throughout this chapter. For ANES, we need to adjust the weight so it sums to the population instead of the sample (see the ANES documentation and Chapter [4](c04-getting-started.html#c04-getting-started) for more information).
```
targetpop <- 231592693
anes_adjwgt <- anes_2020 %>%
mutate(Weight = Weight / sum(Weight) * targetpop)
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = Stratum,
ids = VarUnit,
nest = TRUE
)
```
For RECS, details are included in the RECS documentation and Chapters [4](c04-getting-started.html#c04-getting-started) and [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE
)
```
6\.1 Introduction
-----------------
When analyzing survey results, the point estimates described in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis) help us understand the data at a high level. Still, we often want to make comparisons between different groups. These comparisons are calculated through statistical testing.
The general idea of statistical testing is the same for data obtained through surveys and data obtained through other methods, where we compare the point estimates and uncertainty estimates of each statistic to see if statistically significant differences exist. However, statistical testing for complex surveys involves additional considerations due to the need to account for the sampling design in order to obtain accurate uncertainty estimates.
Statistical testing, also called hypothesis testing, involves declaring a null and alternative hypothesis. A null hypothesis is denoted as \\(H\_0\\) and the alternative hypothesis is denoted as \\(H\_A\\). The null hypothesis is the default assumption in that there are no differences in the data, or that the data are operating under “standard” behaviors. On the other hand, the alternative hypothesis is the break from the “standard,” and we are trying to determine if the data support this alternative hypothesis.
Let’s review an example outside of survey data. If we are flipping a coin, a null hypothesis would be that the coin is fair and that each side has an equal chance of being flipped. In other words, the probability of the coin landing on each side is 1/2, whereas an alternative hypothesis could be that the coin is unfair and that one side has a higher probability of being flipped (e.g., a probability of 1/4 to get heads but a probability of 3/4 to get tails). We write this set of hypotheses as:
* \\(H\_0: \\rho\_{heads} \= \\rho\_{tails}\\), where \\(\\rho\_{x}\\) is the probability of flipping the coin and having it land on heads (\\(\\rho\_{heads}\\)) or tails (\\(\\rho\_{tails}\\))
* \\(H\_A: \\rho\_{heads} \\neq \\rho\_{tails}\\)
When we conduct hypothesis testing, the statistical models calculate a p\-value, which shows how likely we are to observe the data if the null hypothesis is true. If the p\-value (a probability between 0 and 1\) is small, we have strong evidence to reject the null hypothesis, as it is unlikely to see the data we observe if the null hypothesis is true. However, if the p\-value is large, we say we do not have evidence to reject the null hypothesis. The cut\-off that determines whether a p\-value counts as “small” is the Type 1 error rate, known as \\(\\alpha\\). A common choice for statistical testing is \\(\\alpha \= 0\.05\\)[12](#fn12). Explanations of statistical testing often refer to the confidence level, which is the complement of the Type 1 error rate. Thus, if \\(\\alpha \= 0\.05\\), the confidence level would be 95%.
The functions in the {survey} package allow for the correct estimation of the uncertainty estimates (e.g., standard deviations and confidence intervals). This chapter covers the following statistical tests with survey data and the following functions from the {survey} package ([Lumley 2010](#ref-lumley2010complex)):
* Comparison of proportions (`svyttest()`)
* Comparison of means (`svyttest()`)
* Goodness\-of\-fit tests (`svygofchisq()`)
* Tests of independence (`svychisq()`)
* Tests of homogeneity (`svychisq()`)
6\.2 Dot notation
-----------------
Up to this point, we have shown functions that use wrappers from the {srvyr} package. This means that the functions work with tidyverse syntax. However, the functions in this chapter do not have wrappers in the {srvyr} package and are instead used directly from the {survey} package. Therefore, the design object is not the first argument, and to use these functions with the magrittr pipe (`%>%`) and tidyverse syntax, we need to use dot (`.`) notation[13](#fn13).
Functions that work with the magrittr pipe (`%>%`) have the dataset as the first argument. When we run a function with the pipe, it automatically places anything to the left of the pipe into the first argument of the function to the right of the pipe. For example, if we wanted to take the `towny` data from the {gt} package and filter to municipalities with the Census Subdivision Type of “city,” we can write the code in at least four different ways:
1. `filter(towny, csd_type == "city")`
2. `towny %>% filter(csd_type == "city")`
3. `towny %>% filter(., csd_type == "city")`
4. `towny %>% filter(.data = ., csd_type == "city")`
Each of these lines of code produces the same output since the argument that takes the dataset is in the first spot in `filter()`. The first two are probably familiar to those who have worked with the tidyverse. The third option functions the same way as the second one but is explicit that `towny` goes into the first argument, and the fourth option indicates that `towny` is going into the named argument of `.data`. Here, we are telling R to take what is on the left side of the pipe (`towny`) and pipe it into the spot with the dot (`.`) — the first argument.
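As a quick check, we can confirm that all four spellings return the same result. This is a minimal sketch, assuming {dplyr} and {gt} (which supplies the `towny` data) are loaded:
```
library(dplyr)
library(gt)

v1 <- filter(towny, csd_type == "city")
v2 <- towny %>% filter(csd_type == "city")
v3 <- towny %>% filter(., csd_type == "city")
v4 <- towny %>% filter(.data = ., csd_type == "city")

# Each comparison should return TRUE
identical(v1, v2)
identical(v1, v3)
identical(v1, v4)
```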
In functions that are not part of the tidyverse, the data argument may not be in the first spot. For example, in `svyttest()`, the data argument is in the second spot, which means we need to place the dot (`.`) in the second spot and not the first. For example:
```
svydata_des %>%
svyttest(x ~ y, .)
```
By default, the pipe places the left\-hand object in the first argument spot. Placing the dot (`.`) in the second argument spot indicates that the survey design object `svydata_des` should be used in the second argument and not the first.
Alternatively, named arguments could be used to place the dot first, as named arguments can appear at any location as in the following:
```
svydata_des %>%
svyttest(design = ., x ~ y)
```
However, the following code does not work as the `svyttest()` function expects the formula as the first argument when arguments are not named:
```
svydata_des %>%
svyttest(., x ~ y)
```
6\.3 Comparison of proportions and means
----------------------------------------
We use t\-tests to compare two proportions or means. T\-tests allow us to determine if one proportion or mean is statistically different from another. They are commonly used to determine if a single estimate differs from a known value (e.g., 0 or 50%) or to compare two group means (e.g., North versus South). Comparing a single estimate to a known value is called a one\-sample t\-test, and we can set up the hypothesis test as follows:
* \\(H\_0: \\mu \= 0\\) where \\(\\mu\\) is the mean outcome and \\(0\\) is the value we are comparing it to
* \\(H\_A: \\mu \\neq 0\\)
For comparing two estimates, this is called a two\-sample t\-test. We can set up the hypothesis test as follows:
* \\(H\_0: \\mu\_1 \= \\mu\_2\\) where \\(\\mu\_i\\) is the mean outcome for group \\(i\\)
* \\(H\_A: \\mu\_1 \\neq \\mu\_2\\)
Two\-sample t\-tests can also be paired or unpaired. If the data come from two different populations (e.g., North versus South), the t\-test run is an unpaired or independent samples t\-test. Paired t\-tests occur when the data come from the same population. This is commonly seen with data from the same population in two different time periods (e.g., before and after an intervention).
The difference between t\-tests with non\-survey data and survey data is based on the underlying variance estimation difference. Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) provides a detailed overview of the math behind the mean and sampling error calculations for various sample designs. The functions in the {survey} package account for these nuances, provided the design object is correctly defined.
### 6\.3\.1 Syntax
When we do not have survey data, we can use the `t.test()` function from the {stats} package to run t\-tests. This function does not allow for weights or the variance structure that need to be accounted for with survey data. Therefore, we need to use the `svyttest()` function from {survey} when using survey data. Many of the arguments are the same between the two functions, but there are a few key differences:
* We need to use the survey design object instead of the original data frame
* We can only use a formula and not separate x and y data
* The confidence level cannot be specified and is always set to 95%. However, we show examples of how the confidence level can be changed after running the `svyttest()` function by using the `confint()` function.
Here is the syntax for the `svyttest()` function:
```
svyttest(formula,
design,
...)
```
The arguments are:
* `formula`: Formula, `outcome~group` for two\-sample, `outcome~0` or `outcome~1` for one\-sample. The group variable must be a factor or character with two levels, or be coded 0/1 or 1/2\. We give more details on formula set\-up below for different types of tests.
* `design`: survey design object
* `...`: This passes options on for one\-sided tests only, and thus, we can specify `na.rm=TRUE`
Notice that the first argument here is the `formula` and not the `design`. This means we must use the dot `(.)` if we pipe in the survey design object (as described in Section [6\.2](c06-statistical-testing.html#dot-notation)).
The `formula` argument can take several different forms depending on what we are measuring. Here are a few common scenarios:
1. One\-sample t\-test:
1. Comparison to 0: `var ~ 0`, where `var` is the measure of interest, and we compare it to the value `0`. For example, we could test if the population mean of household debt is different from `0` given the sample data collected.
2. Comparison to a different value: `var - value ~ 0`, where `var` is the measure of interest and `value` is what we are comparing to. For example, we could test if the proportion of the population that has blue eyes is different from `25%` by using `var - 0.25 ~ 0`. Note that specifying the formula as `var ~ 0.25` is not equivalent and results in a syntax error.
2. Two\-sample t\-test:
1. Unpaired:
* 2 level grouping variable: `var ~ groupVar`, where `var` is the measure of interest and `groupVar` is a variable with two categories. For example, we could test if the average age of the population who voted for president in 2020 differed from the age of people who did not vote. In this case, age would be used for `var`, and a binary variable indicating voting activity would be the `groupVar`.
* 3\+ level grouping variable: `var ~ groupVar == level`, where `var` is the measure of interest, `groupVar` is the categorical variable, and `level` is the category level to isolate. For example, we could test if the test scores in one classroom differed from those in all other classrooms, where `groupVar` would be the variable holding the classroom IDs and `level` would be the classroom ID we want to compare to the others (see the sketch after this list).
2. Paired: `var_1 - var_2 ~ 0`, where `var_1` is the first variable of interest and `var_2` is the second variable of interest. For example, we could test if test scores on a subject differed between the start and the end of a course, so `var_1` would be the test score at the beginning of the course, and `var_2` would be the score at the end of the course.
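To make these formula templates more concrete, here is a brief sketch of the 3\+ level grouping form using the RECS design object. It is illustrative only (it reuses the `DOLLAREL` and `Region` variables from `recs_des` seen earlier) and compares the South to all other regions combined:
```
# Sketch: does average electricity cost in the South differ from
# the other regions combined? (Region == "South" creates a two-level grouping)
recs_des %>%
  svyttest(
    formula = DOLLAREL ~ Region == "South",
    design = .,
    na.rm = TRUE
  )
```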
The `na.rm` argument defaults to `FALSE`, which means if any data values are missing, the t\-test does not compute. Throughout this chapter, we always set `na.rm = TRUE`, but before analyzing the survey data, review the notes provided in Chapter [11](c11-missing-data.html#c11-missing-data) to better understand how to handle missing data.
Let’s walk through a few examples using the RECS data.
### 6\.3\.2 Examples
#### Example 1: One\-sample t\-test for mean
RECS asks respondents to indicate what temperature they set their house to during the summer at night[14](#fn14). In our data, we have called this variable `SummerTempNight`. If we want to see if the average U.S. household sets its temperature at a value different from 68\\(^\\circ\\)F[15](#fn15), we could set up the hypothesis as follows:
* \\(H\_0: \\mu \= 68\\) where \\(\\mu\\) is the average temperature U.S. households set their thermostat to in the summer at night
* \\(H\_A: \\mu \\neq 68\\)
To conduct this in R, we use `svyttest()` and subtract the temperature on the left\-hand side of the formula:
```
ttest_ex1 <- recs_des %>%
svyttest(
formula = SummerTempNight - 68 ~ 0,
design = .,
na.rm = TRUE
)
ttest_ex1
```
```
##
## Design-based one-sample t-test
##
## data: SummerTempNight - 68 ~ 0
## t = 85, df = 58, p-value <2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 3.288 3.447
## sample estimates:
## mean
## 3.367
```
To pull out specific output, we can use R’s built\-in `$` operator. For instance, to obtain the estimate \\(\\mu \- 68\\), we run `ttest_ex1$estimate`.
If we want the average, we take our t\-test estimate and add it to 68:
```
ttest_ex1$estimate + 68
```
```
## mean
## 71.37
```
Or, we can use the `survey_mean()` function described in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis):
```
recs_des %>%
summarize(mu = survey_mean(SummerTempNight, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## mu mu_se
## <dbl> <dbl>
## 1 71.4 0.0397
```
The result is the same in both methods, so we see that the average temperature U.S. households set their thermostat to in the summer at night is 71\.4\\(^\\circ\\)F. Looking at the output from `svyttest()`, the t\-statistic is 84\.8, and the p\-value is \<0\.0001, indicating that the average is statistically different from 68\\(^\\circ\\)F at an \\(\\alpha\\) level of \\(0\.05\\).
If we want an 80% confidence interval for the test statistic, we can use the function `confint()` to change the confidence level. Below, we print the default confidence interval (95%), the confidence interval explicitly specifying the level as 95%, and the 80% confidence interval. When the confidence level is 95% either by default or explicitly, R returns a vector with both row and column names. However, when we specify any other confidence level, an unnamed vector is returned, with the first element being the lower bound and the second element being the upper bound of the confidence interval.
```
confint(ttest_ex1)
```
```
## 2.5 % 97.5 %
## as.numeric(SummerTempNight - 68) 3.288 3.447
## attr(,"conf.level")
## [1] 0.95
```
```
confint(ttest_ex1, level = 0.95)
```
```
## 2.5 % 97.5 %
## as.numeric(SummerTempNight - 68) 3.288 3.447
## attr(,"conf.level")
## [1] 0.95
```
```
confint(ttest_ex1, level = 0.8)
```
```
## [1] 3.316 3.419
## attr(,"conf.level")
## [1] 0.8
```
In this case, neither confidence interval contains 0, and we draw the same conclusion from either one: the average temperature households set their thermostat to on summer nights is significantly higher than 68\\(^\\circ\\)F.
#### Example 2: One\-sample t\-test for proportion
RECS asked respondents if they use air conditioning (A/C) in their home[16](#fn16). In our data, we call this variable `ACUsed`. Let’s look at the proportion of U.S. households that use A/C in their homes using the `survey_prop()` function we learned in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis).
```
acprop <- recs_des %>%
group_by(ACUsed) %>%
summarize(p = survey_prop())
acprop
```
```
## # A tibble: 2 × 3
## ACUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.113 0.00306
## 2 TRUE 0.887 0.00306
```
Based on this, 88\.7% of U.S. households use A/C in their homes. If we wanted to know if this differs from 90%, we could set up our hypothesis as follows:
* \\(H\_0: p \= 0\.90\\) where \\(p\\) is the proportion of U.S. households that use A/C in their homes
* \\(H\_A: p \\neq 0\.90\\)
To conduct this in R, we use the `svyttest()` function as follows:
```
ttest_ex2 <- recs_des %>%
svyttest(
formula = (ACUsed == TRUE) - 0.90 ~ 0,
design = .,
na.rm = TRUE
)
ttest_ex2
```
```
##
## Design-based one-sample t-test
##
## data: (ACUsed == TRUE) - 0.9 ~ 0
## t = -4.4, df = 58, p-value = 5e-05
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.019603 -0.007348
## sample estimates:
## mean
## -0.01348
```
The output from the `svyttest()` function can be a bit hard to read. Using the `tidy()` function from the {broom} package, we can clean up the output into a tibble to more easily understand what the test tells us ([Robinson, Hayes, and Couch 2023](#ref-R-broom)).
```
tidy(ttest_ex2)
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 -0.0135 -4.40 0.0000466 58 -0.0196 -0.00735 Design-base…
## # ℹ 1 more variable: alternative <chr>
```
The ‘tidied’ output can also be piped into the {gt} package to create a table ready for publication (see Table [6\.1](c06-statistical-testing.html#tab:stattest-ttest-ex2-gt-tab)). We go over the {gt} package in Chapter [8](c08-communicating-results.html#c08-communicating-results). The function `pretty_p_value()` comes from the {prettyunits} package and converts numeric p\-values to characters and, by default, prints four decimal places and displays any p\-value less than 0\.0001 as `"<0.0001"`, though another minimum display p\-value can be specified ([Csardi 2023](#ref-R-prettyunits)).
```
tidy(ttest_ex2) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.1: One\-sample t\-test output for estimates of U.S. households use A/C in their homes differing from 90%, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| −0\.01 | −4\.40 | \<0\.0001 | 58\.00 | −0\.02 | −0\.01 | Design\-based one\-sample t\-test | two.sided |
As in Example 1, the estimate is the quantity on the left\-hand side of the formula: \\(p \- 0\.90\\), the difference between the proportion of U.S. households that use A/C and our comparison proportion. We can see that there is a difference of −1\.35 percentage points. Additionally, the t\-statistic value in the `statistic` column is −4\.4, and the p\-value is \<0\.0001\. These results indicate that fewer than 90% of U.S. households use A/C in their homes.
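If we want the estimated proportion itself rather than the difference, we can add the comparison value back to the estimate, mirroring how we added 68 back in Example 1; a quick sketch using the `ttest_ex2` object from above:
```
# Recover the estimated proportion of U.S. households that use A/C
ttest_ex2$estimate + 0.90
```
This returns approximately 0\.887, matching the `survey_prop()` output above.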
#### Example 3: Unpaired two\-sample t\-test
In addition to `ACUsed`, another variable in the RECS data is a household’s total electric cost in dollars (`DOLLAREL`). To see if U.S. households with A/C had higher electrical bills than those without, we can set up the hypothesis as follows:
* \\(H\_0: \\mu\_{AC} \= \\mu\_{noAC}\\) where \\(\\mu\_{AC}\\) is the electrical bill cost for U.S. households that used A/C, and \\(\\mu\_{noAC}\\) is the electrical bill cost for U.S. households that did not use A/C
* \\(H\_A: \\mu\_{AC} \\neq \\mu\_{noAC}\\)
Let’s take a quick look at the data to see how they are formatted:
```
recs_des %>%
group_by(ACUsed) %>%
summarize(mean = survey_mean(DOLLAREL, na.rm = TRUE))
```
```
## # A tibble: 2 × 3
## ACUsed mean mean_se
## <lgl> <dbl> <dbl>
## 1 FALSE 1056. 16.0
## 2 TRUE 1422. 5.69
```
To conduct this in R, we use `svyttest()`:
```
ttest_ex3 <- recs_des %>%
svyttest(
formula = DOLLAREL ~ ACUsed,
design = .,
na.rm = TRUE
)
```
```
tidy(ttest_ex3) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.2: Unpaired two\-sample t\-test output for estimates of U.S. households electrical bills by A/C use, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 365\.72 | 21\.29 | \<0\.0001 | 58\.00 | 331\.33 | 400\.11 | Design\-based t\-test | two.sided |
The results in Table [6\.2](c06-statistical-testing.html#tab:stattest-ttest-ex3-gt-tab) indicate that the difference in electrical bills for those who used A/C and those who did not is, on average, $365\.72\. The difference appears to be statistically significant as the t\-statistic is 21\.3 and the p\-value is \<0\.0001\. Households that used A/C spent, on average, $365\.72 more in 2020 on electricity than households without A/C.
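As in Example 1, we can also obtain a confidence interval at a different level directly from the test object with `confint()`; a quick sketch, assuming the `ttest_ex3` object created above:
```
# 80% confidence interval for the difference in mean electrical bills
confint(ttest_ex3, level = 0.8)
```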
#### Example 4: Paired two\-sample t\-test
Let’s say we want to test whether the temperature at which U.S. households set their thermostat at night differs depending on the season (comparing summer and winter[17](#fn17) temperatures). We could set up the hypothesis as follows:
* \\(H\_0: \\mu\_{summer} \= \\mu\_{winter}\\) where \\(\\mu\_{summer}\\) is the temperature that U.S. households set their thermostat to during summer nights, and \\(\\mu\_{winter}\\) is the temperature that U.S. households set their thermostat to during winter nights
* \\(H\_A: \\mu\_{summer} \\neq \\mu\_{winter}\\)
To conduct this in R, we use `svyttest()` by calculating the temperature difference on the left\-hand side as follows:
```
ttest_ex4 <- recs_des %>%
svyttest(
design = .,
formula = SummerTempNight - WinterTempNight ~ 0,
na.rm = TRUE
)
```
```
tidy(ttest_ex4) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.3: Paired two\-sample t\-test output for estimates of U.S. households thermostat temperature by season, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2\.85 | 50\.83 | \<0\.0001 | 58\.00 | 2\.74 | 2\.96 | Design\-based one\-sample t\-test | two.sided |
The results displayed in Table [6\.3](c06-statistical-testing.html#tab:stattest-ttest-ex4-gt-tab) indicate that U.S. households set their thermostats, on average, 2\.9\\(^\\circ\\)F warmer on summer nights than on winter nights, which is statistically significant (t \= 50\.8, p\-value \<0\.0001\).
6\.4 Chi\-squared tests
-----------------------
Chi\-squared tests (\\(\\chi^2\\)) allow us to examine multiple proportions using a goodness\-of\-fit test, a test of independence, or a test of homogeneity. These three tests have the same \\(\\chi^2\\) distributions but with slightly different underlying assumptions.
First, goodness\-of\-fit tests are used when comparing observed data to expected data. For example, this could be used to determine if respondent demographics (the observed data in the sample) match known population information (the expected data). In this case, we can set up the hypothesis test as follows:
* \\(H\_0: p\_1 \= \\pi\_1, \~ p\_2 \= \\pi\_2, \~ ..., \~ p\_k \= \\pi\_k\\) where \\(p\_i\\) is the observed proportion for category \\(i\\), \\(\\pi\_i\\) is the expected proportion for category \\(i\\), and \\(k\\) is the number of categories
* \\(H\_A:\\) at least one level of \\(p\_i\\) does not match \\(\\pi\_i\\)
Second, tests of independence are used when comparing two types of observed data to see if there is a relationship. For example, this could be used to determine if the proportion of respondents who voted for each political party in the presidential election matches the proportion of respondents who voted for each political party in a local election. In this case, we can set up the hypothesis test as follows:
* \\(H\_0:\\) The two variables/factors are independent
* \\(H\_A:\\) The two variables/factors are not independent
Third, tests of homogeneity are used to compare two distributions to see if they match. For example, this could be used to determine if the highest education achieved is the same for both men and women. In this case, we can set up the hypothesis test as follows:
* \\(H\_0: p\_{1a} \= p\_{1b}, \~ p\_{2a} \= p\_{2b}, \~ ..., \~ p\_{ka} \= p\_{kb}\\) where \\(p\_{ia}\\) is the observed proportion of category \\(i\\) for subgroup \\(a\\), \\(p\_{ib}\\) is the observed proportion of category \\(i\\) for subgroup \\(b\\), and \\(k\\) is the number of categories
* \\(H\_A:\\) at least one category of \\(p\_{ia}\\) does not match \\(p\_{ib}\\)
As with t\-tests, the difference between using \\(\\chi^2\\) tests with non\-survey data and survey data is based on the underlying variance estimation. The functions in the {survey} package account for these nuances, provided the design object is correctly defined. For basic variance estimation formulas for different survey design types, refer to Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
### 6\.4\.1 Syntax
When we do not have survey data, we may be able to use the `chisq.test()` function from the {stats} package in base R to run chi\-squared tests ([R Core Team 2024](#ref-R-base)). However, this function does not allow for weights or the variance structure to be accounted for with survey data. Therefore, when using survey data, we need to use one of two functions:
* `svygofchisq()`: For goodness\-of\-fit tests
* `svychisq()`: For tests of independence and homogeneity
The non\-survey data function of `chisq.test()` requires either a single set of counts and given proportions (for goodness\-of\-fit tests) or two sets of counts for tests of independence and homogeneity. The functions we use with survey data require respondent\-level data and formulas instead of counts. This ensures that the variances are correctly calculated.
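For comparison, here is a minimal sketch of `chisq.test()` with non\-survey data, using hypothetical counts for illustration only:
```
# Goodness-of-fit: a vector of observed counts and expected proportions
chisq.test(x = c(30, 70), p = c(0.25, 0.75))

# Independence/homogeneity: a table (matrix) of observed counts
chisq.test(matrix(c(20, 30, 25, 25), nrow = 2))
```
The survey versions below replace these counts with respondent\-level data, a formula, and a design object.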
First, the function for the goodness\-of\-fit tests is `svygofchisq()`:
```
svygofchisq(formula,
p,
design,
na.rm = TRUE,
...)
```
The arguments are:
* `formula`: Formula specifying a single factor variable
* `p`: Vector of probabilities for the categories of the factor in the correct order. If the probabilities do not sum to 1, they are rescaled to sum to 1\.
* `design`: Survey design object
* `...`: Other arguments to pass on, such as `na.rm`
Based on the order of the arguments, we again must use the dot `(.)` notation if we pipe in the survey design object or explicitly name the arguments as described in Section [6\.2](c06-statistical-testing.html#dot-notation). For the goodness\-of\-fit tests, the formula is a single variable `formula = ~var` as we compare the observed data from this variable to the expected data. The expected probabilities are then entered in the `p` argument and need to be a vector of the same length as the number of categories in the variable. For example, if we want to know if the proportion of males and females matches a distribution of 30/70, then the sex variable (with two categories) would be used `formula = ~SEX`, and the proportions would be included as `p = c(.3, .7)`. It is important to note that the variable entered into the formula should be formatted as either a factor or a character. The examples below provide more detail and tips on how to make sure the levels match up correctly.
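As a concrete illustration of this setup, here is a minimal, self\-contained sketch of the 30/70 sex example. The data, weights, and design object (`dat`, `des`) are hypothetical and constructed only to show the call pattern; the examples below use real survey design objects.
```
library(survey)

# Hypothetical respondent-level data and a simple design, for illustration only
set.seed(1)
dat <- data.frame(
  SEX = factor(sample(c("Male", "Female"), size = 200, replace = TRUE,
                      prob = c(0.35, 0.65))),
  wt = runif(200, 0.5, 1.5)
)
des <- svydesign(ids = ~1, weights = ~wt, data = dat)

# Test whether the weighted sex distribution matches a 30/70 split;
# p follows the order of the factor levels (here "Female", then "Male")
svygofchisq(formula = ~SEX, p = c(0.7, 0.3), design = des, na.rm = TRUE)
```
With the pipe, the same call could be written as `des %>% svygofchisq(formula = ~SEX, p = c(0.7, 0.3), design = ., na.rm = TRUE)`, using the dot notation from Section [6\.2](c06-statistical-testing.html#dot-notation).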
For tests of homogeneity and independence, the `svychisq()` function should be used. The syntax is as follows:
```
svychisq(
formula,
design,
statistic = c("F", "Chisq", "Wald", "adjWald",
"lincom", "saddlepoint"),
na.rm = TRUE
)
```
The arguments are:
* `formula`: Model formula specifying the table (shown in examples)
* `design`: Survey design object
* `statistic`: Type of test statistic to use in test (details below)
* `na.rm`: Remove missing values
There are six statistics that the `statistic` argument accepts. For tests of homogeneity (when comparing cross\-tabulations), the `F` or `Chisq` statistics should be used[18](#fn18). The `F` statistic is the default and uses the Rao\-Scott second\-order correction. This correction is designed to assist with complicated sampling designs (i.e., those other than a simple random sample) ([Scott 2007](#ref-Scott2007)). The `Chisq` statistic is an adjusted version of the Pearson \\(\\chi^2\\) statistic. The version of this statistic in the `svychisq()` function compares the design effect estimate from the provided survey data to what the \\(\\chi^2\\) distribution would have been if the data had come from a simple random sample.
For tests of independence, the `Wald` and `adjWald` are recommended as they provide a better adjustment for variable comparisons ([Lumley 2010](#ref-lumley2010complex)). If the data have a small number of primary sampling units (PSUs) compared to the degrees of freedom, then the `adjWald` statistic should be used to account for this. The `lincom` and `saddlepoint` statistics are available for more complicated data structures.
The formula argument is always one\-sided, unlike the `svyttest()` function. The two variables of interest should be included with a plus sign: `formula = ~ var_1 + var_2`. As with the `svygofchisq()` function, the variables entered into the formula should be formatted as either a factor or a character.
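For instance, if the data have a small number of PSUs relative to the degrees of freedom, we could request the adjusted Wald statistic instead of the default; a minimal sketch, assuming the `anes_des` design object and the trust variables used in Example 2 below:
```
anes_des %>%
  svychisq(
    formula = ~ TrustGovernment + TrustPeople,
    design = .,
    statistic = "adjWald",
    na.rm = TRUE
  )
```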
Additionally, as with the t\-test function, both `svygofchisq()` and `svychisq()` have the `na.rm` argument. If any data values are missing, the \\(\\chi^2\\) tests assume that `NA` is a category and include it in the calculation. Throughout this chapter, we always set `na.rm = TRUE`, but before analyzing the survey data, review the notes provided in Chapter [11](c11-missing-data.html#c11-missing-data) to better understand how to handle missing data.
### 6\.4\.2 Examples
Let’s walk through a few examples using the ANES data.
#### Example 1: Goodness\-of\-fit test
ANES asked respondents about their highest education level[19](#fn19). Based on the data from the 2020 American Community Survey (ACS) 5\-year estimates[20](#fn20), the education distribution of those aged 18\+ in the United States (among the 50 states and the District of Columbia) is as follows:
* 11% had less than a high school degree
* 27% had a high school degree
* 29% had some college or an associate’s degree
* 33% had a bachelor’s degree or higher
If we want to see if the weighted distribution from the ANES 2020 data matches this distribution, we could set up the hypothesis as follows:
* \\(H\_0: p\_1 \= 0\.11, \~ p\_2 \= 0\.27, \~ p\_3 \= 0\.29, \~ p\_4 \= 0\.33\\)
* \\(H\_A:\\) at least one of the education levels does not match between the ANES and the ACS
To conduct this in R, let’s first look at the education variable (`Education`) we have on the ANES data. Using the `survey_mean()` function discussed in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis), we can see the education levels and estimated proportions.
```
anes_des %>%
drop_na(Education) %>%
group_by(Education) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 5 × 3
## Education p p_se
## <fct> <dbl> <dbl>
## 1 Less than HS 0.0805 0.00568
## 2 High school 0.277 0.0102
## 3 Post HS 0.290 0.00713
## 4 Bachelor's 0.226 0.00633
## 5 Graduate 0.126 0.00499
```
Based on this output, we can see that we have different levels from the ACS data. Specifically, the education data from ANES include two levels for bachelor’s degree or higher (bachelor’s and graduate), so these two categories need to be collapsed into a single category to match the ACS data. For this, among other methods, we can use the {forcats} package from the tidyverse ([Wickham 2023](#ref-R-forcats)). The package’s `fct_collapse()` function helps us create a new variable by collapsing categories into a single one. Then, we use the `svygofchisq()` function to compare the ANES data to the ACS data, where we specify the updated design object, the formula using the collapsed education variable, and the ACS estimates for education levels as `p`, and we set `na.rm = TRUE` to remove `NA` values.
```
anes_des_educ <- anes_des %>%
mutate(
Education2 =
fct_collapse(Education,
"Bachelor or Higher" = c(
"Bachelor's",
"Graduate"
)
)
)
anes_des_educ %>%
drop_na(Education2) %>%
group_by(Education2) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 4 × 3
## Education2 p p_se
## <fct> <dbl> <dbl>
## 1 Less than HS 0.0805 0.00568
## 2 High school 0.277 0.0102
## 3 Post HS 0.290 0.00713
## 4 Bachelor or Higher 0.352 0.00732
```
```
chi_ex1 <- anes_des_educ %>%
svygofchisq(
formula = ~Education2,
p = c(0.11, 0.27, 0.29, 0.33),
design = .,
na.rm = TRUE
)
chi_ex1
```
```
##
## Design-based chi-squared test for given probabilities
##
## data: ~Education2
## X-squared = 2172220, scale = 1.1e+05, df = 2.3e+00, p-value =
## 9e-05
```
The output from the `svygofchisq()` indicates that at least one proportion from ANES does not match the ACS data (\\(\\chi^2 \=\\) 2,172,220; p\-value is \<0\.0001\). To get a better idea of the differences, we can use the `expected` output along with `survey_mean()` to create a comparison table:
```
ex1_table <- anes_des_educ %>%
drop_na(Education2) %>%
group_by(Education2) %>%
summarize(Observed = survey_mean(vartype = "ci")) %>%
rename(Education = Education2) %>%
mutate(Expected = c(0.11, 0.27, 0.29, 0.33)) %>%
select(Education, Expected, everything())
ex1_table
```
```
## # A tibble: 4 × 5
## Education Expected Observed Observed_low Observed_upp
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Less than HS 0.11 0.0805 0.0691 0.0919
## 2 High school 0.27 0.277 0.257 0.298
## 3 Post HS 0.29 0.290 0.276 0.305
## 4 Bachelor or Higher 0.33 0.352 0.337 0.367
```
This output includes our expected proportions from the ACS that we provided the `svygofchisq()` function along with the output of the observed proportions and their confidence intervals. This table shows that the “high school” and “post HS” categories have nearly identical proportions, but that the other two categories are slightly different. Looking at the confidence intervals, we can see that the ANES data skew to include fewer people in the “less than HS” category and more people in the “bachelor or higher” category. This may be easier to see if we plot this. The code below uses the tabular output to create Figure [6\.1](c06-statistical-testing.html#fig:stattest-chi-ex1-graph).
```
ex1_table %>%
pivot_longer(
cols = c("Expected", "Observed"),
names_to = "Names",
values_to = "Proportion"
) %>%
mutate(
Observed_low = if_else(Names == "Observed", Observed_low, NA_real_),
Observed_upp = if_else(Names == "Observed", Observed_upp, NA_real_),
Names = if_else(Names == "Observed",
"ANES (observed)", "ACS (expected)"
)
) %>%
ggplot(aes(x = Education, y = Proportion, color = Names)) +
geom_point(alpha = 0.75, size = 2) +
geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp),
width = 0.25
) +
theme_bw() +
scale_color_manual(name = "Type", values = book_colors[c(4, 1)]) +
theme(legend.position = "bottom", legend.title = element_blank())
```
FIGURE 6\.1: Expected and observed proportions of education with confidence intervals
#### Example 2: Test of independence
ANES asked respondents two questions about trust:
* Question text: “How often can you trust the federal government to do what is right?” ([American National Election Studies 2021](#ref-anes-svy))
* Question text: “How often can you trust other people?” ([American National Election Studies 2021](#ref-anes-svy))
If we want to see if the distributions of these two questions are similar or not, we can conduct a test of independence. Here is how the hypothesis could be set up:
* \\(H\_0:\\) People’s trust in the federal government and their trust in other people are independent (i.e., not related)
* \\(H\_A:\\) People’s trust in the federal government and their trust in other people are not independent (i.e., they are related)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex2 <- anes_des %>%
svychisq(
formula = ~ TrustGovernment + TrustPeople,
design = .,
statistic = "Wald",
na.rm = TRUE
)
chi_ex2
```
```
##
## Design-based Wald test of association
##
## data: NextMethod()
## F = 21, ndf = 16, ddf = 51, p-value <2e-16
```
The output from `svychisq()` indicates that people’s trust in the federal government and their trust in other people are not independent, meaning that they are related. Let’s output the distributions in a table to see the relationship. The `observed` output from the test provides a cross\-tabulation of the counts for each category:
```
chi_ex2$observed
```
```
## TrustPeople
## TrustGovernment Always Most of the time About half the time
## Always 16.470 25.009 31.848
## Most of the time 11.020 539.377 196.258
## About half the time 11.772 934.858 861.971
## Some of the time 17.007 1353.779 839.863
## Never 3.174 236.785 174.272
## TrustPeople
## TrustGovernment Some of the time Never
## Always 36.854 5.523
## Most of the time 206.556 27.184
## About half the time 428.871 65.024
## Some of the time 932.628 89.596
## Never 217.994 189.307
```
However, we often want to know about the proportions, not just the respondent counts from the survey. There are a couple of different ways that we can do this. The first is using the counts from `chi_ex2$observed` to calculate the proportion. We can then pivot the table to create a cross\-tabulation similar to the counts table above. Adding `group_by()` to the code means that we obtain the proportions within each variable level. In this case, we are looking at the distribution of `TrustGovernment` for each level of `TrustPeople`. The resulting table is shown in Table [6\.4](c06-statistical-testing.html#tab:stattest-chi-ex2-prop1-tab).
```
chi_ex2_table <- chi_ex2$observed %>%
as_tibble() %>%
group_by(TrustPeople) %>%
mutate(prop = round(n / sum(n), 3)) %>%
select(-n) %>%
pivot_wider(names_from = TrustPeople, values_from = prop) %>%
gt(rowname_col = "TrustGovernment") %>%
tab_stubhead(label = "Trust in Government") %>%
tab_spanner(
label = "Trust in People",
columns = everything()
) %>%
cols_label(
`Most of the time` = md("Most of<br />the time"),
`About half the time` = md("About half<br />the time"),
`Some of the time` = md("Some of<br />the time")
)
```
```
chi_ex2_table
```
TABLE 6\.4: Proportion of adults in the U.S. by levels of trust in people and government, ANES 2020
| Trust in Government | Trust in People: Always | Most of the time | About half the time | Some of the time | Never |
| --- | --- | --- | --- | --- | --- |
| Always | 0\.277 | 0\.008 | 0\.015 | 0\.020 | 0\.015 |
| Most of the time | 0\.185 | 0\.175 | 0\.093 | 0\.113 | 0\.072 |
| About half the time | 0\.198 | 0\.303 | 0\.410 | 0\.235 | 0\.173 |
| Some of the time | 0\.286 | 0\.438 | 0\.399 | 0\.512 | 0\.238 |
| Never | 0\.053 | 0\.077 | 0\.083 | 0\.120 | 0\.503 |
In Table [6\.4](c06-statistical-testing.html#tab:stattest-chi-ex2-prop1-tab), each column sums to 1\. For example, based on the first column, we estimate that among people who always trust other people, 27\.7% also always trust the government, while 5\.3% never trust the government.
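As a quick check that each column sums to 1, we can total the proportions within each level of `TrustPeople`, reusing the observed counts from `chi_ex2`:
```
# Each within-column total should equal 1
chi_ex2$observed %>%
  as_tibble() %>%
  group_by(TrustPeople) %>%
  summarize(total = sum(n / sum(n)))
```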
The second option is to use the `group_by()` and `survey_mean()` functions to calculate the proportions from the ANES design object. Remember that with more than one variable listed in the `group_by()` statement, the proportions are within the first variable listed. As mentioned above, we are looking at the distribution of `TrustGovernment` for each level of `TrustPeople`.
```
chi_ex2_obs <- anes_des %>%
drop_na(TrustPeople, TrustGovernment) %>%
group_by(TrustPeople, TrustGovernment) %>%
summarize(
Observed = round(survey_mean(vartype = "ci"), 3),
.groups = "drop"
)
chi_ex2_obs_table <- chi_ex2_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(TrustGovernment, TrustPeople, prop) %>%
pivot_wider(names_from = TrustPeople, values_from = prop) %>%
gt(rowname_col = "TrustGovernment") %>%
tab_stubhead(label = "Trust in Government") %>%
tab_spanner(
label = "Trust in People",
columns = everything()
) %>%
tab_options(page.orientation = "landscape")
```
```
chi_ex2_obs_table
```
TABLE 6\.5: Proportion of adults in the U.S. by levels of trust in people and government with confidence intervals, ANES 2020
| Trust in Government | Trust in People: Always | Most of the time | About half the time | Some of the time | Never |
| --- | --- | --- | --- | --- | --- |
| Always | 0\.277 (0\.11, 0\.444\) | 0\.008 (0\.004, 0\.012\) | 0\.015 (0\.006, 0\.024\) | 0\.02 (0\.008, 0\.033\) | 0\.015 (0, 0\.029\) |
| Most of the time | 0\.185 (\-0\.009, 0\.38\) | 0\.175 (0\.157, 0\.192\) | 0\.093 (0\.078, 0\.109\) | 0\.113 (0\.085, 0\.141\) | 0\.072 (0\.021, 0\.123\) |
| About half the time | 0\.198 (0\.046, 0\.35\) | 0\.303 (0\.281, 0\.324\) | 0\.41 (0\.378, 0\.441\) | 0\.235 (0\.2, 0\.271\) | 0\.173 (0\.099, 0\.246\) |
| Some of the time | 0\.286 (0\.069, 0\.503\) | 0\.438 (0\.415, 0\.462\) | 0\.399 (0\.365, 0\.433\) | 0\.512 (0\.481, 0\.543\) | 0\.238 (0\.178, 0\.298\) |
| Never | 0\.053 (\-0\.01, 0\.117\) | 0\.077 (0\.064, 0\.089\) | 0\.083 (0\.063, 0\.103\) | 0\.12 (0\.097, 0\.142\) | 0\.503 (0\.422, 0\.583\) |
Both methods produce the same estimated proportions. However, calculating the proportions directly from the design object allows us to obtain the variance information. In this case, the output in Table [6\.5](c06-statistical-testing.html#tab:stattest-chi-ex2-prop2-tab) displays the survey estimate followed by the confidence intervals. Based on the output, we can see that of those who never trust people, 50\.3% also never trust the government, while the proportions of never trusting the government are much lower for each of the other levels of trusting people.
We may find it easier to look at these proportions graphically. We can use `ggplot()` and facets to provide an overview to create Figure [6\.2](c06-statistical-testing.html#fig:stattest-chi-ex2-graph) below:
```
chi_ex2_obs %>%
mutate(
TrustPeople =
fct_reorder(
str_c("Trust in People:\n", TrustPeople),
order(TrustPeople)
)
) %>%
ggplot(
aes(x = TrustGovernment, y = Observed, color = TrustGovernment)
) +
facet_wrap(~TrustPeople, ncol = 5) +
geom_point() +
geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp)) +
ylab("Proportion") +
xlab("") +
theme_bw() +
scale_color_manual(
name = "Trust in Government",
values = book_colors
) +
theme(
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
legend.position = "bottom"
) +
guides(col = guide_legend(nrow = 2))
```
FIGURE 6\.2: Proportion of adults in the U.S. by levels of trust in people and government with confidence intervals, ANES 2020
#### Example 3: Test of homogeneity
Researchers and politicians often look at specific demographics each election cycle to understand how each group is leaning or voting toward candidates. The ANES data are collected post\-election, but we can still see if there are differences in how specific demographic groups voted.
If we want to see if there is a difference in how each age group voted for the 2020 candidates, this would be a test of homogeneity, and we can set up the hypothesis as follows:
\\\[\\begin{align\*}
H\_0: p\_{1\_{Biden}} \&\= p\_{1\_{Trump}} \= p\_{1\_{Other}},\\\\
p\_{2\_{Biden}} \&\= p\_{2\_{Trump}} \= p\_{2\_{Other}},\\\\
p\_{3\_{Biden}} \&\= p\_{3\_{Trump}} \= p\_{3\_{Other}},\\\\
p\_{4\_{Biden}} \&\= p\_{4\_{Trump}} \= p\_{4\_{Other}},\\\\
p\_{5\_{Biden}} \&\= p\_{5\_{Trump}} \= p\_{5\_{Other}},\\\\
p\_{6\_{Biden}} \&\= p\_{6\_{Trump}} \= p\_{6\_{Other}}
\\end{align\*}\\]
where \\(p\_{i\_{Biden}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Joseph Biden, \\(p\_{i\_{Trump}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Donald Trump, and \\(p\_{i\_{Other}}\\) is the observed proportion of each age group (\\(i\\)) that voted for another candidate.
* \\(H\_A:\\) at least one category of \\(p\_{i\_{Biden}}\\) does not match \\(p\_{i\_{Trump}}\\) or \\(p\_{i\_{Other}}\\)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex3 <- anes_des %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
svychisq(
formula = ~ AgeGroup + VotedPres2020_selection,
design = .,
statistic = "Chisq",
na.rm = TRUE
)
chi_ex3
```
```
##
## Pearson's X^2: Rao & Scott adjustment
##
## data: NextMethod()
## X-squared = 171, df = 10, p-value <2e-16
```
The output from `svychisq()` indicates a difference in how each age group voted in the 2020 election. To get a better idea of the different distributions, let’s output proportions to see the relationship. As we learned in Example 2 above, we can use `chi_ex3$observed`, or if we want to get the variance information (which is crucial with survey data), we can use `survey_mean()`. Remember, when we have two variables in `group_by()`, we obtain the proportions within each level of the first variable listed. In this case, we are looking at the distribution of `AgeGroup` for each level of `VotedPres2020_selection`.
```
chi_ex3_obs <- anes_des %>%
filter(VotedPres2020 == "Yes") %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
group_by(VotedPres2020_selection, AgeGroup) %>%
summarize(Observed = round(survey_mean(vartype = "ci"), 3))
chi_ex3_obs_table <- chi_ex3_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(AgeGroup, VotedPres2020_selection, prop) %>%
pivot_wider(
names_from = VotedPres2020_selection,
values_from = prop
) %>%
gt(rowname_col = "AgeGroup") %>%
tab_stubhead(label = "Age Group")
```
```
chi_ex3_obs_table
```
TABLE 6\.6: Distribution of age group by presidential candidate selection with confidence intervals
| Age Group | Biden | Trump | Other |
| --- | --- | --- | --- |
| 18\-29 | 0\.203 (0\.177, 0\.229\) | 0\.113 (0\.095, 0\.132\) | 0\.221 (0\.144, 0\.298\) |
| 30\-39 | 0\.168 (0\.152, 0\.184\) | 0\.146 (0\.125, 0\.168\) | 0\.302 (0\.21, 0\.394\) |
| 40\-49 | 0\.163 (0\.146, 0\.18\) | 0\.157 (0\.137, 0\.177\) | 0\.21 (0\.13, 0\.29\) |
| 50\-59 | 0\.152 (0\.135, 0\.17\) | 0\.229 (0\.202, 0\.256\) | 0\.104 (0\.04, 0\.168\) |
| 60\-69 | 0\.177 (0\.159, 0\.196\) | 0\.193 (0\.173, 0\.213\) | 0\.103 (0\.025, 0\.182\) |
| 70 or older | 0\.136 (0\.123, 0\.149\) | 0\.161 (0\.143, 0\.179\) | 0\.06 (0\.01, 0\.109\) |
In Table [6\.6](c06-statistical-testing.html#tab:stattest-chi-ex3-tab), we can see that the age distribution of those who voted for Biden and other candidates was younger than that of those who voted for Trump. For example, of those who voted for Biden, 20\.3% were in the 18–29 age group, compared to only 11\.3% of those who voted for Trump. Conversely, 22\.9% of those who voted for Trump were in the 50–59 age group, compared to only 15\.2% of those who voted for Biden.
6\.5 Exercises
--------------
The exercises use the design objects `anes_des` and `recs_des` as provided in the Prerequisites box at the [beginning of the chapter](c06-statistical-testing.html#c06-statistical-testing). Here are some exercises for practicing conducting t\-tests using `svyttest()`:
1. Using the RECS data, do more than 50% of U.S. households use A/C (`ACUsed`)?
2. Using the RECS data, does the average temperature at which U.S. households set their thermostats differ between the day and night in the winter (`WinterTempDay` and `WinterTempNight`)?
3. Using the ANES data, does the average age (`Age`) of those who voted for Joseph Biden in 2020 (`VotedPres2020_selection`) differ from those who voted for another candidate?
4. If we wanted to determine if the political party affiliation differed for males and females, what test would we use?
1. Goodness\-of\-fit test (`svygofchisq()`)
2. Test of independence (`svychisq()`)
3. Test of homogeneity (`svychisq()`)
5. In the RECS data, is there a relationship between the type of housing unit (`HousingUnitType`) and the year the house was built (`YearMade`)?
6. In the ANES data, is there a difference in the distribution of gender (`Gender`) across early voting status in 2020 (`EarlyVote2020`)?
If we want an 80% confidence interval for the test statistic, we can use the function `confint()` to change the confidence level. Below, we print the default confidence interval (95%), the confidence interval explicitly specifying the level as 95%, and the 80% confidence interval. When the confidence level is 95% either by default or explicitly, R returns a vector with both row and column names. However, when we specify any other confidence level, an unnamed vector is returned, with the first element being the lower bound and the second element being the upper bound of the confidence interval.
```
confint(ttest_ex1)
```
```
## 2.5 % 97.5 %
## as.numeric(SummerTempNight - 68) 3.288 3.447
## attr(,"conf.level")
## [1] 0.95
```
```
confint(ttest_ex1, level = 0.95)
```
```
## 2.5 % 97.5 %
## as.numeric(SummerTempNight - 68) 3.288 3.447
## attr(,"conf.level")
## [1] 0.95
```
```
confint(ttest_ex1, level = 0.8)
```
```
## [1] 3.316 3.419
## attr(,"conf.level")
## [1] 0.8
```
In this case, neither confidence interval contains 0, and we draw the same conclusion from either that the average temperature households set their thermostat in the summer at night is significantly higher than 68\\(^\\circ\\)F.
#### Example 2: One\-sample t\-test for proportion
RECS asked respondents if they use air conditioning (A/C) in their home[16](#fn16). In our data, we call this variable `ACUsed`. Let’s look at the proportion of U.S. households that use A/C in their homes using the `survey_prop()` function we learned in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis).
```
acprop <- recs_des %>%
group_by(ACUsed) %>%
summarize(p = survey_prop())
acprop
```
```
## # A tibble: 2 × 3
## ACUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.113 0.00306
## 2 TRUE 0.887 0.00306
```
Based on this, 88\.7% of U.S. households use A/C in their homes. If we wanted to know if this differs from 90%, we could set up our hypothesis as follows:
* \\(H\_0: p \= 0\.90\\) where \\(p\\) is the proportion of U.S. households that use A/C in their homes
* \\(H\_A: p \\neq 0\.90\\)
To conduct this in R, we use the `svyttest()` function as follows:
```
ttest_ex2 <- recs_des %>%
svyttest(
formula = (ACUsed == TRUE) - 0.90 ~ 0,
design = .,
na.rm = TRUE
)
ttest_ex2
```
```
##
## Design-based one-sample t-test
##
## data: (ACUsed == TRUE) - 0.9 ~ 0
## t = -4.4, df = 58, p-value = 5e-05
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.019603 -0.007348
## sample estimates:
## mean
## -0.01348
```
The output from the `svyttest()` function can be a bit hard to read. Using the `tidy()` function from the {broom} package, we can clean up the output into a tibble to more easily understand what the test tells us ([Robinson, Hayes, and Couch 2023](#ref-R-broom)).
```
tidy(ttest_ex2)
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 -0.0135 -4.40 0.0000466 58 -0.0196 -0.00735 Design-base…
## # ℹ 1 more variable: alternative <chr>
```
The ‘tidied’ output can also be piped into the {gt} package to create a table ready for publication (see Table [6\.1](c06-statistical-testing.html#tab:stattest-ttest-ex2-gt-tab)). We go over the {gt} package in Chapter [8](c08-communicating-results.html#c08-communicating-results). The function `pretty_p_value()` comes from the {prettyunits} package and converts numeric p\-values to characters and, by default, prints four decimal places and displays any p\-value less than 0\.0001 as `"<0.0001"`, though another minimum display p\-value can be specified ([Csardi 2023](#ref-R-prettyunits)).
```
tidy(ttest_ex2) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.1: One\-sample t\-test output for estimates of U.S. households use A/C in their homes differing from 90%, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| −0\.01 | −4\.40 | \<0\.0001 | 58\.00 | −0\.02 | −0\.01 | Design\-based one\-sample t\-test | two.sided |
The estimate differs from Example 1 in that it does not display \\(p \- 0\.90\\) but rather \\(p\\), or the difference between the U.S. households that use A/C and our comparison proportion. We can see that there is a difference of —1\.35 percentage points. Additionally, the t\-statistic value in the `statistic` column is —4\.4, and the p\-value is \<0\.0001\. These results indicate that fewer than 90% of U.S. households use A/C in their homes.
#### Example 3: Unpaired two\-sample t\-test
In addition to `ACUsed`, another variable in the RECS data is a household’s total electric cost in dollars (`DOLLAREL`).To see if U.S. households with A/C had higher electrical bills than those without, we can set up the hypothesis as follows:
* \\(H\_0: \\mu\_{AC} \= \\mu\_{noAC}\\) where \\(\\mu\_{AC}\\) is the electrical bill cost for U.S. households that used A/C, and \\(\\mu\_{noAC}\\) is the electrical bill cost for U.S. households that did not use A/C
* \\(H\_A: \\mu\_{AC} \\neq \\mu\_{noAC}\\)
Let’s take a quick look at the data to see how they are formatted:
```
recs_des %>%
group_by(ACUsed) %>%
summarize(mean = survey_mean(DOLLAREL, na.rm = TRUE))
```
```
## # A tibble: 2 × 3
## ACUsed mean mean_se
## <lgl> <dbl> <dbl>
## 1 FALSE 1056. 16.0
## 2 TRUE 1422. 5.69
```
To conduct this in R, we use `svyttest()`:
```
ttest_ex3 <- recs_des %>%
svyttest(
formula = DOLLAREL ~ ACUsed,
design = .,
na.rm = TRUE
)
```
```
tidy(ttest_ex3) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.2: Unpaired two\-sample t\-test output for estimates of U.S. households electrical bills by A/C use, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 365\.72 | 21\.29 | \<0\.0001 | 58\.00 | 331\.33 | 400\.11 | Design\-based t\-test | two.sided |
The results in Table [6\.2](c06-statistical-testing.html#tab:stattest-ttest-ex3-gt-tab) indicate that the difference in electrical bills for those who used A/C and those who did not is, on average, $365\.72\. The difference appears to be statistically significant as the t\-statistic is 21\.3 and the p\-value is \<0\.0001\. Households that used A/C spent, on average, $365\.72 more in 2020 on electricity than households without A/C.
#### Example 4: Paired two\-sample t\-test
Let’s say we want to test whether the temperature at which U.S. households set their thermostat at night differs depending on the season (comparing summer and winter[17](#fn17) temperatures). We could set up the hypothesis as follows:
* \\(H\_0: \\mu\_{summer} \= \\mu\_{winter}\\) where \\(\\mu\_{summer}\\) is the temperature that U.S. households set their thermostat to during summer nights, and \\(\\mu\_{winter}\\) is the temperature that U.S. households set their thermostat to during winter nights
* \\(H\_A: \\mu\_{summer} \\neq \\mu\_{winter}\\)
To conduct this in R, we use `svyttest()` by calculating the temperature difference on the left\-hand side as follows:
```
ttest_ex4 <- recs_des %>%
svyttest(
design = .,
formula = SummerTempNight - WinterTempNight ~ 0,
na.rm = TRUE
)
```
```
tidy(ttest_ex4) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.3: Paired two\-sample t\-test output for estimates of U.S. households thermostat temperature by season, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2\.85 | 50\.83 | \<0\.0001 | 58\.00 | 2\.74 | 2\.96 | Design\-based one\-sample t\-test | two.sided |
The results displayed in Table [6\.3](c06-statistical-testing.html#tab:stattest-ttest-ex4-gt-tab) indicate that U.S. households set their thermostat on average 2\.9\\(^\\circ\\)F warmer in summer nights than winter nights, which is statistically significant (t \= 50\.8, p\-value is \<0\.0001\).
### 6\.3\.1 Syntax
When we do not have survey data, we can use the `t.test()` function from the {stats} package to run t\-tests. This function does not allow for weights or the variance structure that need to be accounted for with survey data. Therefore, we need to use the `svyttest()` function from {survey} when using survey data. Many of the arguments are the same between the two functions, but there are a few key differences:
* We need to use the survey design object instead of the original data frame
* We can only use a formula and not separate x and y data
* The confidence level cannot be specified and is always set to 95%. However, we show examples of how the confidence level can be changed after running the `svyttest()` function by using the `confint()` function.
Here is the syntax for the `svyttest()` function:
```
svyttest(formula,
design,
...)
```
The arguments are:
* `formula`: Formula, `outcome~group` for two\-sample, `outcome~0` or `outcome~1` for one\-sample. The group variable must be a factor or character with two levels, or be coded 0/1 or 1/2\. We give more details on formula set\-up below for different types of tests.
* `design`: survey design object
* `...`: This passes options on for one\-sided tests only, and thus, we can specify `na.rm=TRUE`
Notice that the first argument here is the `formula` and not the `design`. This means we must use the dot `(.)` if we pipe in the survey design object (as described in Section [6\.2](c06-statistical-testing.html#dot-notation)).
The `formula` argument can take several different forms depending on what we are measuring. Here are a few common scenarios:
1. One\-sample t\-test:
1. Comparison to 0: `var ~ 0`, where `var` is the measure of interest, and we compare it to the value `0`. For example, we could test if the population mean of household debt is different from `0` given the sample data collected.
2. Comparison to a different value: `var - value ~ 0`, where `var` is the measure of interest and `value` is what we are comparing to. For example, we could test if the proportion of the population that has blue eyes is different from `25%` by using `var - 0.25 ~ 0`. Note that specifying the formula as `var ~ 0.25` is not equivalent and results in a syntax error.
2. Two\-sample t\-test:
1. Unpaired:
* 2 level grouping variable: `var ~ groupVar`, where `var` is the measure of interest and `groupVar` is a variable with two categories. For example, we could test if the average age of the population who voted for president in 2020 differed from the age of people who did not vote. In this case, age would be used for `var`, and a binary variable indicating voting activity would be the `groupVar`.
* 3\+ level grouping variable: `var ~ groupVar == level`, where `var` is the measure of interest, `groupVar` is the categorical variable, and `level` is the category level to isolate. For example, we could test if the test scores in one classroom differed from all other classrooms where `groupVar` would be the variable holding the values for classroom IDs and `level` is the classroom ID we want to compare to the others.
2. Paired: `var_1 - var_2 ~ 0`, where `var_1` is the first variable of interest and `var_2` is the second variable of interest. For example, we could test if test scores on a subject differed between the start and the end of a course, so `var_1` would be the test score at the beginning of the course, and `var_2` would be the score at the end of the course.
The `na.rm` argument defaults to `FALSE`, which means if any data values are missing, the t\-test does not compute. Throughout this chapter, we always set `na.rm = TRUE`, but before analyzing the survey data, review the notes provided in Chapter [11](c11-missing-data.html#c11-missing-data) to better understand how to handle missing data.
Let’s walk through a few examples using the RECS data.
### 6\.3\.2 Examples
#### Example 1: One\-sample t\-test for mean
RECS asks respondents to indicate what temperature they set their house to during the summer at night[14](#fn14). In our data, we have called this variable `SummerTempNight`. If we want to see if the average U.S. household sets its temperature at a value different from 68\\(^\\circ\\)F[15](#fn15), we could set up the hypothesis as follows:
* \\(H\_0: \\mu \= 68\\) where \\(\\mu\\) is the average temperature U.S. households set their thermostat to in the summer at night
* \\(H\_A: \\mu \\neq 68\\)
To conduct this in R, we use `svyttest()` and subtract the temperature on the left\-hand side of the formula:
```
ttest_ex1 <- recs_des %>%
svyttest(
formula = SummerTempNight - 68 ~ 0,
design = .,
na.rm = TRUE
)
ttest_ex1
```
```
##
## Design-based one-sample t-test
##
## data: SummerTempNight - 68 ~ 0
## t = 85, df = 58, p-value <2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 3.288 3.447
## sample estimates:
## mean
## 3.367
```
To pull out specific output, we can use R’s built\-in `$` operator. For instance, to obtain the estimate \\(\\mu \- 68\\), we run `ttest_ex1$estimate`.
If we want the average, we take our t\-test estimate and add it to 68:
```
ttest_ex1$estimate + 68
```
```
## mean
## 71.37
```
Or, we can use the `survey_mean()` function described in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis):
```
recs_des %>%
summarize(mu = survey_mean(SummerTempNight, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## mu mu_se
## <dbl> <dbl>
## 1 71.4 0.0397
```
The result is the same with both methods: the average temperature U.S. households set their thermostat to in the summer at night is 71\.4\\(^\\circ\\)F. Looking at the output from `svyttest()`, the t\-statistic is 84\.8, and the p\-value is \<0\.0001, indicating that the average is statistically different from 68\\(^\\circ\\)F at an \\(\\alpha\\) level of \\(0\.05\\).
If we want an 80% confidence interval for the test statistic, we can use the function `confint()` to change the confidence level. Below, we print the default confidence interval (95%), the confidence interval explicitly specifying the level as 95%, and the 80% confidence interval. When the confidence level is 95% either by default or explicitly, R returns a vector with both row and column names. However, when we specify any other confidence level, an unnamed vector is returned, with the first element being the lower bound and the second element being the upper bound of the confidence interval.
```
confint(ttest_ex1)
```
```
## 2.5 % 97.5 %
## as.numeric(SummerTempNight - 68) 3.288 3.447
## attr(,"conf.level")
## [1] 0.95
```
```
confint(ttest_ex1, level = 0.95)
```
```
## 2.5 % 97.5 %
## as.numeric(SummerTempNight - 68) 3.288 3.447
## attr(,"conf.level")
## [1] 0.95
```
```
confint(ttest_ex1, level = 0.8)
```
```
## [1] 3.316 3.419
## attr(,"conf.level")
## [1] 0.8
```
In this case, neither confidence interval contains 0, so we draw the same conclusion from either one: the average temperature households set their thermostat to in the summer at night is significantly higher than 68\\(^\\circ\\)F.
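If we prefer the confidence interval on the original temperature scale rather than as a difference from 68\\(^\\circ\\)F, a minimal sketch is to shift the interval by the comparison value, just as we did with the estimate above:
```
# Shift the interval for (mean - 68) back to the temperature scale
confint(ttest_ex1) + 68
```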
#### Example 2: One\-sample t\-test for proportion
RECS asked respondents if they use air conditioning (A/C) in their home[16](#fn16). In our data, we call this variable `ACUsed`. Let’s look at the proportion of U.S. households that use A/C in their homes using the `survey_prop()` function we learned in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis).
```
acprop <- recs_des %>%
group_by(ACUsed) %>%
summarize(p = survey_prop())
acprop
```
```
## # A tibble: 2 × 3
## ACUsed p p_se
## <lgl> <dbl> <dbl>
## 1 FALSE 0.113 0.00306
## 2 TRUE 0.887 0.00306
```
Based on this, 88\.7% of U.S. households use A/C in their homes. If we wanted to know if this differs from 90%, we could set up our hypothesis as follows:
* \\(H\_0: p \= 0\.90\\) where \\(p\\) is the proportion of U.S. households that use A/C in their homes
* \\(H\_A: p \\neq 0\.90\\)
To conduct this in R, we use the `svyttest()` function as follows:
```
ttest_ex2 <- recs_des %>%
svyttest(
formula = (ACUsed == TRUE) - 0.90 ~ 0,
design = .,
na.rm = TRUE
)
ttest_ex2
```
```
##
## Design-based one-sample t-test
##
## data: (ACUsed == TRUE) - 0.9 ~ 0
## t = -4.4, df = 58, p-value = 5e-05
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -0.019603 -0.007348
## sample estimates:
## mean
## -0.01348
```
The output from the `svyttest()` function can be a bit hard to read. Using the `tidy()` function from the {broom} package, we can clean up the output into a tibble to more easily understand what the test tells us ([Robinson, Hayes, and Couch 2023](#ref-R-broom)).
```
tidy(ttest_ex2)
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 -0.0135 -4.40 0.0000466 58 -0.0196 -0.00735 Design-base…
## # ℹ 1 more variable: alternative <chr>
```
The ‘tidied’ output can also be piped into the {gt} package to create a table ready for publication (see Table [6\.1](c06-statistical-testing.html#tab:stattest-ttest-ex2-gt-tab)). We go over the {gt} package in Chapter [8](c08-communicating-results.html#c08-communicating-results). The function `pretty_p_value()` comes from the {prettyunits} package and converts numeric p\-values to characters; by default, it prints four decimal places and displays any p\-value less than 0\.0001 as `"<0.0001"`, though another minimum display p\-value can be specified ([Csardi 2023](#ref-R-prettyunits)).
```
tidy(ttest_ex2) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.1: One\-sample t\-test output for estimates of U.S. households’ use of A/C in their homes differing from 90%, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| −0\.01 | −4\.40 | \<0\.0001 | 58\.00 | −0\.02 | −0\.01 | Design\-based one\-sample t\-test | two.sided |
As in Example 1, the estimate displayed is not \\(p\\) itself but rather \\(p \- 0\.90\\), the difference between the proportion of U.S. households that use A/C and our comparison proportion. We can see that there is a difference of −1\.35 percentage points. Additionally, the t\-statistic value in the `statistic` column is −4\.4, and the p\-value is \<0\.0001\. These results indicate that fewer than 90% of U.S. households use A/C in their homes.
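If we want the estimated proportion itself, we can add the comparison value back to the estimate, mirroring the approach from Example 1; a minimal sketch using the stored test object is below:
```
# Add the comparison proportion back to the estimated difference
ttest_ex2$estimate + 0.90
```
Up to rounding, this matches the 88\.7% estimated with `survey_prop()` above.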
#### Example 3: Unpaired two\-sample t\-test
In addition to `ACUsed`, another variable in the RECS data is a household’s total electric cost in dollars (`DOLLAREL`). To see if U.S. households with A/C had higher electrical bills than those without, we can set up the hypothesis as follows:
* \\(H\_0: \\mu\_{AC} \= \\mu\_{noAC}\\) where \\(\\mu\_{AC}\\) is the electrical bill cost for U.S. households that used A/C, and \\(\\mu\_{noAC}\\) is the electrical bill cost for U.S. households that did not use A/C
* \\(H\_A: \\mu\_{AC} \\neq \\mu\_{noAC}\\)
Let’s take a quick look at the data to see how they are formatted:
```
recs_des %>%
group_by(ACUsed) %>%
summarize(mean = survey_mean(DOLLAREL, na.rm = TRUE))
```
```
## # A tibble: 2 × 3
## ACUsed mean mean_se
## <lgl> <dbl> <dbl>
## 1 FALSE 1056. 16.0
## 2 TRUE 1422. 5.69
```
To conduct this in R, we use `svyttest()`:
```
ttest_ex3 <- recs_des %>%
svyttest(
formula = DOLLAREL ~ ACUsed,
design = .,
na.rm = TRUE
)
```
```
tidy(ttest_ex3) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.2: Unpaired two\-sample t\-test output for estimates of U.S. households’ electrical bills by A/C use, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 365\.72 | 21\.29 | \<0\.0001 | 58\.00 | 331\.33 | 400\.11 | Design\-based t\-test | two.sided |
The results in Table [6\.2](c06-statistical-testing.html#tab:stattest-ttest-ex3-gt-tab) indicate that the difference in electrical bills for those who used A/C and those who did not is, on average, $365\.72\. The difference appears to be statistically significant as the t\-statistic is 21\.3 and the p\-value is \<0\.0001\. Households that used A/C spent, on average, $365\.72 more in 2020 on electricity than households without A/C.
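As a quick check (a sketch rather than part of the original analysis), this estimate should match the difference between the two weighted group means we computed above:
```
# Difference between the weighted mean bills for A/C users and non-users
recs_des %>%
  group_by(ACUsed) %>%
  summarize(mean = survey_mean(DOLLAREL, na.rm = TRUE)) %>%
  summarize(diff = diff(mean))
```
The difference is approximately $366, consistent with the t\-test estimate of $365\.72\.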
#### Example 4: Paired two\-sample t\-test
Let’s say we want to test whether the temperature at which U.S. households set their thermostat at night differs depending on the season (comparing summer and winter[17](#fn17) temperatures). We could set up the hypothesis as follows:
* \\(H\_0: \\mu\_{summer} \= \\mu\_{winter}\\) where \\(\\mu\_{summer}\\) is the temperature that U.S. households set their thermostat to during summer nights, and \\(\\mu\_{winter}\\) is the temperature that U.S. households set their thermostat to during winter nights
* \\(H\_A: \\mu\_{summer} \\neq \\mu\_{winter}\\)
To conduct this in R, we use `svyttest()` by calculating the temperature difference on the left\-hand side as follows:
```
ttest_ex4 <- recs_des %>%
svyttest(
design = .,
formula = SummerTempNight - WinterTempNight ~ 0,
na.rm = TRUE
)
```
```
tidy(ttest_ex4) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 6\.3: Paired two\-sample t\-test output for estimates of U.S. households’ thermostat temperature by season, RECS 2020
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2\.85 | 50\.83 | \<0\.0001 | 58\.00 | 2\.74 | 2\.96 | Design\-based one\-sample t\-test | two.sided |
The results displayed in Table [6\.3](c06-statistical-testing.html#tab:stattest-ttest-ex4-gt-tab) indicate that U.S. households set their thermostat on average 2\.9\\(^\\circ\\)F warmer in summer nights than winter nights, which is statistically significant (t \= 50\.8, p\-value is \<0\.0001\).
6\.4 Chi\-squared tests
-----------------------
Chi\-squared tests (\\(\\chi^2\\)) allow us to examine multiple proportions using a goodness\-of\-fit test, a test of independence, or a test of homogeneity. These three tests have the same \\(\\chi^2\\) distributions but with slightly different underlying assumptions.
First, goodness\-of\-fit tests are used when comparing observed data to expected data. For example, this could be used to determine if respondent demographics (the observed data in the sample) match known population information (the expected data). In this case, we can set up the hypothesis test as follows:
* \\(H\_0: p\_1 \= \\pi\_1, \~ p\_2 \= \\pi\_2, \~ ..., \~ p\_k \= \\pi\_k\\) where \\(p\_i\\) is the observed proportion for category \\(i\\), \\(\\pi\_i\\) is the expected proportion for category \\(i\\), and \\(k\\) is the number of categories
* \\(H\_A:\\) at least one level of \\(p\_i\\) does not match \\(\\pi\_i\\)
Second, tests of independence are used when comparing two types of observed data to see if there is a relationship. For example, this could be used to determine if the proportion of respondents who voted for each political party in the presidential election matches the proportion of respondents who voted for each political party in a local election. In this case, we can set up the hypothesis test as follows:
* \\(H\_0:\\) The two variables/factors are independent
* \\(H\_A:\\) The two variables/factors are not independent
Third, tests of homogeneity are used to compare two distributions to see if they match. For example, this could be used to determine if the highest education achieved is the same for both men and women. In this case, we can set up the hypothesis test as follows:
* \\(H\_0: p\_{1a} \= p\_{1b}, \~ p\_{2a} \= p\_{2b}, \~ ..., \~ p\_{ka} \= p\_{kb}\\) where \\(p\_{ia}\\) is the observed proportion of category \\(i\\) for subgroup \\(a\\), \\(p\_{ib}\\) is the observed proportion of category \\(i\\) for subgroup \\(b\\), and \\(k\\) is the number of categories
* \\(H\_A:\\) at least one category of \\(p\_{ia}\\) does not match \\(p\_{ib}\\)
As with t\-tests, the difference between using \\(\\chi^2\\) tests with non\-survey data and survey data is based on the underlying variance estimation. The functions in the {survey} package account for these nuances, provided the design object is correctly defined. For basic variance estimation formulas for different survey design types, refer to Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
### 6\.4\.1 Syntax
When we do not have survey data, we may be able to use the `chisq.test()` function from the {stats} package in base R to run chi\-squared tests ([R Core Team 2024](#ref-R-base)). However, this function does not allow for weights or the variance structure to be accounted for with survey data. Therefore, when using survey data, we need to use one of two functions:
* `svygofchisq()`: For goodness\-of\-fit tests
* `svychisq()`: For tests of independence and homogeneity
The non\-survey data function of `chisq.test()` requires either a single set of counts and given proportions (for goodness\-of\-fit tests) or two sets of counts for tests of independence and homogeneity. The functions we use with survey data require respondent\-level data and formulas instead of counts. This ensures that the variances are correctly calculated.
First, the function for the goodness\-of\-fit tests is `svygofchisq()`:
```
svygofchisq(formula,
p,
design,
na.rm = TRUE,
...)
```
The arguments are:
* `formula`: Formula specifying a single factor variable
* `p`: Vector of probabilities for the categories of the factor in the correct order. If the probabilities do not sum to 1, they are rescaled to sum to 1\.
* `design`: Survey design object
* …: Other arguments to pass on, such as `na.rm`
Based on the order of the arguments, we again must use the dot `(.)` notation if we pipe in the survey design object, or else explicitly name the arguments as described in Section [6\.2](c06-statistical-testing.html#dot-notation). For the goodness\-of\-fit tests, the formula is a single variable, `formula = ~var`, as we compare the observed data from this variable to the expected data. The expected probabilities are then entered in the `p` argument and need to be a vector of the same length as the number of categories in the variable. For example, if we want to know if the proportion of males and females matches a distribution of 30/70, then the sex variable (with two categories) would be specified as `formula = ~SEX`, and the proportions would be included as `p = c(.3, .7)`. It is important to note that the variable entered into the formula should be formatted as either a factor or a character. The examples below provide more detail and tips on how to make sure the levels match up correctly.
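To illustrate, a minimal sketch of that 30/70 comparison might look like the following, where `des` is a placeholder design object and `SEX` is assumed to be a factor with two levels in the matching order:
```
# Goodness-of-fit test comparing the observed SEX distribution to 30/70
des %>%
  svygofchisq(
    formula = ~SEX,
    p = c(0.3, 0.7),
    design = .,
    na.rm = TRUE
  )
```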
For tests of homogeneity and independence, the `svychisq()` function should be used. The syntax is as follows:
```
svychisq(
formula,
design,
statistic = c("F", "Chisq", "Wald", "adjWald",
"lincom", "saddlepoint"),
na.rm = TRUE
)
```
The arguments are:
* `formula`: Model formula specifying the table (shown in examples)
* `design`: Survey design object
* `statistic`: Type of test statistic to use in test (details below)
* `na.rm`: Remove missing values
There are six test statistics that this function accepts. For tests of homogeneity (when comparing cross\-tabulations), the `F` or `Chisq` statistics should be used[18](#fn18). The `F` statistic is the default and uses the Rao\-Scott second\-order correction. This correction is designed to assist with complicated sampling designs (i.e., those other than a simple random sample) ([Scott 2007](#ref-Scott2007)). The `Chisq` statistic is an adjusted version of the Pearson \\(\\chi^2\\) statistic. The version of this statistic in the `svychisq()` function compares the design effect estimate from the provided survey data to what the \\(\\chi^2\\) distribution would have been if the data came from a simple random sample.
For tests of independence, the `Wald` and `adjWald` are recommended as they provide a better adjustment for variable comparisons ([Lumley 2010](#ref-lumley2010complex)). If the data have a small number of primary sampling units (PSUs) compared to the degrees of freedom, then the `adjWald` statistic should be used to account for this. The `lincom` and `saddlepoint` statistics are available for more complicated data structures.
The formula argument is always one\-sided, unlike the `svyttest()` function. The two variables of interest should be included with a plus sign: `formula = ~ var_1 + var_2`. As with the `svygofchisq()` function, the variables entered into the formula should be formatted as either a factor or a character.
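For instance, a sketch of a test of independence between two hypothetical categorical variables `var_1` and `var_2`, using the adjusted Wald statistic and a placeholder design object `des`, could look like this:
```
# Test of independence between two categorical variables
des %>%
  svychisq(
    formula = ~ var_1 + var_2,
    design = .,
    statistic = "adjWald",
    na.rm = TRUE
  )
```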
Additionally, as with the t\-test function, both `svygofchisq()` and `svychisq()` have the `na.rm` argument. If any data values are missing, the \\(\\chi^2\\) tests assume that `NA` is a category and include it in the calculation. Throughout this chapter, we always set `na.rm = TRUE`, but before analyzing the survey data, review the notes provided in Chapter [11](c11-missing-data.html#c11-missing-data) to better understand how to handle missing data.
### 6\.4\.2 Examples
Let’s walk through a few examples using the ANES data.
#### Example 1: Goodness\-of\-fit test
ANES asked respondents about their highest education level[19](#fn19). Based on the data from the 2020 American Community Survey (ACS) 5\-year estimates[20](#fn20), the education distribution of those aged 18\+ in the United States (among the 50 states and the District of Columbia) is as follows:
* 11% had less than a high school degree
* 27% had a high school degree
* 29% had some college or an associate’s degree
* 33% had a bachelor’s degree or higher
If we want to see if the weighted distribution from the ANES 2020 data matches this distribution, we could set up the hypothesis as follows:
* \\(H\_0: p\_1 \= 0\.11, \~ p\_2 \= 0\.27, \~ p\_3 \= 0\.29, \~ p\_4 \= 0\.33\\)
* \\(H\_A:\\) at least one of the education levels does not match between the ANES and the ACS
To conduct this in R, let’s first look at the education variable (`Education`) we have on the ANES data. Using the `survey_mean()` function discussed in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis), we can see the education levels and estimated proportions.
```
anes_des %>%
drop_na(Education) %>%
group_by(Education) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 5 × 3
## Education p p_se
## <fct> <dbl> <dbl>
## 1 Less than HS 0.0805 0.00568
## 2 High school 0.277 0.0102
## 3 Post HS 0.290 0.00713
## 4 Bachelor's 0.226 0.00633
## 5 Graduate 0.126 0.00499
```
Based on this output, we can see that the ANES data have different levels from the ACS data. Specifically, the education data from ANES include two levels for bachelor’s degree or higher (bachelor’s and graduate), so these two categories need to be collapsed into a single category to match the ACS data. For this, among other methods, we can use the {forcats} package from the tidyverse ([Wickham 2023](#ref-R-forcats)). The package’s `fct_collapse()` function helps us create a new variable by collapsing categories into a single one. Then, we use the `svygofchisq()` function to compare the ANES data to the ACS data, where we specify the updated design object, the formula using the collapsed education variable, the ACS estimates for the education levels as `p`, and `na.rm = TRUE` to remove missing values.
```
anes_des_educ <- anes_des %>%
mutate(
Education2 =
fct_collapse(Education,
"Bachelor or Higher" = c(
"Bachelor's",
"Graduate"
)
)
)
anes_des_educ %>%
drop_na(Education2) %>%
group_by(Education2) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 4 × 3
## Education2 p p_se
## <fct> <dbl> <dbl>
## 1 Less than HS 0.0805 0.00568
## 2 High school 0.277 0.0102
## 3 Post HS 0.290 0.00713
## 4 Bachelor or Higher 0.352 0.00732
```
```
chi_ex1 <- anes_des_educ %>%
svygofchisq(
formula = ~Education2,
p = c(0.11, 0.27, 0.29, 0.33),
design = .,
na.rm = TRUE
)
chi_ex1
```
```
##
## Design-based chi-squared test for given probabilities
##
## data: ~Education2
## X-squared = 2172220, scale = 1.1e+05, df = 2.3e+00, p-value =
## 9e-05
```
The output from the `svygofchisq()` indicates that at least one proportion from ANES does not match the ACS data (\\(\\chi^2 \=\\) 2,172,220; p\-value is \<0\.0001\). To get a better idea of the differences, we can use the `expected` output along with `survey_mean()` to create a comparison table:
```
ex1_table <- anes_des_educ %>%
drop_na(Education2) %>%
group_by(Education2) %>%
summarize(Observed = survey_mean(vartype = "ci")) %>%
rename(Education = Education2) %>%
mutate(Expected = c(0.11, 0.27, 0.29, 0.33)) %>%
select(Education, Expected, everything())
ex1_table
```
```
## # A tibble: 4 × 5
## Education Expected Observed Observed_low Observed_upp
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Less than HS 0.11 0.0805 0.0691 0.0919
## 2 High school 0.27 0.277 0.257 0.298
## 3 Post HS 0.29 0.290 0.276 0.305
## 4 Bachelor or Higher 0.33 0.352 0.337 0.367
```
This output includes our expected proportions from the ACS that we provided the `svygofchisq()` function along with the output of the observed proportions and their confidence intervals. This table shows that the “high school” and “post HS” categories have nearly identical proportions, but that the other two categories are slightly different. Looking at the confidence intervals, we can see that the ANES data skew to include fewer people in the “less than HS” category and more people in the “bachelor or higher” category. This may be easier to see if we plot this. The code below uses the tabular output to create Figure [6\.1](c06-statistical-testing.html#fig:stattest-chi-ex1-graph).
```
ex1_table %>%
pivot_longer(
cols = c("Expected", "Observed"),
names_to = "Names",
values_to = "Proportion"
) %>%
mutate(
Observed_low = if_else(Names == "Observed", Observed_low, NA_real_),
Observed_upp = if_else(Names == "Observed", Observed_upp, NA_real_),
Names = if_else(Names == "Observed",
"ANES (observed)", "ACS (expected)"
)
) %>%
ggplot(aes(x = Education, y = Proportion, color = Names)) +
geom_point(alpha = 0.75, size = 2) +
geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp),
width = 0.25
) +
theme_bw() +
scale_color_manual(name = "Type", values = book_colors[c(4, 1)]) +
theme(legend.position = "bottom", legend.title = element_blank())
```
FIGURE 6\.1: Expected and observed proportions of education with confidence intervals
#### Example 2: Test of independence
ANES asked respondents two questions about trust:
* Question text: “How often can you trust the federal government to do what is right?” ([American National Election Studies 2021](#ref-anes-svy))
* Question text: “How often can you trust other people?” ([American National Election Studies 2021](#ref-anes-svy))
If we want to see if the distributions of these two questions are similar or not, we can conduct a test of independence. Here is how the hypothesis could be set up:
* \\(H\_0:\\) People’s trust in the federal government and their trust in other people are independent (i.e., not related)
* \\(H\_A:\\) People’s trust in the federal government and their trust in other people are not independent (i.e., they are related)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex2 <- anes_des %>%
svychisq(
formula = ~ TrustGovernment + TrustPeople,
design = .,
statistic = "Wald",
na.rm = TRUE
)
chi_ex2
```
```
##
## Design-based Wald test of association
##
## data: NextMethod()
## F = 21, ndf = 16, ddf = 51, p-value <2e-16
```
The output from `svychisq()` indicates that the distribution of people’s trust in the federal government and their trust in other people are not independent, meaning that they are related. Let’s output the distributions in a table to see the relationship. The `observed` output from the test provides a cross\-tabulation of the counts for each category:
```
chi_ex2$observed
```
```
## TrustPeople
## TrustGovernment Always Most of the time About half the time
## Always 16.470 25.009 31.848
## Most of the time 11.020 539.377 196.258
## About half the time 11.772 934.858 861.971
## Some of the time 17.007 1353.779 839.863
## Never 3.174 236.785 174.272
## TrustPeople
## TrustGovernment Some of the time Never
## Always 36.854 5.523
## Most of the time 206.556 27.184
## About half the time 428.871 65.024
## Some of the time 932.628 89.596
## Never 217.994 189.307
```
However, we often want to know about the proportions, not just the respondent counts from the survey. There are a couple of different ways that we can do this. The first is using the counts from `chi_ex2$observed` to calculate the proportion. We can then pivot the table to create a cross\-tabulation similar to the counts table above. Adding `group_by()` to the code means that we obtain the proportions within each variable level. In this case, we are looking at the distribution of `TrustGovernment` for each level of `TrustPeople`. The resulting table is shown in Table [6\.4](c06-statistical-testing.html#tab:stattest-chi-ex2-prop1-tab).
```
chi_ex2_table <- chi_ex2$observed %>%
as_tibble() %>%
group_by(TrustPeople) %>%
mutate(prop = round(n / sum(n), 3)) %>%
select(-n) %>%
pivot_wider(names_from = TrustPeople, values_from = prop) %>%
gt(rowname_col = "TrustGovernment") %>%
tab_stubhead(label = "Trust in Government") %>%
tab_spanner(
label = "Trust in People",
columns = everything()
) %>%
cols_label(
`Most of the time` = md("Most of<br />the time"),
`About half the time` = md("About half<br />the time"),
`Some of the time` = md("Some of<br />the time")
)
```
```
chi_ex2_table
```
TABLE 6\.4: Proportion of adults in the U.S. by levels of trust in people and government, ANES 2020
| Trust in Government | Trust in People | | | | |
| --- | --- | --- | --- | --- | --- |
|  | Always | Most of the time | About half the time | Some of the time | Never |
| Always | 0\.277 | 0\.008 | 0\.015 | 0\.020 | 0\.015 |
| Most of the time | 0\.185 | 0\.175 | 0\.093 | 0\.113 | 0\.072 |
| About half the time | 0\.198 | 0\.303 | 0\.410 | 0\.235 | 0\.173 |
| Some of the time | 0\.286 | 0\.438 | 0\.399 | 0\.512 | 0\.238 |
| Never | 0\.053 | 0\.077 | 0\.083 | 0\.120 | 0\.503 |
In Table [6\.4](c06-statistical-testing.html#tab:stattest-chi-ex2-prop1-tab), each column sums to 1\. For example, we can say that it is estimated that of people who always trust in people, 27\.7% also always trust in the government based on the top\-left cell, but 5\.3% never trust in the government.
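To confirm that each column sums to 1, a small sketch recomputes the proportions from the observed counts and sums them within each level of `TrustPeople`:
```
# Each TrustPeople column's proportions should sum to 1
chi_ex2$observed %>%
  as_tibble() %>%
  group_by(TrustPeople) %>%
  summarize(prop_sum = sum(n / sum(n)))
```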
The second option is to use the `group_by()` and `survey_mean()` functions to calculate the proportions from the ANES design object. Remember that with more than one variable listed in the `group_by()` statement, the proportions are within the first variable listed. As mentioned above, we are looking at the distribution of `TrustGovernment` for each level of `TrustPeople`.
```
chi_ex2_obs <- anes_des %>%
drop_na(TrustPeople, TrustGovernment) %>%
group_by(TrustPeople, TrustGovernment) %>%
summarize(
Observed = round(survey_mean(vartype = "ci"), 3),
.groups = "drop"
)
chi_ex2_obs_table <- chi_ex2_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(TrustGovernment, TrustPeople, prop) %>%
pivot_wider(names_from = TrustPeople, values_from = prop) %>%
gt(rowname_col = "TrustGovernment") %>%
tab_stubhead(label = "Trust in Government") %>%
tab_spanner(
label = "Trust in People",
columns = everything()
) %>%
tab_options(page.orientation = "landscape")
```
```
chi_ex2_obs_table
```
TABLE 6\.5: Proportion of adults in the U.S. by levels of trust in people and government with confidence intervals, ANES 2020
| Trust in Government | Trust in People | | | | |
| --- | --- | --- | --- | --- | --- |
|  | Always | Most of the time | About half the time | Some of the time | Never |
| Always | 0\.277 (0\.11, 0\.444\) | 0\.008 (0\.004, 0\.012\) | 0\.015 (0\.006, 0\.024\) | 0\.02 (0\.008, 0\.033\) | 0\.015 (0, 0\.029\) |
| Most of the time | 0\.185 (\-0\.009, 0\.38\) | 0\.175 (0\.157, 0\.192\) | 0\.093 (0\.078, 0\.109\) | 0\.113 (0\.085, 0\.141\) | 0\.072 (0\.021, 0\.123\) |
| About half the time | 0\.198 (0\.046, 0\.35\) | 0\.303 (0\.281, 0\.324\) | 0\.41 (0\.378, 0\.441\) | 0\.235 (0\.2, 0\.271\) | 0\.173 (0\.099, 0\.246\) |
| Some of the time | 0\.286 (0\.069, 0\.503\) | 0\.438 (0\.415, 0\.462\) | 0\.399 (0\.365, 0\.433\) | 0\.512 (0\.481, 0\.543\) | 0\.238 (0\.178, 0\.298\) |
| Never | 0\.053 (\-0\.01, 0\.117\) | 0\.077 (0\.064, 0\.089\) | 0\.083 (0\.063, 0\.103\) | 0\.12 (0\.097, 0\.142\) | 0\.503 (0\.422, 0\.583\) |
Both methods produce the same estimated proportions. However, calculating the proportions directly from the design object allows us to obtain the variance information. In this case, the output in Table [6\.5](c06-statistical-testing.html#tab:stattest-chi-ex2-prop2-tab) displays the survey estimate followed by the confidence intervals. Based on the output, we can see that of those who never trust people, 50\.3% also never trust the government, while the proportions of never trusting the government are much lower for each of the other levels of trusting people.
We may find it easier to look at these proportions graphically. We can use `ggplot()` with facets to create Figure [6\.2](c06-statistical-testing.html#fig:stattest-chi-ex2-graph) below:
```
chi_ex2_obs %>%
mutate(
TrustPeople =
fct_reorder(
str_c("Trust in People:\n", TrustPeople),
order(TrustPeople)
)
) %>%
ggplot(
aes(x = TrustGovernment, y = Observed, color = TrustGovernment)
) +
facet_wrap(~TrustPeople, ncol = 5) +
geom_point() +
geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp)) +
ylab("Proportion") +
xlab("") +
theme_bw() +
scale_color_manual(
name = "Trust in Government",
values = book_colors
) +
theme(
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
legend.position = "bottom"
) +
guides(col = guide_legend(nrow = 2))
```
FIGURE 6\.2: Proportion of adults in the U.S. by levels of trust in people and government with confidence intervals, ANES 2020
#### Example 3: Test of homogeneity
Researchers and politicians often look at specific demographics each election cycle to understand how each group is leaning or voting toward candidates. The ANES data are collected post\-election, but we can still see if there are differences in how specific demographic groups voted.
If we want to see if there is a difference in how each age group voted for the 2020 candidates, this would be a test of homogeneity, and we can set up the hypothesis as follows:
\\\[\\begin{align\*}
H\_0: p\_{1\_{Biden}} \&\= p\_{1\_{Trump}} \= p\_{1\_{Other}},\\\\
p\_{2\_{Biden}} \&\= p\_{2\_{Trump}} \= p\_{2\_{Other}},\\\\
p\_{3\_{Biden}} \&\= p\_{3\_{Trump}} \= p\_{3\_{Other}},\\\\
p\_{4\_{Biden}} \&\= p\_{4\_{Trump}} \= p\_{4\_{Other}},\\\\
p\_{5\_{Biden}} \&\= p\_{5\_{Trump}} \= p\_{5\_{Other}},\\\\
p\_{6\_{Biden}} \&\= p\_{6\_{Trump}} \= p\_{6\_{Other}}
\\end{align\*}\\]
where \\(p\_{i\_{Biden}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Joseph Biden, \\(p\_{i\_{Trump}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Donald Trump, and \\(p\_{i\_{Other}}\\) is the observed proportion of each age group (\\(i\\)) that voted for another candidate.
* \\(H\_A:\\) at least one category of \\(p\_{i\_{Biden}}\\) does not match \\(p\_{i\_{Trump}}\\) or \\(p\_{i\_{Other}}\\)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex3 <- anes_des %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
svychisq(
formula = ~ AgeGroup + VotedPres2020_selection,
design = .,
statistic = "Chisq",
na.rm = TRUE
)
chi_ex3
```
```
##
## Pearson's X^2: Rao & Scott adjustment
##
## data: NextMethod()
## X-squared = 171, df = 10, p-value <2e-16
```
The output from `svychisq()` indicates a difference in how each age group voted in the 2020 election. To get a better idea of the different distributions, let’s output proportions to see the relationship. As we learned in Example 2 above, we can use `chi_ex3$observed`, or if we want to get the variance information (which is crucial with survey data), we can use `survey_mean()`. Remember, when we have two variables in `group_by()`, we obtain the proportions within each level of the first variable listed. In this case, we are looking at the distribution of `AgeGroup` for each level of `VotedPres2020_selection`.
```
chi_ex3_obs <- anes_des %>%
filter(VotedPres2020 == "Yes") %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
group_by(VotedPres2020_selection, AgeGroup) %>%
summarize(Observed = round(survey_mean(vartype = "ci"), 3))
chi_ex3_obs_table <- chi_ex3_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(AgeGroup, VotedPres2020_selection, prop) %>%
pivot_wider(
names_from = VotedPres2020_selection,
values_from = prop
) %>%
gt(rowname_col = "AgeGroup") %>%
tab_stubhead(label = "Age Group")
```
```
chi_ex3_obs_table
```
TABLE 6\.6: Distribution of age group by presidential candidate selection with confidence intervals
| Age Group | Biden | Trump | Other |
| --- | --- | --- | --- |
| 18\-29 | 0\.203 (0\.177, 0\.229\) | 0\.113 (0\.095, 0\.132\) | 0\.221 (0\.144, 0\.298\) |
| 30\-39 | 0\.168 (0\.152, 0\.184\) | 0\.146 (0\.125, 0\.168\) | 0\.302 (0\.21, 0\.394\) |
| 40\-49 | 0\.163 (0\.146, 0\.18\) | 0\.157 (0\.137, 0\.177\) | 0\.21 (0\.13, 0\.29\) |
| 50\-59 | 0\.152 (0\.135, 0\.17\) | 0\.229 (0\.202, 0\.256\) | 0\.104 (0\.04, 0\.168\) |
| 60\-69 | 0\.177 (0\.159, 0\.196\) | 0\.193 (0\.173, 0\.213\) | 0\.103 (0\.025, 0\.182\) |
| 70 or older | 0\.136 (0\.123, 0\.149\) | 0\.161 (0\.143, 0\.179\) | 0\.06 (0\.01, 0\.109\) |
In Table [6\.6](c06-statistical-testing.html#tab:stattest-chi-ex3-tab), we can see that the age distribution of those who voted for Biden or another candidate was younger than that of those who voted for Trump. For example, 20\.3% of those who voted for Biden were in the 18–29 age group, compared to only 11\.3% of those who voted for Trump. Conversely, 22\.9% of those who voted for Trump were in the 50–59 age group, compared to only 15\.2% of those who voted for Biden.
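If we instead wanted the distribution of candidate choice within each age group, we could swap the order of the variables in `group_by()`; a sketch of that alternative summary, using the same design object and variables, is below:
```
# Proportions of candidate choice within each age group (sketch)
anes_des %>%
  filter(VotedPres2020 == "Yes") %>%
  drop_na(VotedPres2020_selection, AgeGroup) %>%
  group_by(AgeGroup, VotedPres2020_selection) %>%
  summarize(Observed = round(survey_mean(vartype = "ci"), 3))
```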
\\\[\\begin{align\*}
H\_0: p\_{1\_{Biden}} \&\= p\_{1\_{Trump}} \= p\_{1\_{Other}},\\\\
p\_{2\_{Biden}} \&\= p\_{2\_{Trump}} \= p\_{2\_{Other}},\\\\
p\_{3\_{Biden}} \&\= p\_{3\_{Trump}} \= p\_{3\_{Other}},\\\\
p\_{4\_{Biden}} \&\= p\_{4\_{Trump}} \= p\_{4\_{Other}},\\\\
p\_{5\_{Biden}} \&\= p\_{5\_{Trump}} \= p\_{5\_{Other}},\\\\
p\_{6\_{Biden}} \&\= p\_{6\_{Trump}} \= p\_{6\_{Other}}
\\end{align\*}\\]
where \\(p\_{i\_{Biden}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Joseph Biden, \\(p\_{i\_{Trump}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Donald Trump, and \\(p\_{i\_{Other}}\\) is the observed proportion of each age group (\\(i\\)) that voted for another candidate.
* \\(H\_A:\\) at least one category of \\(p\_{i\_{Biden}}\\) does not match \\(p\_{i\_{Trump}}\\) or \\(p\_{i\_{Other}}\\)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex3 <- anes_des %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
svychisq(
formula = ~ AgeGroup + VotedPres2020_selection,
design = .,
statistic = "Chisq",
na.rm = TRUE
)
chi_ex3
```
```
##
## Pearson's X^2: Rao & Scott adjustment
##
## data: NextMethod()
## X-squared = 171, df = 10, p-value <2e-16
```
The output from `svychisq()` indicates a difference in how each age group voted in the 2020 election. To get a better idea of the different distributions, let’s output proportions to see the relationship. As we learned in Example 2 above, we can use `chi_ex3$observed`, or if we want to get the variance information (which is crucial with survey data), we can use `survey_mean()`. Remember, when we have two variables in `group_by()`, we obtain the proportions within each level of the variable listed. In this case, we are looking at the distribution of `AgeGroup` for each level of `VotedPres2020_selection`.
```
chi_ex3_obs <- anes_des %>%
filter(VotedPres2020 == "Yes") %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
group_by(VotedPres2020_selection, AgeGroup) %>%
summarize(Observed = round(survey_mean(vartype = "ci"), 3))
chi_ex3_obs_table <- chi_ex3_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(AgeGroup, VotedPres2020_selection, prop) %>%
pivot_wider(
names_from = VotedPres2020_selection,
values_from = prop
) %>%
gt(rowname_col = "AgeGroup") %>%
tab_stubhead(label = "Age Group")
```
```
chi_ex3_obs_table
```
TABLE 6\.6: Distribution of age group by presidential candidate selection with confidence intervals
| Age Group | Biden | Trump | Other |
| --- | --- | --- | --- |
| 18\-29 | 0\.203 (0\.177, 0\.229\) | 0\.113 (0\.095, 0\.132\) | 0\.221 (0\.144, 0\.298\) |
| 30\-39 | 0\.168 (0\.152, 0\.184\) | 0\.146 (0\.125, 0\.168\) | 0\.302 (0\.21, 0\.394\) |
| 40\-49 | 0\.163 (0\.146, 0\.18\) | 0\.157 (0\.137, 0\.177\) | 0\.21 (0\.13, 0\.29\) |
| 50\-59 | 0\.152 (0\.135, 0\.17\) | 0\.229 (0\.202, 0\.256\) | 0\.104 (0\.04, 0\.168\) |
| 60\-69 | 0\.177 (0\.159, 0\.196\) | 0\.193 (0\.173, 0\.213\) | 0\.103 (0\.025, 0\.182\) |
| 70 or older | 0\.136 (0\.123, 0\.149\) | 0\.161 (0\.143, 0\.179\) | 0\.06 (0\.01, 0\.109\) |
In Table [6\.6](c06-statistical-testing.html#tab:stattest-chi-ex3-tab) we can see that the age group distribution that voted for Biden and other candidates was younger than those that voted for Trump. For example, of those who voted for Biden, 20\.4% were in the 18–29 age group, compared to only 11\.4% of those who voted for Trump were in that age group. Conversely, 23\.4% of those who voted for Trump were in the 50–59 age group compared to only 15\.4% of those who voted for Biden.
#### Example 1: Goodness\-of\-fit test
ANES asked respondents about their highest education level[19](#fn19). Based on the data from the 2020 American Community Survey (ACS) 5\-year estimates[20](#fn20), the education distribution of those aged 18\+ in the United States (among the 50 states and the District of Columbia) is as follows:
* 11% had less than a high school degree
* 27% had a high school degree
* 29% had some college or an associate’s degree
* 33% had a bachelor’s degree or higher
If we want to see if the weighted distribution from the ANES 2020 data matches this distribution, we could set up the hypothesis as follows:
* \\(H\_0: p\_1 \= 0\.11, \~ p\_2 \= 0\.27, \~ p\_3 \= 0\.29, \~ p\_4 \= 0\.33\\)
* \\(H\_A:\\) at least one of the education levels does not match between the ANES and the ACS
To conduct this in R, let’s first look at the education variable (`Education`) we have on the ANES data. Using the `survey_mean()` function discussed in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis), we can see the education levels and estimated proportions.
```
anes_des %>%
drop_na(Education) %>%
group_by(Education) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 5 × 3
## Education p p_se
## <fct> <dbl> <dbl>
## 1 Less than HS 0.0805 0.00568
## 2 High school 0.277 0.0102
## 3 Post HS 0.290 0.00713
## 4 Bachelor's 0.226 0.00633
## 5 Graduate 0.126 0.00499
```
Based on this output, we can see that we have different levels from the ACS data. Specifically, the education data from ANES include two levels for bachelor’s degree or higher (bachelor’s and graduate), so these two categories need to be collapsed into a single category to match the ACS data. For this, among other methods, we can use the {forcats} package from the tidyverse ([Wickham 2023](#ref-R-forcats)). The package’s `fct_collapse()` function helps us create a new variable by collapsing categories into a single one. Then, we use the `svygofchisq()` function to compare the ANES data to the ACS data, specifying the updated design object, the formula using the collapsed education variable, the ACS estimates for the education levels as `p`, and `na.rm = TRUE` to remove `NA` values.
```
anes_des_educ <- anes_des %>%
mutate(
Education2 =
fct_collapse(Education,
"Bachelor or Higher" = c(
"Bachelor's",
"Graduate"
)
)
)
anes_des_educ %>%
drop_na(Education2) %>%
group_by(Education2) %>%
summarize(p = survey_mean())
```
```
## # A tibble: 4 × 3
## Education2 p p_se
## <fct> <dbl> <dbl>
## 1 Less than HS 0.0805 0.00568
## 2 High school 0.277 0.0102
## 3 Post HS 0.290 0.00713
## 4 Bachelor or Higher 0.352 0.00732
```
```
chi_ex1 <- anes_des_educ %>%
svygofchisq(
formula = ~Education2,
p = c(0.11, 0.27, 0.29, 0.33),
design = .,
na.rm = TRUE
)
chi_ex1
```
```
##
## Design-based chi-squared test for given probabilities
##
## data: ~Education2
## X-squared = 2172220, scale = 1.1e+05, df = 2.3e+00, p-value =
## 9e-05
```
The output from `svygofchisq()` indicates that at least one proportion from ANES does not match the ACS data (\\(\\chi^2 \=\\) 2,172,220; p\-value is \<0\.0001\). To get a better idea of the differences, we can use the `expected` output along with `survey_mean()` to create a comparison table:
```
ex1_table <- anes_des_educ %>%
drop_na(Education2) %>%
group_by(Education2) %>%
summarize(Observed = survey_mean(vartype = "ci")) %>%
rename(Education = Education2) %>%
mutate(Expected = c(0.11, 0.27, 0.29, 0.33)) %>%
select(Education, Expected, everything())
ex1_table
```
```
## # A tibble: 4 × 5
## Education Expected Observed Observed_low Observed_upp
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Less than HS 0.11 0.0805 0.0691 0.0919
## 2 High school 0.27 0.277 0.257 0.298
## 3 Post HS 0.29 0.290 0.276 0.305
## 4 Bachelor or Higher 0.33 0.352 0.337 0.367
```
This output includes the expected proportions from the ACS that we provided to the `svygofchisq()` function, along with the observed proportions and their confidence intervals. The table shows that the “high school” and “post HS” categories have nearly identical expected and observed proportions, but the other two categories differ slightly. Looking at the confidence intervals, we can see that the ANES data skew toward fewer people in the “less than HS” category and more people in the “bachelor or higher” category. This may be easier to see in a plot. The code below uses the tabular output to create Figure [6\.1](c06-statistical-testing.html#fig:stattest-chi-ex1-graph).
```
ex1_table %>%
pivot_longer(
cols = c("Expected", "Observed"),
names_to = "Names",
values_to = "Proportion"
) %>%
mutate(
Observed_low = if_else(Names == "Observed", Observed_low, NA_real_),
Observed_upp = if_else(Names == "Observed", Observed_upp, NA_real_),
Names = if_else(Names == "Observed",
"ANES (observed)", "ACS (expected)"
)
) %>%
ggplot(aes(x = Education, y = Proportion, color = Names)) +
geom_point(alpha = 0.75, size = 2) +
geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp),
width = 0.25
) +
theme_bw() +
scale_color_manual(name = "Type", values = book_colors[c(4, 1)]) +
theme(legend.position = "bottom", legend.title = element_blank())
```
FIGURE 6\.1: Expected and observed proportions of education with confidence intervals
#### Example 2: Test of independence
ANES asked respondents two questions about trust:
* Question text: “How often can you trust the federal government to do what is right?” ([American National Election Studies 2021](#ref-anes-svy))
* Question text: “How often can you trust other people?” ([American National Election Studies 2021](#ref-anes-svy))
If we want to see if the distributions of these two questions are similar or not, we can conduct a test of independence. Here is how the hypothesis could be set up:
* \\(H\_0:\\) People’s trust in the federal government and their trust in other people are independent (i.e., not related)
* \\(H\_A:\\) People’s trust in the federal government and their trust in other people are not independent (i.e., they are related)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex2 <- anes_des %>%
svychisq(
formula = ~ TrustGovernment + TrustPeople,
design = .,
statistic = "Wald",
na.rm = TRUE
)
chi_ex2
```
```
##
## Design-based Wald test of association
##
## data: NextMethod()
## F = 21, ndf = 16, ddf = 51, p-value <2e-16
```
The output from `svychisq()` indicates that people’s trust in the federal government and their trust in other people are not independent, meaning that they are related. Let’s output the distributions in a table to see the relationship. The `observed` element of the test output provides a cross\-tabulation of the counts for each category:
```
chi_ex2$observed
```
```
## TrustPeople
## TrustGovernment Always Most of the time About half the time
## Always 16.470 25.009 31.848
## Most of the time 11.020 539.377 196.258
## About half the time 11.772 934.858 861.971
## Some of the time 17.007 1353.779 839.863
## Never 3.174 236.785 174.272
## TrustPeople
## TrustGovernment Some of the time Never
## Always 36.854 5.523
## Most of the time 206.556 27.184
## About half the time 428.871 65.024
## Some of the time 932.628 89.596
## Never 217.994 189.307
```
However, we often want to know about the proportions, not just the respondent counts from the survey. There are a couple of different ways that we can do this. The first is using the counts from `chi_ex2$observed` to calculate the proportion. We can then pivot the table to create a cross\-tabulation similar to the counts table above. Adding `group_by()` to the code means that we obtain the proportions within each variable level. In this case, we are looking at the distribution of `TrustGovernment` for each level of `TrustPeople`. The resulting table is shown in Table [6\.4](c06-statistical-testing.html#tab:stattest-chi-ex2-prop1-tab).
```
chi_ex2_table <- chi_ex2$observed %>%
as_tibble() %>%
group_by(TrustPeople) %>%
mutate(prop = round(n / sum(n), 3)) %>%
select(-n) %>%
pivot_wider(names_from = TrustPeople, values_from = prop) %>%
gt(rowname_col = "TrustGovernment") %>%
tab_stubhead(label = "Trust in Government") %>%
tab_spanner(
label = "Trust in People",
columns = everything()
) %>%
cols_label(
`Most of the time` = md("Most of<br />the time"),
`About half the time` = md("About half<br />the time"),
`Some of the time` = md("Some of<br />the time")
)
```
```
chi_ex2_table
```
TABLE 6\.4: Proportion of adults in the U.S. by levels of trust in people and government, ANES 2020
| Trust in Government | Trust in People | | | | |
| --- | --- | --- | --- | --- | --- |
| Always | Most ofthe time | About halfthe time | Some ofthe time | Never |
| Always | 0\.277 | 0\.008 | 0\.015 | 0\.020 | 0\.015 |
| Most of the time | 0\.185 | 0\.175 | 0\.093 | 0\.113 | 0\.072 |
| About half the time | 0\.198 | 0\.303 | 0\.410 | 0\.235 | 0\.173 |
| Some of the time | 0\.286 | 0\.438 | 0\.399 | 0\.512 | 0\.238 |
| Never | 0\.053 | 0\.077 | 0\.083 | 0\.120 | 0\.503 |
In Table [6\.4](c06-statistical-testing.html#tab:stattest-chi-ex2-prop1-tab), each column sums to 1\. For example, of people who always trust other people, an estimated 27\.7% also always trust the government (the top\-left cell), while 5\.3% never trust the government.
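Note that which margin sums to 1 depends on the variable passed to `group_by()`. As a minimal sketch (reusing `chi_ex2` from above), grouping by `TrustGovernment` instead would give the proportions within each row, that is, the distribution of `TrustPeople` for each level of `TrustGovernment`:
```
chi_ex2$observed %>%
  as_tibble() %>%
  group_by(TrustGovernment) %>%
  mutate(prop = round(n / sum(n), 3)) %>%
  select(-n) %>%
  pivot_wider(names_from = TrustPeople, values_from = prop)
```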
The second option is to use the `group_by()` and `survey_mean()` functions to calculate the proportions from the ANES design object. Remember that with more than one variable listed in the `group_by()` statement, the proportions are within the first variable listed. As mentioned above, we are looking at the distribution of `TrustGovernment` for each level of `TrustPeople`.
```
chi_ex2_obs <- anes_des %>%
drop_na(TrustPeople, TrustGovernment) %>%
group_by(TrustPeople, TrustGovernment) %>%
summarize(
Observed = round(survey_mean(vartype = "ci"), 3),
.groups = "drop"
)
chi_ex2_obs_table <- chi_ex2_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(TrustGovernment, TrustPeople, prop) %>%
pivot_wider(names_from = TrustPeople, values_from = prop) %>%
gt(rowname_col = "TrustGovernment") %>%
tab_stubhead(label = "Trust in Government") %>%
tab_spanner(
label = "Trust in People",
columns = everything()
) %>%
tab_options(page.orientation = "landscape")
```
```
chi_ex2_obs_table
```
TABLE 6\.5: Proportion of adults in the U.S. by levels of trust in people and government with confidence intervals, ANES 2020
| Trust in Government | Trust in People | | | | |
| --- | --- | --- | --- | --- | --- |
| Always | Most of the time | About half the time | Some of the time | Never |
| Always | 0\.277 (0\.11, 0\.444\) | 0\.008 (0\.004, 0\.012\) | 0\.015 (0\.006, 0\.024\) | 0\.02 (0\.008, 0\.033\) | 0\.015 (0, 0\.029\) |
| Most of the time | 0\.185 (\-0\.009, 0\.38\) | 0\.175 (0\.157, 0\.192\) | 0\.093 (0\.078, 0\.109\) | 0\.113 (0\.085, 0\.141\) | 0\.072 (0\.021, 0\.123\) |
| About half the time | 0\.198 (0\.046, 0\.35\) | 0\.303 (0\.281, 0\.324\) | 0\.41 (0\.378, 0\.441\) | 0\.235 (0\.2, 0\.271\) | 0\.173 (0\.099, 0\.246\) |
| Some of the time | 0\.286 (0\.069, 0\.503\) | 0\.438 (0\.415, 0\.462\) | 0\.399 (0\.365, 0\.433\) | 0\.512 (0\.481, 0\.543\) | 0\.238 (0\.178, 0\.298\) |
| Never | 0\.053 (\-0\.01, 0\.117\) | 0\.077 (0\.064, 0\.089\) | 0\.083 (0\.063, 0\.103\) | 0\.12 (0\.097, 0\.142\) | 0\.503 (0\.422, 0\.583\) |
Both methods produce the same estimated proportions. However, calculating the proportions directly from the design object allows us to obtain the variance information. In this case, the output in Table [6\.5](c06-statistical-testing.html#tab:stattest-chi-ex2-prop2-tab) displays the survey estimate followed by the confidence interval. Based on the output, we can see that of those who never trust people, 50\.3% also never trust the government, while the proportion who never trust the government is much lower at each of the other levels of trust in people.
We may find it easier to look at these proportions graphically. We can use `ggplot()` and facets to provide an overview to create Figure [6\.2](c06-statistical-testing.html#fig:stattest-chi-ex2-graph) below:
```
chi_ex2_obs %>%
mutate(
TrustPeople =
fct_reorder(
str_c("Trust in People:\n", TrustPeople),
order(TrustPeople)
)
) %>%
ggplot(
aes(x = TrustGovernment, y = Observed, color = TrustGovernment)
) +
facet_wrap(~TrustPeople, ncol = 5) +
geom_point() +
geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp)) +
ylab("Proportion") +
xlab("") +
theme_bw() +
scale_color_manual(
name = "Trust in Government",
values = book_colors
) +
theme(
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
legend.position = "bottom"
) +
guides(col = guide_legend(nrow = 2))
```
FIGURE 6\.2: Proportion of adults in the U.S. by levels of trust in people and government with confidence intervals, ANES 2020
#### Example 3: Test of homogeneity
Researchers and politicians often look at specific demographics each election cycle to understand how each group is leaning or voting toward candidates. The ANES data are collected post\-election, but we can still see if there are differences in how specific demographic groups voted.
If we want to see if there is a difference in how each age group voted for the 2020 candidates, this would be a test of homogeneity, and we can set up the hypothesis as follows:
\\\[\\begin{align\*}
H\_0: p\_{1\_{Biden}} \&\= p\_{1\_{Trump}} \= p\_{1\_{Other}},\\\\
p\_{2\_{Biden}} \&\= p\_{2\_{Trump}} \= p\_{2\_{Other}},\\\\
p\_{3\_{Biden}} \&\= p\_{3\_{Trump}} \= p\_{3\_{Other}},\\\\
p\_{4\_{Biden}} \&\= p\_{4\_{Trump}} \= p\_{4\_{Other}},\\\\
p\_{5\_{Biden}} \&\= p\_{5\_{Trump}} \= p\_{5\_{Other}},\\\\
p\_{6\_{Biden}} \&\= p\_{6\_{Trump}} \= p\_{6\_{Other}}
\\end{align\*}\\]
where \\(p\_{i\_{Biden}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Joseph Biden, \\(p\_{i\_{Trump}}\\) is the observed proportion of each age group (\\(i\\)) that voted for Donald Trump, and \\(p\_{i\_{Other}}\\) is the observed proportion of each age group (\\(i\\)) that voted for another candidate.
* \\(H\_A:\\) at least one category of \\(p\_{i\_{Biden}}\\) does not match \\(p\_{i\_{Trump}}\\) or \\(p\_{i\_{Other}}\\)
To conduct this in R, we use the `svychisq()` function to compare the two variables:
```
chi_ex3 <- anes_des %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
svychisq(
formula = ~ AgeGroup + VotedPres2020_selection,
design = .,
statistic = "Chisq",
na.rm = TRUE
)
chi_ex3
```
```
##
## Pearson's X^2: Rao & Scott adjustment
##
## data: NextMethod()
## X-squared = 171, df = 10, p-value <2e-16
```
The output from `svychisq()` indicates a difference in how each age group voted in the 2020 election. To get a better idea of the different distributions, let’s output proportions to see the relationship. As we learned in Example 2 above, we can use `chi_ex3$observed`, or if we want to get the variance information (which is crucial with survey data), we can use `survey_mean()`. Remember, when we have two variables in `group_by()`, we obtain the proportions within each level of the first variable listed. In this case, we are looking at the distribution of `AgeGroup` for each level of `VotedPres2020_selection`.
```
chi_ex3_obs <- anes_des %>%
filter(VotedPres2020 == "Yes") %>%
drop_na(VotedPres2020_selection, AgeGroup) %>%
group_by(VotedPres2020_selection, AgeGroup) %>%
summarize(Observed = round(survey_mean(vartype = "ci"), 3))
chi_ex3_obs_table <- chi_ex3_obs %>%
mutate(prop = paste0(
Observed, " (", Observed_low, ", ",
Observed_upp, ")"
)) %>%
select(AgeGroup, VotedPres2020_selection, prop) %>%
pivot_wider(
names_from = VotedPres2020_selection,
values_from = prop
) %>%
gt(rowname_col = "AgeGroup") %>%
tab_stubhead(label = "Age Group")
```
```
chi_ex3_obs_table
```
TABLE 6\.6: Distribution of age group by presidential candidate selection with confidence intervals
| Age Group | Biden | Trump | Other |
| --- | --- | --- | --- |
| 18\-29 | 0\.203 (0\.177, 0\.229\) | 0\.113 (0\.095, 0\.132\) | 0\.221 (0\.144, 0\.298\) |
| 30\-39 | 0\.168 (0\.152, 0\.184\) | 0\.146 (0\.125, 0\.168\) | 0\.302 (0\.21, 0\.394\) |
| 40\-49 | 0\.163 (0\.146, 0\.18\) | 0\.157 (0\.137, 0\.177\) | 0\.21 (0\.13, 0\.29\) |
| 50\-59 | 0\.152 (0\.135, 0\.17\) | 0\.229 (0\.202, 0\.256\) | 0\.104 (0\.04, 0\.168\) |
| 60\-69 | 0\.177 (0\.159, 0\.196\) | 0\.193 (0\.173, 0\.213\) | 0\.103 (0\.025, 0\.182\) |
| 70 or older | 0\.136 (0\.123, 0\.149\) | 0\.161 (0\.143, 0\.179\) | 0\.06 (0\.01, 0\.109\) |
In Table [6\.6](c06-statistical-testing.html#tab:stattest-chi-ex3-tab), we can see that the age distribution of those who voted for Biden or other candidates was younger than that of those who voted for Trump. For example, of those who voted for Biden, 20\.3% were in the 18–29 age group, compared to only 11\.3% of those who voted for Trump. Conversely, 22\.9% of those who voted for Trump were in the 50–59 age group, compared to only 15\.2% of those who voted for Biden.
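As with Example 2, these differences may be easier to see graphically. The sketch below (not part of the original output) reuses `chi_ex3_obs` to plot the age distribution within each candidate group along with the confidence intervals:
```
chi_ex3_obs %>%
  ggplot(aes(x = AgeGroup, y = Observed)) +
  geom_point() +
  geom_errorbar(aes(ymin = Observed_low, ymax = Observed_upp),
    width = 0.2
  ) +
  facet_wrap(~VotedPres2020_selection) +
  coord_flip() +
  xlab("") +
  ylab("Proportion") +
  theme_bw()
```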
6\.5 Exercises
--------------
The exercises use the design objects `anes_des` and `recs_des` as provided in the Prerequisites box at the [beginning of the chapter](c06-statistical-testing.html#c06-statistical-testing). Here are some exercises for practicing t\-tests with `svyttest()` and chi\-squared tests with `svygofchisq()` and `svychisq()`:
1. Using the RECS data, do more than 50% of U.S. households use A/C (`ACUsed`)?
2. Using the RECS data, does the average temperature at which U.S. households set their thermostats differ between the day and night in the winter (`WinterTempDay` and `WinterTempNight`)?
3. Using the ANES data, does the average age (`Age`) of those who voted for Joseph Biden in 2020 (`VotedPres2020_selection`) differ from those who voted for another candidate?
4. If we wanted to determine if the political party affiliation differed for males and females, what test would we use?
1. Goodness\-of\-fit test (`svygofchisq()`)
2. Test of independence (`svychisq()`)
3. Test of homogeneity (`svychisq()`)
5. In the RECS data, is there a relationship between the type of housing unit (`HousingUnitType`) and the year the house was built (`YearMade`)?
6. In the ANES data, is there a difference in the distribution of gender (`Gender`) across early voting status in 2020 (`EarlyVote2020`)?
| Social Science |
tidy-survey-r.github.io | https://tidy-survey-r.github.io/tidy-survey-book/c07-modeling.html |
Chapter 7 Modeling
==================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
library(broom)
library(gt)
library(prettyunits)
```
We are using data from ANES and RECS described in Chapter [4](c04-getting-started.html#c04-getting-started). As a reminder, here is the code to create the design objects for each to use throughout this chapter. For ANES, we need to adjust the weight so it sums to the population instead of the sample (see the ANES documentation and Chapter [4](c04-getting-started.html#c04-getting-started) for more information).
```
targetpop <- 231592693
anes_adjwgt <- anes_2020 %>%
mutate(Weight = Weight / sum(Weight) * targetpop)
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = Stratum,
ids = VarUnit,
nest = TRUE
)
```
For RECS, details are included in the RECS documentation and Chapters [4](c04-getting-started.html#c04-getting-started) and [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE
)
```
7\.1 Introduction
-----------------
Modeling data is a way for researchers to investigate the relationship between a single dependent variable and one or more independent variables. This builds upon the analyses conducted in Chapter [6](c06-statistical-testing.html#c06-statistical-testing), which looked at the relationships between just two variables. For example, in Example 3 in Section [6\.3\.2](c06-statistical-testing.html#stattest-ttest-examples), we investigated if there is a relationship between the electrical bill cost and whether or not the household used air conditioning (A/C). However, there are potentially other elements that could go into what the cost of electrical bills are in a household (e.g., outside temperature, desired internal temperature, types and number of appliances, etc.).
T\-tests only allow us to investigate the relationship of one independent variable at a time, but using models, we can look into multiple variables and even explore interactions between these variables. There are several types of models, but in this chapter, we cover Analysis of Variance (ANOVA) and linear regression models following common normal (Gaussian) and logit models. Jonas Kristoffer Lindeløv has an interesting [discussion](https://lindeloev.github.io/tests-as-linear/) of many statistical tests and models being equivalent to a linear model. For example, a one\-way ANOVA is a linear model with one categorical independent variable, and a two\-sample t\-test is an ANOVA where the independent variable has exactly two levels.
When modeling data, it is helpful to first create an equation that provides an overview of what we are modeling. The main structure of these models is as follows:
\\\[y\_i\=\\beta\_0 \+\\sum\_{i\=1}^p \\beta\_i x\_i \+ \\epsilon\_i\\]
where \\(y\_i\\) is the outcome, \\(\\beta\_0\\) is an intercept, \\(x\_1, \\cdots, x\_p\\) are the predictors with \\(\\beta\_1, \\cdots, \\beta\_p\\) as the associated coefficients, and \\(\\epsilon\_i\\) is the error. Not all models have all components. For example, some models may not include an intercept (\\(\\beta\_0\\)), may have interactions between different independent variables (\\(x\_i\\)), or may have different underlying structures for the dependent variable (\\(y\_i\\)). However, all linear models have the independent variables related to the dependent variable in a linear form.
To specify these models in R, the formulas are the same with both survey data and other data. The left side of the formula is the response/dependent variable, and the right side has the predictor/independent variable(s). There are many symbols used in R to specify the formula.
For example, a linear formula mathematically notated as
\\\[y\_i\=\\beta\_0\+\\beta\_1 x\_i\+\\epsilon\_i\\] would be specified in R as `y~x` where the intercept is not explicitly included. To fit a model with no intercept, that is,
\\\[y\_i\=\\beta\_1 x\_i\+\\epsilon\_i\\]
it can be specified in R as `y~x-1`. Formula notation details in R can be found in the help file for formula[21](#fn21). A quick overview of the common formula notation is in Table [7\.1](c07-modeling.html#tab:notation-common):
TABLE 7\.1: Common symbols in formula notation
| Symbol | Example | Meaning |
| --- | --- | --- |
| \+ | `+x` | include this variable |
| \- | `-x` | delete this variable |
| : | `x:z` | include the interaction between these variables |
| \* | `x*z` | include these variables and the interactions between them |
| `^n` | `(x+y+z)^3` | include these variables and all interactions up to n\-way |
| I | `I(x-z)` | as\-is: include a new variable that is calculated inside the parentheses (e.g., x\-z, x\*z, x/z are possible calculations that could be done) |
There are often multiple ways to specify the same formula. For example, consider the following equation using the `mtcars` dataset that is built into R:
\\\[mpg\_i\=\\beta\_0\+\\beta\_1cyl\_{i}\+\\beta\_2disp\_{i}\+\\beta\_3hp\_{i}\+\\beta\_4cyl\_{i}disp\_{i}\+\\beta\_5cyl\_{i}hp\_{i}\+\\beta\_6disp\_{i}hp\_{i}\+\\epsilon\_i\\]
This could be specified in R code as any of the following:
* `mpg ~ (cyl + disp + hp)^2`
* `mpg ~ cyl + disp + hp + cyl:disp + cyl:hp + disp:hp`
* `mpg ~ cyl*disp + cyl*hp + disp*hp`
In the above options, the ways the `:` and `*` notations are implemented are different. Using `:` only includes the interactions and not the main effects, while using `*` includes the main effects and all possible interactions. Table [7\.2](c07-modeling.html#tab:notation-diffs) provides an overview of the syntax and differences between the two notations.
TABLE 7\.2: Differences in formulas for `:` and `*` code syntax
| Symbol | Syntax | Formula |
| --- | --- | --- |
| : | `mpg ~ cyl:disp:hp` | \\\[ \\begin{aligned} mpg\_i \= \&\\beta\_0\+\\beta\_7cyl\_{i}disp\_{i}hp\_{i}\+\\epsilon\_i\\end{aligned}\\] |
| \* | `mpg ~ cyl*disp*hp` | \\\[ \\begin{aligned} mpg\_i\= \&\\beta\_0\+\\beta\_1cyl\_{i}\+\\beta\_2disp\_{i}\+\\beta\_3hp\_{i}\+\\\\\& \\beta\_4cyl\_{i}disp\_{i}\+\\beta\_5cyl\_{i}hp\_{i}\+\\beta\_6disp\_{i}hp\_{i}\+\\\\\&\\beta\_7cyl\_{i}disp\_{i}hp\_{i}\+\\epsilon\_i\\end{aligned}\\] |
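To double\-check how R expands a given formula before fitting anything, we can inspect the term labels produced by `terms()`. This is a quick base R sketch (not survey\-specific) using the variables from the `mtcars` example above:
```
# The `*` syntax expands to main effects plus all interactions
attr(terms(mpg ~ cyl * disp * hp), "term.labels")
# "cyl" "disp" "hp" "cyl:disp" "cyl:hp" "disp:hp" "cyl:disp:hp"

# The `:` syntax includes only the interaction term itself
attr(terms(mpg ~ cyl:disp:hp), "term.labels")
# "cyl:disp:hp"
```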
When using non\-survey data, such as experimental or observational data, researchers use the `glm()` function for linear models. With survey data, however, we use `svyglm()` from the {survey} package to ensure that we account for the survey design and weights in modeling[22](#fn22). This allows us to generalize a model to the population of interest and accounts for the fact that the observations in the survey data may not be independent. As discussed in Chapter [6](c06-statistical-testing.html#c06-statistical-testing), modeling survey data cannot be directly done in {srvyr}, but can be done in the {survey} package ([Lumley 2010](#ref-lumley2010complex)). In this chapter, we provide syntax and examples for linear models, including ANOVA, normal linear regression, and logistic regression. For details on other types of regression, including ordinal regression, log\-linear models, and survival analysis, refer to Lumley ([2010](#ref-lumley2010complex)). Lumley ([2010](#ref-lumley2010complex)) also discusses custom models such as a negative binomial or Poisson model in appendix E of his book.
7\.2 Analysis of variance
-------------------------
In ANOVA, we are testing whether the mean of an outcome is the same across two or more groups. Statistically, we set up this as follows:
* \\(H\_0: \\mu\_1 \= \\mu\_2\= \\dots \= \\mu\_k\\) where \\(\\mu\_i\\) is the mean outcome for group \\(i\\)
* \\(H\_A: \\text{At least one mean is different}\\)
An ANOVA test is also a linear model; we can re\-frame the problem using this framework as:
\\\[ y\_i\=\\sum\_{i\=1}^k \\mu\_i x\_i \+ \\epsilon\_i\\]
where \\(x\_i\\) is a group indicator for groups \\(1, \\cdots, k\\).
Some assumptions when using ANOVA on survey data include:
* The outcome variable is normally distributed within each group.
* The variances of the outcome variable between each group are approximately equal.
* We do NOT assume independence between the groups as with ANOVA on non\-survey data. The covariance is accounted for in the survey design.
### 7\.2\.1 Syntax
To perform this type of analysis in R, the general syntax is as follows:
```
des_obj %>%
svyglm(
formula = outcome ~ group,
design = .,
na.action = na.omit,
df.resid = NULL
)
```
The arguments are:
* `formula`: formula in the form of `outcome~group`. The group variable must be a factor or character.
* `design`: a `tbl_svy` object created by `as_survey`
* `na.action`: handling of missing data
* `df.resid`: degrees of freedom for Wald tests (optional); defaults to using `degf(design)-(g-1)` where \\(g\\) is the number of groups
The function `svyglm()` does not have the design as the first argument so the dot (`.`) notation is used to pass it with a pipe (see Chapter [6](c06-statistical-testing.html#c06-statistical-testing) for more details). The default for missing data is `na.omit`. This means that we are removing all records with any missing data in either predictors or outcomes from analyses. There are other options for handling missing data, and we recommend looking at the help documentation for `na.omit` (run `help(na.omit)` or `?na.omit`) for more information on options to use for `na.action`. For a discussion on how to handle missing data, see Chapter [11](c11-missing-data.html#c11-missing-data).
### 7\.2\.2 Example
Looking at an example helps us discuss the output and how to interpret the results. In RECS, respondents are asked what temperature they set their thermostat to during the evening when using A/C during the summer[23](#fn23). To analyze these data, we filter the respondents to only those using A/C (`ACUsed`)[24](#fn24). Then, if we want to see if there are regional differences, we can use `group_by()`. A descriptive analysis of the temperature at night (`SummerTempNight`) set by region and the sample sizes is displayed below.
```
recs_des %>%
filter(ACUsed) %>%
group_by(Region) %>%
summarize(
SMN = survey_mean(SummerTempNight, na.rm = TRUE),
n = unweighted(n()),
n_na = unweighted(sum(is.na(SummerTempNight)))
)
```
```
## # A tibble: 4 × 5
## Region SMN SMN_se n n_na
## <fct> <dbl> <dbl> <int> <int>
## 1 Northeast 69.7 0.103 3204 0
## 2 Midwest 71.0 0.0897 3619 0
## 3 South 71.8 0.0536 6065 0
## 4 West 72.5 0.129 3283 0
```
In the following code, we test whether this temperature varies by region by first using `svyglm()` to run the test and then using `broom::tidy()` to display the output. Note that the temperature setting is set to NA when the household does not use A/C, and since the default handling of NAs is `na.action=na.omit`, records that do not use A/C are not included in this regression.
```
anova_out <- recs_des %>%
svyglm(
design = .,
formula = SummerTempNight ~ Region
)
tidy(anova_out)
```
```
## # A tibble: 4 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 69.7 0.103 674. 3.69e-111
## 2 RegionMidwest 1.34 0.138 9.68 1.46e- 13
## 3 RegionSouth 2.05 0.128 16.0 1.36e- 22
## 4 RegionWest 2.80 0.177 15.9 2.27e- 22
```
In the output above, we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. In this output, the intercept represents the estimated mean for the reference region (Northeast). The other coefficients indicate the difference in temperature relative to the Northeast. For example, in the Midwest, temperatures are set, on average, 1\.34 degrees higher than in the Northeast during summer nights (p\-value is \<0\.0001\), and each of the other regions sets its thermostats at significantly higher temperatures than the Northeast.
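Because the intercept is the estimated mean for the reference region, the estimated mean for any other region can be recovered by adding its coefficient to the intercept. A minimal sketch using `coef()` on the fitted object (assuming `anova_out` from above):
```
# Estimated mean summer nighttime thermostat setting by region,
# reconstructed from the model coefficients
coefs <- coef(anova_out)
c(
  Northeast = unname(coefs["(Intercept)"]),
  Midwest = unname(coefs["(Intercept)"] + coefs["RegionMidwest"]),
  South = unname(coefs["(Intercept)"] + coefs["RegionSouth"]),
  West = unname(coefs["(Intercept)"] + coefs["RegionWest"])
)
```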
If we wanted to change the reference value, we would reorder the factor before modeling using the `relevel()` function from {stats} or using one of many factor ordering functions in {forcats} such as `fct_relevel()` or `fct_infreq()`. For example, if we wanted the reference level to be the Midwest region, we could use the following code with the results in Table [7\.3](c07-modeling.html#tab:model-anova-ex-tab). Note the usage of the `gt()` function on top of `tidy()` to print a nice\-looking output table ([Iannone et al. 2024](#ref-R-gt); [Robinson, Hayes, and Couch 2023](#ref-R-broom)) (see Chapter [8](c08-communicating-results.html#c08-communicating-results) for more information on the {gt} package).
```
anova_out_relevel <- recs_des %>%
mutate(Region = fct_relevel(Region, "Midwest", after = 0)) %>%
svyglm(
design = .,
formula = SummerTempNight ~ Region
)
```
```
tidy(anova_out_relevel) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.3: ANOVA output for estimates of thermostat temperature setting at night by region with Midwest as the reference region, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 71\.04 | 0\.09 | 791\.83 | \<0\.0001 |
| RegionNortheast | −1\.34 | 0\.14 | −9\.68 | \<0\.0001 |
| RegionSouth | 0\.71 | 0\.10 | 6\.91 | \<0\.0001 |
| RegionWest | 1\.47 | 0\.16 | 9\.17 | \<0\.0001 |
This output now has the coefficients indicating the difference in temperature relative to the Midwest region. For example, in the Northeast, temperatures are set, on average, 1\.34 degrees lower than in the Midwest during summer nights (p\-value is \<0\.0001\). This is the reverse of what we saw in the prior model, as we are still comparing the same two regions, just from a different reference point. In contrast, the South and West set their thermostats at significantly higher temperatures than the Midwest (0\.71 and 1\.47 degrees higher, respectively).
7\.3 Normal linear regression
-----------------------------
Normal linear regression is a more generalized method than ANOVA, where we fit a model of a continuous outcome with any number of categorical or continuous predictors (whereas ANOVA only has categorical predictors) and is similarly specified as:
\\\[\\begin{equation}
y\_i\=\\beta\_0 \+\\sum\_{i\=1}^p \\beta\_i x\_i \+ \\epsilon\_i
\\end{equation}\\]
where \\(y\_i\\) is the outcome, \\(\\beta\_0\\) is an intercept, \\(x\_1, \\cdots, x\_p\\) are the predictors with \\(\\beta\_1, \\cdots, \\beta\_p\\) as the associated coefficients, and \\(\\epsilon\_i\\) is the error.
Assumptions in normal linear regression using survey data include:
* The residuals (\\(\\epsilon\_i\\)) are normally distributed, but there is not an assumption of independence, and the correlation structure is captured in the survey design object
* There is a linear relationship between the outcome variable and the independent variables
* The residuals are homoscedastic; that is, the error term is the same across all values of independent variables
### 7\.3\.1 Syntax
The syntax for this regression uses the same function as ANOVA but can have more than one variable listed on the right\-hand side of the formula:
```
des_obj %>%
svyglm(
formula = outcomevar ~ x1 + x2 + x3,
design = .,
na.action = na.omit,
df.resid = NULL
)
```
The arguments are:
* `formula`: formula in the form of `y~x`
* `design`: a `tbl_svy` object created by `as_survey`
* `na.action`: handling of missing data
* `df.resid`: degrees of freedom for Wald tests (optional); defaults to using `degf(design)-p` where \\(p\\) is the rank of the design matrix
As discussed in Section [7\.1](c07-modeling.html#model-intro), the formula on the right\-hand side can be specified in many ways, for example, denoting whether or not interactions are desired.
### 7\.3\.2 Examples
#### Example 1: Linear regression with a single variable
On RECS, we can obtain information on the square footage of homes[25](#fn25) and the electric bills. We assume that square footage is related to the amount of money spent on electricity and examine a model for this. Before any modeling, we first plot the data to determine whether it is reasonable to assume a linear relationship. In Figure [7\.1](c07-modeling.html#fig:model-plot-sf-elbill), each hexagon represents the weighted count of households in the bin, and we can see a general positive linear trend (as the square footage increases, so does the amount of money spent on electricity).
```
recs_2020 %>%
ggplot(aes(
x = TOTSQFT_EN,
y = DOLLAREL,
weight = NWEIGHT / 1000000
)) +
geom_hex() +
scale_fill_gradientn(
guide = "colorbar",
name = "Housing Units\n(Millions)",
labels = scales::comma,
colors = book_colors[c(3, 2, 1)]
) +
xlab("Total square footage") +
ylab("Amount spent on electricity") +
scale_y_continuous(labels = scales::dollar_format()) +
scale_x_continuous(labels = scales::comma_format()) +
theme_minimal()
```
FIGURE 7\.1: Relationship between square footage and dollars spent on electricity, RECS 2020
Given that the plot shows a potentially increasing relationship between square footage and electricity expenditure, fitting a model allows us to determine if the relationship is statistically significant. The model is fit below with electricity expenditure as the outcome.
```
m_electric_sqft <- recs_des %>%
svyglm(
design = .,
formula = DOLLAREL ~ TOTSQFT_EN,
na.action = na.omit
)
```
```
tidy(m_electric_sqft) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.4: Linear regression output predicting electricity expenditure given square footage, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 836\.72 | 12\.77 | 65\.51 | \<0\.0001 |
| TOTSQFT\_EN | 0\.30 | 0\.01 | 41\.67 | \<0\.0001 |
In Table [7\.4](c07-modeling.html#tab:model-slr-examp-tab), we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. In these results, we can say that, on average, for every additional square foot of house size, the electricity bill increases by 30 cents, and that square footage is significantly associated with electricity expenditure (p\-value is \<0\.0001\).
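Confidence intervals for the coefficients can complement the p\-values. As a minimal sketch (assuming `m_electric_sqft` from above), `confint()` works directly on `svyglm` objects:
```
# 95% confidence intervals for the intercept and the square footage slope
confint(m_electric_sqft)
```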
This is a straightforward model, and there are likely many more factors related to electricity expenditure, including the type of cooling, number of appliances, location, and more. However, starting with one\-variable models can help analysts understand what potential relationships there are between variables before fitting more complex models. Often, we start with known relationships before building models to determine what impact additional variables have on the model.
#### Example 2: Linear regression with multiple variables and interactions
In the following example, a model is fit to predict electricity expenditure, including census region (factor/categorical), urbanicity (factor/categorical), square footage (double/numeric), and whether A/C is used (logical/categorical) with all two\-way interactions also included. In this example, we are choosing to fit this model without an intercept (using `-1` in the formula). This results in an intercept estimate for each region instead of a single intercept for all data.
```
m_electric_multi <- recs_des %>%
svyglm(
design = .,
formula =
DOLLAREL ~ (Region + Urbanicity + TOTSQFT_EN + ACUsed)^2 - 1,
na.action = na.omit
)
```
```
tidy(m_electric_multi) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.5: Linear regression output predicting electricity expenditure given region, urbanicity, square footage, A/C usage, and their two\-way interactions, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| RegionNortheast | 543\.73 | 56\.57 | 9\.61 | \<0\.0001 |
| RegionMidwest | 702\.16 | 78\.12 | 8\.99 | \<0\.0001 |
| RegionSouth | 938\.74 | 46\.99 | 19\.98 | \<0\.0001 |
| RegionWest | 603\.27 | 36\.31 | 16\.61 | \<0\.0001 |
| UrbanicityUrban Cluster | 73\.03 | 81\.50 | 0\.90 | 0\.3764 |
| UrbanicityRural | 204\.13 | 80\.69 | 2\.53 | 0\.0161 |
| TOTSQFT\_EN | 0\.24 | 0\.03 | 8\.65 | \<0\.0001 |
| ACUsedTRUE | 252\.06 | 54\.05 | 4\.66 | \<0\.0001 |
| RegionMidwest:UrbanicityUrban Cluster | 183\.06 | 82\.38 | 2\.22 | 0\.0328 |
| RegionSouth:UrbanicityUrban Cluster | 152\.56 | 76\.03 | 2\.01 | 0\.0526 |
| RegionWest:UrbanicityUrban Cluster | 98\.02 | 75\.16 | 1\.30 | 0\.2007 |
| RegionMidwest:UrbanicityRural | 312\.83 | 50\.88 | 6\.15 | \<0\.0001 |
| RegionSouth:UrbanicityRural | 220\.00 | 55\.00 | 4\.00 | 0\.0003 |
| RegionWest:UrbanicityRural | 180\.97 | 58\.70 | 3\.08 | 0\.0040 |
| RegionMidwest:TOTSQFT\_EN | −0\.05 | 0\.02 | −2\.09 | 0\.0441 |
| RegionSouth:TOTSQFT\_EN | 0\.00 | 0\.03 | 0\.11 | 0\.9109 |
| RegionWest:TOTSQFT\_EN | −0\.03 | 0\.03 | −1\.00 | 0\.3254 |
| RegionMidwest:ACUsedTRUE | −292\.97 | 60\.24 | −4\.86 | \<0\.0001 |
| RegionSouth:ACUsedTRUE | −294\.07 | 57\.44 | −5\.12 | \<0\.0001 |
| RegionWest:ACUsedTRUE | −77\.68 | 47\.05 | −1\.65 | 0\.1076 |
| UrbanicityUrban Cluster:TOTSQFT\_EN | −0\.04 | 0\.02 | −1\.63 | 0\.1112 |
| UrbanicityRural:TOTSQFT\_EN | −0\.06 | 0\.02 | −2\.60 | 0\.0137 |
| UrbanicityUrban Cluster:ACUsedTRUE | −130\.23 | 60\.30 | −2\.16 | 0\.0377 |
| UrbanicityRural:ACUsedTRUE | −33\.80 | 59\.30 | −0\.57 | 0\.5724 |
| TOTSQFT\_EN:ACUsedTRUE | 0\.08 | 0\.02 | 3\.48 | 0\.0014 |
As shown in Table [7\.5](c07-modeling.html#tab:model-lmr-examp-tab), there are many terms in this model. To test whether coefficients for a term are different from zero, the `regTermTest()` function can be used. For example, in the above regression, we can test whether the interaction of region and urbanicity is significant as follows:
```
urb_reg_test <- regTermTest(m_electric_multi, ~ Urbanicity:Region)
urb_reg_test
```
```
## Wald test for Urbanicity:Region
## in svyglm(design = ., formula = DOLLAREL ~ (Region + Urbanicity +
## TOTSQFT_EN + ACUsed)^2 - 1, na.action = na.omit)
## F = 6.851 on 6 and 35 df: p= 7.2e-05
```
This output indicates there is a significant interaction between urbanicity and region (p\-value is \<0\.0001\).
To examine the predictions, residuals, and other fit statistics from the model, the `augment()` function from {broom} can be used. It returns a tibble containing the independent and dependent variables along with fit statistics. Because `augment()` has not been written specifically for objects of class `svyglm`, a warning to that effect is displayed, and a little tweaking is needed after using it. To obtain the standard error of the predicted values (`.se.fit`), we use the `attr()` function on the predicted values (`.fitted`) created by `augment()`. Additionally, the predicted values are returned with a type of `svrep`, so if we want to plot them, we need to use `as.numeric()` to convert them to a numeric format. It is important to do this conversion after extracting the standard errors, as `as.numeric()` drops the variance attribute.
```
fitstats <-
augment(m_electric_multi) %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
)
fitstats
```
```
## # A tibble: 18,496 × 13
## DOLLAREL Region Urbanicity TOTSQFT_EN ACUsed `(weights)` .fitted
## <dbl> <fct> <fct> <dbl> <lgl> <dbl> <dbl>
## 1 1955. West Urban Area 2100 TRUE 0.492 1397.
## 2 713. South Urban Area 590 TRUE 1.35 1090.
## 3 335. West Urban Area 900 TRUE 0.849 1043.
## 4 1425. South Urban Area 2100 TRUE 0.793 1584.
## 5 1087 Northeast Urban Area 800 TRUE 1.49 1055.
## 6 1896. South Urban Area 4520 TRUE 1.09 2375.
## 7 1418. South Urban Area 2100 TRUE 0.851 1584.
## 8 1237. South Urban Clust… 900 FALSE 1.45 1349.
## 9 538. South Urban Area 750 TRUE 0.185 1142.
## 10 625. West Urban Area 760 TRUE 1.06 1002.
## # ℹ 18,486 more rows
## # ℹ 6 more variables: .resid <dbl>, .hat <dbl>, .sigma <dbl>,
## # .cooksd <dbl>, .std.resid <dbl>, .se.fit <dbl>
```
These results can then be used in a variety of ways, including examining residual plots as illustrated in the code below and Figure [7\.2](c07-modeling.html#fig:model-aug-examp-plot). In the residual plot, we look for any patterns in the data. If we do see patterns, this may indicate a violation of the homoscedasticity assumption, and the standard errors of the coefficients may be incorrect. In Figure [7\.2](c07-modeling.html#fig:model-aug-examp-plot), we do not see a strong pattern, indicating that the homoscedasticity assumption appears to hold.
```
fitstats %>%
ggplot(aes(x = .fitted, .resid)) +
geom_point(alpha = .1) +
geom_hline(yintercept = 0, color = "red") +
theme_minimal() +
xlab("Fitted value of electricity cost") +
ylab("Residual of model") +
scale_y_continuous(labels = scales::dollar_format()) +
scale_x_continuous(labels = scales::dollar_format())
```
FIGURE 7\.2: Residual plot of electric cost model with the following covariates: Region, Urbanicity, TOTSQFT\_EN, and ACUsed
Additionally, `augment()` can be used to predict outcomes for data not used in modeling. Perhaps we would like to predict the energy expenditure for a home in an urban area in the south that uses A/C and is 2,500 square feet. To do this, we first make a tibble including that additional data and then use the `newdata` argument in the `augment()` function. As before, to obtain the standard error of the predicted values, we need to use the `attr()` function.
```
add_data <- recs_2020 %>%
select(
DOEID, Region, Urbanicity,
TOTSQFT_EN, ACUsed,
DOLLAREL
) %>%
rbind(
tibble(
DOEID = NA,
Region = "South",
Urbanicity = "Urban Area",
TOTSQFT_EN = 2500,
ACUsed = TRUE,
DOLLAREL = NA
)
) %>%
tail(1)
pred_data <- augment(m_electric_multi, newdata = add_data) %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
)
pred_data
```
```
## # A tibble: 1 × 8
## DOEID Region Urbanicity TOTSQFT_EN ACUsed DOLLAREL .fitted .se.fit
## <dbl> <fct> <fct> <dbl> <lgl> <dbl> <dbl> <dbl>
## 1 NA South Urban Area 2500 TRUE NA 1715. 22.6
```
In the above example, it is predicted that the energy expenditure would be $1,715\.
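To put an interval around this prediction, the standard error extracted above can be combined with a normal critical value. A rough sketch, assuming `pred_data` from the previous chunk:
```
# Approximate 95% interval for the predicted electricity expenditure
pred_data %>%
  mutate(
    ci_lower = .fitted - 1.96 * .se.fit,
    ci_upper = .fitted + 1.96 * .se.fit
  ) %>%
  select(.fitted, .se.fit, ci_lower, ci_upper)
```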
7\.4 Logistic regression
------------------------
Logistic regression is used to model binary outcomes, such as whether or not someone voted. There are several instances where an outcome may not be originally binary but is collapsed into being binary. For example, given that gender is often asked in surveys with multiple response options and not a binary scale, many researchers now code gender in logistic modeling as “cis\-male” compared to not “cis\-male.” We could also convert a 4\-point Likert scale that has levels of “Strongly Agree,” “Agree,” “Disagree,” and “Strongly Disagree” to group the agreement levels into one group and disagreement levels into a second group.
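For instance, a 4\-point Likert item could be collapsed into a binary outcome with `fct_collapse()` from {forcats} (loaded with the tidyverse). The sketch below uses a made\-up vector rather than an actual survey variable:
```
# Hypothetical 4-point Likert responses
likert <- factor(
  c("Strongly Agree", "Agree", "Disagree", "Strongly Disagree", "Agree"),
  levels = c("Strongly Agree", "Agree", "Disagree", "Strongly Disagree")
)

# Collapse into a two-level factor suitable for logistic regression
fct_collapse(
  likert,
  "Agree" = c("Strongly Agree", "Agree"),
  "Disagree" = c("Disagree", "Strongly Disagree")
)
```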
Logistic regression is a specific case of the generalized linear model (GLM). A GLM uses a link function to link the response variable to the linear model. If we tried to use a normal linear regression with a binary outcome, many assumptions would not hold, namely, the response would not be continuous. Logistic regression allows us to link a linear model between the covariates and the propensity of an outcome. In logistic regression, the link model is the logit function. Specifically, the model is specified as follows:
\\\[ y\_i \\sim \\text{Bernoulli}(\\pi\_i)\\]
\\\[\\begin{equation}
\\log \\left(\\frac{\\pi\_i}{1\-\\pi\_i} \\right)\=\\beta\_0 \+\\sum\_{i\=1}^n \\beta\_i x\_i
\\end{equation}\\]
which can be re\-expressed as
\\\[ \\pi\_i\=\\frac{\\exp \\left(\\beta\_0 \+\\sum\_{i\=1}^n \\beta\_i x\_i \\right)}{1\+\\exp \\left(\\beta\_0 \+\\sum\_{i\=1}^n \\beta\_i x\_i \\right)}\\] where \\(y\_i\\) is the outcome, \\(\\beta\_0\\) is an intercept, and \\(x\_1, \\cdots, x\_n\\) are the predictors with \\(\\beta\_1, \\cdots, \\beta\_n\\) as the associated coefficients.
The Bernoulli distribution is a distribution with an outcome of 0 or 1 given some probability, \\(\\pi\_i\\) in this case, and we model \\(\\pi\_i\\) as a function of the covariates \\(x\_i\\) using the logit link.
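The logit and its inverse are available in base R as `qlogis()` and `plogis()`, which can help build intuition for the link function. A small numeric sketch:
```
# logit: probability -> log odds
qlogis(0.25)
# approximately -1.10, i.e., log(0.25 / 0.75)

# inverse logit: log odds -> probability
plogis(-1.10)
# approximately 0.25
```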
Assumptions in logistic regression using survey data include:
* The outcome variable has two levels
* There is a linear relationship between the independent variables and the log odds (the equation for the logit function)
* The residuals are homoscedastic; that is, the error term is the same across all values of independent variables
### 7\.4\.1 Syntax
The syntax for logistic regression is as follows:
```
des_obj %>%
svyglm(
formula = outcomevar ~ x1 + x2 + x3,
design = .,
na.action = na.omit,
df.resid = NULL,
family = quasibinomial
)
```
The arguments are:
* `formula`: Formula in the form of `y~x`
* `design`: a `tbl_svy` object created by `as_survey`
* `na.action`: handling of missing data
* `df.resid`: degrees of freedom for Wald tests (optional); defaults to using `degf(design)-p` where \\(p\\) is the rank of the design matrix
* `family`: the error distribution/link function to be used in the model
Note `svyglm()` is the same function used in both ANOVA and normal linear regression. However, we’ve added the family argument `quasibinomial`. While we can use the binomial family, it is recommended to use the quasibinomial, as our weights may not be integers, and the quasibinomial also allows for overdispersion ([Lumley 2010](#ref-lumley2010complex); [McCullagh and Nelder 1989](#ref-mccullagh1989binary); [R Core Team 2024](#ref-R-base)). The quasibinomial family has a default logit link, which is specified in the equations above. With survey data, the outcome variable is typically specified in one of three ways (a small sketch of all three follows this list):
* A two\-level factor variable where the first level of the factor indicates a “failure,” and the second level indicates a “success”
* A numeric variable which is 1 or 0 where 1 indicates a success
* A logical variable where TRUE indicates a success
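As a minimal sketch of the three equivalent codings (hypothetical values, not drawn from ANES):
```
responses <- c("no", "yes", "yes", "no")

# 1. Two-level factor: the first level is "failure," the second "success"
outcome_factor <- factor(responses, levels = c("no", "yes"))

# 2. Numeric 0/1, where 1 indicates a success
outcome_numeric <- as.numeric(responses == "yes")

# 3. Logical, where TRUE indicates a success
outcome_logical <- responses == "yes"
```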
### 7\.4\.2 Examples
#### Example 1: Logistic regression with single variable
In the following example, we use the ANES data to model whether someone usually has trust in the government[26](#fn26) based on whom they voted for president in 2020\. As a reminder, the leading candidates were Biden and Trump, though people could vote for someone else not in the Democratic or Republican parties. Those votes are all grouped into an “Other” category. We first create a binary outcome for trusting the government by collapsing “Always” and “Most of the time” into one factor level and the other response options (“About half the time,” “Some of the time,” and “Never”) into a second factor level. Because a scatter plot of the raw data is not useful when the outcomes are all 0s and 1s, we instead plot a summary of the data.
```
anes_des_der <- anes_des %>%
mutate(TrustGovernmentUsually = case_when(
is.na(TrustGovernment) ~ NA,
TRUE ~ TrustGovernment %in% c("Always", "Most of the time")
))
anes_des_der %>%
group_by(VotedPres2020_selection) %>%
summarize(
pct_trust = survey_mean(TrustGovernmentUsually,
na.rm = TRUE,
proportion = TRUE,
vartype = "ci"
),
.groups = "drop"
) %>%
filter(complete.cases(.)) %>%
ggplot(aes(
x = VotedPres2020_selection, y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity") +
geom_errorbar(aes(ymin = pct_trust_low, ymax = pct_trust_upp),
width = .2
) +
scale_fill_manual(values = c("#0b3954", "#bfd7ea", "#8d6b94")) +
xlab("Election choice (2020)") +
ylab("Usually trust the government") +
scale_y_continuous(labels = scales::percent) +
guides(fill = "none") +
theme_minimal()
```
FIGURE 7\.3: Relationship between candidate selection and trust in government, ANES 2020
Looking at Figure [7\.3](c07-modeling.html#fig:model-logisticexamp-plot), it appears that people who voted for Trump are more likely to say that they usually have trust in the government compared to those who voted for Biden and other candidates. To determine if this insight is accurate, we next fit the model.
```
logistic_trust_vote <- anes_des_der %>%
svyglm(
design = .,
formula = TrustGovernmentUsually ~ VotedPres2020_selection,
family = quasibinomial
)
```
```
tidy(logistic_trust_vote) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.6: Logistic regression output predicting trust in government by presidential candidate selection, ANES 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | −1\.96 | 0\.07 | −27\.45 | \<0\.0001 |
| VotedPres2020\_selectionTrump | 0\.43 | 0\.09 | 4\.72 | \<0\.0001 |
| VotedPres2020\_selectionOther | −0\.65 | 0\.44 | −1\.49 | 0\.1429 |
In Table [7\.6](c07-modeling.html#tab:model-logisticexamp-tab), we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. This output indicates that respondents who voted for Trump are more likely to usually have trust in the government compared to those who voted for Biden (the reference level). The coefficient of about 0\.43 represents the increase in the log odds of usually trusting the government.
In most cases, it is easier to talk about the odds instead of the log odds. To do this, we need to exponentiate the coefficients. We can use the same `tidy()` function but include the argument `exponentiate = TRUE` to see the odds.
```
tidy(logistic_trust_vote, exponentiate = TRUE) %>%
select(term, estimate) %>%
gt() %>%
fmt_number()
```
TABLE 7\.7: Logistic regression predicting trust in government by presidential candidate selection with exponentiated coefficients (odds), ANES 2020
| term | estimate |
| --- | --- |
| (Intercept) | 0\.14 |
| VotedPres2020\_selectionTrump | 1\.54 |
| VotedPres2020\_selectionOther | 0\.52 |
Using the output in Table [7\.7](c07-modeling.html#tab:model-logisticexamp-model-odds-tab), we can interpret this as saying that the odds of usually trusting the government for someone who voted for Trump are 1\.54 times (154%) the odds for someone who voted for Biden (the reference level). In comparison, the odds for a person who voted for neither Biden nor Trump are about half (52%) the odds for someone who voted for Biden.
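For reference, the odds in Table [7\.7](c07-modeling.html#tab:model-logisticexamp-model-odds-tab) are simply the exponentiated coefficients from Table [7\.6](c07-modeling.html#tab:model-logisticexamp-tab). A quick sketch using the rounded coefficients (so the values only approximately match the tables) shows this, along with converting the intercept's odds back to the baseline probability for Biden voters:

```
exp(c(-1.96, 0.43, -0.65)) # approx. 0.14, 1.54, and 0.52, as in Table 7.7
0.14 / (1 + 0.14)          # baseline probability of usually trusting the
                           # government among Biden voters: approx. 0.12
```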
As with linear regression, the `augment()` function can be used to predict values. By default, the prediction is on the scale of the link function, not the probability scale. To predict the probability, add the argument `type.predict = "response"` as demonstrated below:
```
logistic_trust_vote %>%
augment(type.predict = "response") %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
) %>%
select(
TrustGovernmentUsually,
VotedPres2020_selection,
.fitted,
.se.fit
)
```
```
## # A tibble: 6,212 × 4
## TrustGovernmentUsually VotedPres2020_selection .fitted .se.fit
## <lgl> <fct> <dbl> <dbl>
## 1 FALSE Other 0.0681 0.0279
## 2 FALSE Biden 0.123 0.00772
## 3 FALSE Biden 0.123 0.00772
## 4 FALSE Trump 0.178 0.00919
## 5 FALSE Biden 0.123 0.00772
## 6 FALSE Trump 0.178 0.00919
## 7 FALSE Biden 0.123 0.00772
## 8 FALSE Biden 0.123 0.00772
## 9 TRUE Biden 0.123 0.00772
## 10 FALSE Biden 0.123 0.00772
## # ℹ 6,202 more rows
```
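As a quick check (again using the rounded coefficients, so the values match only approximately), the `.fitted` probabilities above are the inverse logit of the linear predictor for each voting group:

```
plogis(-1.96)        # Biden (reference level): approx. 0.123
plogis(-1.96 + 0.43) # Trump: approx. 0.178
plogis(-1.96 - 0.65) # Other: approx. 0.068
```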
#### Example 2: Interaction effects
Let’s look at another example with interaction effects. If we’re interested in understanding the demographics of people who voted for Biden among all voters in 2020, we could include the indicator of whether respondents voted early (`EarlyVote2020`) and their income group (`Income7`) in our model.
First, we need to subset the data to 2020 voters and then create an indicator for who voted for Biden.
```
anes_des_ind <- anes_des %>%
filter(!is.na(VotedPres2020_selection)) %>%
mutate(VoteBiden = case_when(
VotedPres2020_selection == "Biden" ~ 1,
TRUE ~ 0
))
```
Let’s first look at the main effects of income grouping and early voting behavior.
```
log_biden_main <- anes_des_ind %>%
mutate(
EarlyVote2020 = fct_relevel(EarlyVote2020, "No", after = 0)
) %>%
svyglm(
design = .,
formula = VoteBiden ~ EarlyVote2020 + Income7,
family = quasibinomial
)
```
```
tidy(log_biden_main) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.8: Logistic regression output for predicting voting for Biden given early voting behavior and income; main effects only, ANES 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 1\.28 | 0\.43 | 2\.99 | 0\.0047 |
| EarlyVote2020Yes | 0\.44 | 0\.34 | 1\.29 | 0\.2039 |
| Income7$20k to \< 40k | −1\.06 | 0\.49 | −2\.18 | 0\.0352 |
| Income7$40k to \< 60k | −0\.78 | 0\.42 | −1\.86 | 0\.0705 |
| Income7$60k to \< 80k | −1\.24 | 0\.70 | −1\.77 | 0\.0842 |
| Income7$80k to \< 100k | −0\.66 | 0\.64 | −1\.02 | 0\.3137 |
| Income7$100k to \< 125k | −1\.02 | 0\.54 | −1\.89 | 0\.0662 |
| Income7$125k or more | −1\.25 | 0\.44 | −2\.87 | 0\.0065 |
This main effect model (see Table [7\.8](c07-modeling.html#tab:model-logisticexamp-biden-main-tab)) indicates that people with incomes of $125,000 or more have a significant negative coefficient –1\.25 (p\-value is 0\.0065\). This indicates that people with incomes of $125,000 or more were less likely to vote for Biden in the 2020 election compared to people with incomes of $20,000 or less (reference level).
Although early voting behavior was not significant, there may be an interaction between income and early voting behavior. To determine this, we can create a model that includes the interaction effects:
```
log_biden_int <- anes_des_ind %>%
mutate(
EarlyVote2020 = fct_relevel(EarlyVote2020, "No", after = 0)
) %>%
svyglm(
design = .,
formula = VoteBiden ~ (EarlyVote2020 + Income7)^2,
family = quasibinomial
)
```
```
tidy(log_biden_int) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.9: Logistic regression output for predicting voting for Biden given early voting behavior and income; with interaction, ANES 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 2\.32 | 0\.67 | 3\.45 | 0\.0015 |
| EarlyVote2020Yes | −0\.81 | 0\.78 | −1\.03 | 0\.3081 |
| Income7$20k to \< 40k | −2\.33 | 0\.87 | −2\.68 | 0\.0113 |
| Income7$40k to \< 60k | −1\.67 | 0\.89 | −1\.87 | 0\.0700 |
| Income7$60k to \< 80k | −2\.05 | 1\.05 | −1\.96 | 0\.0580 |
| Income7$80k to \< 100k | −3\.42 | 1\.12 | −3\.06 | 0\.0043 |
| Income7$100k to \< 125k | −2\.33 | 1\.07 | −2\.17 | 0\.0368 |
| Income7$125k or more | −2\.09 | 0\.92 | −2\.28 | 0\.0289 |
| EarlyVote2020Yes:Income7$20k to \< 40k | 1\.60 | 0\.95 | 1\.69 | 0\.1006 |
| EarlyVote2020Yes:Income7$40k to \< 60k | 0\.99 | 1\.00 | 0\.99 | 0\.3289 |
| EarlyVote2020Yes:Income7$60k to \< 80k | 0\.90 | 1\.14 | 0\.79 | 0\.4373 |
| EarlyVote2020Yes:Income7$80k to \< 100k | 3\.22 | 1\.16 | 2\.78 | 0\.0087 |
| EarlyVote2020Yes:Income7$100k to \< 125k | 1\.64 | 1\.11 | 1\.48 | 0\.1492 |
| EarlyVote2020Yes:Income7$125k or more | 1\.00 | 1\.14 | 0\.88 | 0\.3867 |
The results from the interaction model (see Table [7\.9](c07-modeling.html#tab:model-logisticexamp-biden-int-tab)) show that one interaction between early voting behavior and income is significant. To better understand what this interaction means, we can plot the predicted probabilities with an interaction plot. Let’s first obtain the predicted probabilities for each possible combination of variables using the `augment()` function.
```
log_biden_pred <- log_biden_int %>%
augment(type.predict = "response") %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
) %>%
select(VoteBiden, EarlyVote2020, Income7, .fitted, .se.fit)
```
In an interaction plot, the y\-axis shows the predicted probabilities, one of our x\-variables is on the x\-axis, and the other is represented by multiple lines. Figure [7\.4](c07-modeling.html#fig:model-logisticexamp-biden-plot) shows the interaction plot with early voting behavior on the x\-axis and income represented by the lines.
```
log_biden_pred %>%
filter(VoteBiden == 1) %>%
distinct() %>%
arrange(EarlyVote2020, Income7) %>%
ggplot(aes(
x = EarlyVote2020,
y = .fitted,
group = Income7,
color = Income7,
linetype = Income7
)) +
geom_line(linewidth = 1.1) +
scale_color_manual(values = colorRampPalette(book_colors)(7)) +
ylab("Predicted Probability of Voting for Biden") +
labs(
x = "Voted Early",
color = "Income",
linetype = "Income"
) +
coord_cartesian(ylim = c(0, 1)) +
guides(fill = "none") +
theme_minimal()
```
FIGURE 7\.4: Interaction plot of early voting and income predicting the probability of voting for Biden
From Figure [7\.4](c07-modeling.html#fig:model-logisticexamp-biden-plot), we can see that people who have incomes in most groups (e.g., $40,000 to less than $60,000\) have roughly the same probability of voting for Biden regardless of whether they voted early or not. However, those with income in the $100,000 to less than $125,000 group were more likely to vote for Biden if they voted early than if they did not vote early.
Interactions in models can be difficult to understand from the coefficients alone. Using these interaction plots can help others understand the nuances of the results.
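This can also be seen numerically. As a rough sketch using the rounded coefficients from Table [7\.9](c07-modeling.html#tab:model-logisticexamp-biden-int-tab) (so the values are approximate and will differ slightly from the fitted model), the predicted probability of voting for Biden in the $100,000 to less than $125,000 income group is noticeably higher among early voters:

```
# $100k to < 125k income group, using rounded coefficients from Table 7.9
plogis(2.32 - 2.33)               # did not vote early: approx. 0.50
plogis(2.32 - 0.81 - 2.33 + 1.64) # voted early: approx. 0.69
```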
7\.5 Exercises
--------------
1. The type of housing unit may have an impact on energy expenses. Is there any relationship between housing unit type (`HousingUnitType`) and total energy expenditure (`TOTALDOL`)? First, find the average energy expenditure by housing unit type as a descriptive analysis and then do the test. The reference level in the comparison should be the housing unit type that is most common.
2. Does temperature play a role in electricity expenditure? Cooling degree days are a measure of how hot a place is. CDD65 for a given day indicates the number of degrees Fahrenheit warmer than 65°F (18\.3°C) it is in a location. On a day that averages 65°F and below, CDD65\=0, while a day that averages 85°F (29\.4°C) would have CDD65\=20 because it is 20 degrees Fahrenheit warmer ([U.S. Energy Information Administration 2023d](#ref-eia-cdd)). These daily values are summed across the year to indicate how hot a place is throughout the year. Similarly, HDD65 measures how much colder than 65°F a location is, summed across the days of the year. Can energy expenditure be predicted using these temperature indicators along with square footage? Is there a significant relationship? Include main effects and two\-way interactions.
3. Continuing with our results from Exercise 2, create a plot between the actual and predicted expenditures and a residual plot for the predicted expenditures.
4. Early voting expanded in 2020 ([Sprunt 2020](#ref-npr-voting-trend)). Build a logistic model predicting early voting in 2020 (`EarlyVote2020`) using age (`Age`), education (`Education`), and party identification (`PartyID`). Include two\-way interactions.
5. Continuing from Exercise 4, predict the probability of early voting for two people. Both are 28 years old and have a graduate degree; however, one person is a strong Democrat, and the other is a strong Republican.
7\.1 Introduction
-----------------
Modeling data is a way for researchers to investigate the relationship between a single dependent variable and one or more independent variables. This builds upon the analyses conducted in Chapter [6](c06-statistical-testing.html#c06-statistical-testing), which looked at the relationships between just two variables. For example, in Example 3 in Section [6\.3\.2](c06-statistical-testing.html#stattest-ttest-examples), we investigated if there is a relationship between the electrical bill cost and whether or not the household used air conditioning (A/C). However, there are potentially other elements that could go into what the cost of electrical bills are in a household (e.g., outside temperature, desired internal temperature, types and number of appliances, etc.).
T\-tests only allow us to investigate the relationship of one independent variable at a time, but using models, we can look into multiple variables and even explore interactions between these variables. There are several types of models, but in this chapter, we cover Analysis of Variance (ANOVA) and linear regression models following common normal (Gaussian) and logit models. Jonas Kristoffer Lindeløv has an interesting [discussion](https://lindeloev.github.io/tests-as-linear/) of many statistical tests and models being equivalent to a linear model. For example, a one\-way ANOVA is a linear model with one categorical independent variable, and a two\-sample t\-test is an ANOVA where the independent variable has exactly two levels.
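As a small illustration of this equivalence using non\-survey data (the built\-in `mtcars` dataset, so no weights or design are involved), a two\-sample t\-test with equal variances and a linear model with a single two\-level predictor give the same t\-statistic (up to sign) and p\-value:

```
# Two-sample t-test and the equivalent linear model on non-survey data
t.test(mpg ~ am, data = mtcars, var.equal = TRUE)
summary(lm(mpg ~ factor(am), data = mtcars))
```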
When modeling data, it is helpful to first create an equation that provides an overview of what we are modeling. The main structure of these models is as follows:
\\\[y\_i\=\\beta\_0 \+\\sum\_{i\=1}^p \\beta\_i x\_i \+ \\epsilon\_i\\]
where \\(y\_i\\) is the outcome, \\(\\beta\_0\\) is an intercept, \\(x\_1, \\cdots, x\_p\\) are the predictors with \\(\\beta\_1, \\cdots, \\beta\_p\\) as the associated coefficients, and \\(\\epsilon\_i\\) is the error. Not all models have all components. For example, some models may not include an intercept (\\(\\beta\_0\\)), may have interactions between different independent variables (\\(x\_i\\)), or may have different underlying structures for the dependent variable (\\(y\_i\\)). However, all linear models have the independent variables related to the dependent variable in a linear form.
To specify these models in R, the formulas are the same with both survey data and other data. The left side of the formula is the response/dependent variable, and the right side has the predictor/independent variable(s). There are many symbols used in R to specify the formula.
For example, a linear formula mathematically notated as
\\\[y\_i\=\\beta\_0\+\\beta\_1 x\_i\+\\epsilon\_i\\] would be specified in R as `y~x`, where the intercept is included by default and does not need to be written explicitly. To fit a model with no intercept, that is,
\\\[y\_i\=\\beta\_1 x\_i\+\\epsilon\_i\\]
it can be specified in R as `y~x-1`. Formula notation details in R can be found in the help file for formula[21](#fn21). A quick overview of the common formula notation is in Table [7\.1](c07-modeling.html#tab:notation-common):
TABLE 7\.1: Common symbols in formula notation
| Symbol | Example | Meaning |
| --- | --- | --- |
| \+ | `+x` | include this variable |
| \- | `-x` | delete this variable |
| : | `x:z` | include the interaction between these variables |
| \* | `x*z` | include these variables and the interactions between them |
| `^n` | `(x+y+z)^3` | include these variables and all interactions up to n\-way |
| I | `I(x-z)` | as\-is: include a new variable that is calculated inside the parentheses (e.g., x\-z, x\*z, x/z are possible calculations that could be done) |
There are often multiple ways to specify the same formula. For example, consider the following equation using the `mtcars` dataset that is built into R:
\\\[mpg\_i\=\\beta\_0\+\\beta\_1cyl\_{i}\+\\beta\_2disp\_{i}\+\\beta\_3hp\_{i}\+\\beta\_4cyl\_{i}disp\_{i}\+\\beta\_5cyl\_{i}hp\_{i}\+\\beta\_6disp\_{i}hp\_{i}\+\\epsilon\_i\\]
This could be specified in R code as any of the following:
* `mpg ~ (cyl + disp + hp)^2`
* `mpg ~ cyl + disp + hp + cyl:disp + cyl:hp + disp:hp`
* `mpg ~ cyl*disp + cyl*hp + disp*hp`
In the above options, the `:` and `*` notations behave differently: `:` includes only the interactions and not the main effects, while `*` includes the main effects and all possible interactions. Table [7\.2](c07-modeling.html#tab:notation-diffs) provides an overview of the syntax and the differences between the two notations, and a quick check of these equivalences is shown after the table.
TABLE 7\.2: Differences in formulas for `:` and `*` code syntax
| Symbol | Syntax | Formula |
| --- | --- | --- |
| : | `mpg ~ cyl:disp:hp` | \\\[ \\begin{aligned} mpg\_i \= \&\\beta\_0\+\\beta\_4cyl\_{i}disp\_{i}\+\\beta\_5cyl\_{i}hp\_{i}\+ \\\\\& \\beta\_6disp\_{i}hp\_{i}\+\\epsilon\_i\\end{aligned}\\] |
| \* | `mpg ~ cyl*disp*hp` | \\\[ \\begin{aligned} mpg\_i\= \&\\beta\_0\+\\beta\_1cyl\_{i}\+\\beta\_2disp\_{i}\+\\beta\_3hp\_{i}\+\\\\\& \\beta\_4cyl\_{i}disp\_{i}\+\\beta\_5cyl\_{i}hp\_{i}\+\\beta\_6disp\_{i}hp\_{i}\+\\\\\&\\beta\_7cyl\_{i}disp\_{i}hp\_{i}\+\\epsilon\_i\\end{aligned}\\] |
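A quick way to confirm that formula specifications expand to the same set of model terms (a convenience check, not something required for fitting) is to compare the term labels R generates for each formula:

```
f1 <- mpg ~ (cyl + disp + hp)^2
f2 <- mpg ~ cyl + disp + hp + cyl:disp + cyl:hp + disp:hp
f3 <- mpg ~ cyl * disp + cyl * hp + disp * hp

# All three return the same main effects and two-way interactions
lapply(list(f1, f2, f3), function(f) sort(attr(terms(f), "term.labels")))
```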
When using non\-survey data, such as experimental or observational data, researchers use the `glm()` function for linear models. With survey data, however, we use `svyglm()` from the {survey} package to ensure that we account for the survey design and weights in modeling[22](#fn22). This allows us to generalize a model to the population of interest and accounts for the fact that the observations in the survey data may not be independent. As discussed in Chapter [6](c06-statistical-testing.html#c06-statistical-testing), modeling survey data cannot be directly done in {srvyr}, but can be done in the {survey} package ([Lumley 2010](#ref-lumley2010complex)). In this chapter, we provide syntax and examples for linear models, including ANOVA, normal linear regression, and logistic regression. For details on other types of regression, including ordinal regression, log\-linear models, and survival analysis, refer to Lumley ([2010](#ref-lumley2010complex)). Lumley ([2010](#ref-lumley2010complex)) also discusses custom models such as a negative binomial or Poisson model in appendix E of his book.
7\.2 Analysis of variance
-------------------------
In ANOVA, we are testing whether the mean of an outcome is the same across two or more groups. Statistically, we set up this as follows:
* \\(H\_0: \\mu\_1 \= \\mu\_2\= \\dots \= \\mu\_k\\) where \\(\\mu\_i\\) is the mean outcome for group \\(i\\)
* \\(H\_A: \\text{At least one mean is different}\\)
An ANOVA test is also a linear model, so we can re\-frame the problem using this framework as:
\\\[ y\_i\=\\sum\_{i\=1}^k \\mu\_i x\_i \+ \\epsilon\_i\\]
where \\(x\_i\\) is a group indicator for groups \\(1, \\cdots, k\\).
Some assumptions when using ANOVA on survey data include:
* The outcome variable is normally distributed within each group.
* The variances of the outcome variable between each group are approximately equal.
* We do NOT assume independence between the groups as with ANOVA on non\-survey data. The covariance is accounted for in the survey design.
### 7\.2\.1 Syntax
To perform this type of analysis in R, the general syntax is as follows:
```
des_obj %>%
svyglm(
formula = outcome ~ group,
design = .,
na.action = na.omit,
df.resid = NULL
)
```
The arguments are:
* `formula`: formula in the form of `outcome~group`. The group variable must be a factor or character.
* `design`: a `tbl_svy` object created by `as_survey`
* `na.action`: handling of missing data
* `df.resid`: degrees of freedom for Wald tests (optional); defaults to using `degf(design)-(g-1)` where \\(g\\) is the number of groups
The function `svyglm()` does not have the design as the first argument so the dot (`.`) notation is used to pass it with a pipe (see Chapter [6](c06-statistical-testing.html#c06-statistical-testing) for more details). The default for missing data is `na.omit`. This means that we are removing all records with any missing data in either predictors or outcomes from analyses. There are other options for handling missing data, and we recommend looking at the help documentation for `na.omit` (run `help(na.omit)` or `?na.omit`) for more information on options to use for `na.action`. For a discussion on how to handle missing data, see Chapter [11](c11-missing-data.html#c11-missing-data).
### 7\.2\.2 Example
Looking at an example helps us discuss the output and how to interpret the results. In RECS, respondents are asked what temperature they set their thermostat to during the evening when using A/C during the summer[23](#fn23). To analyze these data, we filter the respondents to only those using A/C (`ACUsed`)[24](#fn24). Then, if we want to see if there are regional differences, we can use `group_by()`. A descriptive analysis of the temperature at night (`SummerTempNight`) set by region and the sample sizes is displayed below.
```
recs_des %>%
filter(ACUsed) %>%
group_by(Region) %>%
summarize(
SMN = survey_mean(SummerTempNight, na.rm = TRUE),
n = unweighted(n()),
n_na = unweighted(sum(is.na(SummerTempNight)))
)
```
```
## # A tibble: 4 × 5
## Region SMN SMN_se n n_na
## <fct> <dbl> <dbl> <int> <int>
## 1 Northeast 69.7 0.103 3204 0
## 2 Midwest 71.0 0.0897 3619 0
## 3 South 71.8 0.0536 6065 0
## 4 West 72.5 0.129 3283 0
```
In the following code, we test whether this temperature varies by region by first using `svyglm()` to run the test and then using `broom::tidy()` to display the output. Note that the temperature setting is set to NA when the household does not use A/C, and since the default handling of NAs is `na.action=na.omit`, records that do not use A/C are not included in this regression.
```
anova_out <- recs_des %>%
svyglm(
design = .,
formula = SummerTempNight ~ Region
)
tidy(anova_out)
```
```
## # A tibble: 4 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 69.7 0.103 674. 3.69e-111
## 2 RegionMidwest 1.34 0.138 9.68 1.46e- 13
## 3 RegionSouth 2.05 0.128 16.0 1.36e- 22
## 4 RegionWest 2.80 0.177 15.9 2.27e- 22
```
In the output above, we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. In this output, the intercept represents the reference value of the Northeast region. The other coefficients indicate the difference in temperature relative to the Northeast region. For example, in the Midwest, temperatures are set, on average, 1\.34 (p\-value is \<0\.0001\) degrees higher than in the Northeast during summer nights, and each region sets its thermostats at significantly higher temperatures than the Northeast.
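As a quick sketch with the fitted object above, the regional means implied by the model reproduce the descriptive means calculated earlier: the intercept is the Northeast mean, and adding each coefficient gives the other regions (roughly 69\.7, 71\.0, 71\.8, and 72\.5 degrees).

```
b <- coef(anova_out)
# Northeast mean (intercept), then the implied Midwest, South, and West means
unname(b["(Intercept)"]) + c(Northeast = 0, b[-1])
```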
If we wanted to change the reference value, we would reorder the factor before modeling using the `relevel()` function from {stats} or using one of many factor ordering functions in {forcats} such as `fct_relevel()` or `fct_infreq()`. For example, if we wanted the reference level to be the Midwest region, we could use the following code with the results in Table [7\.3](c07-modeling.html#tab:model-anova-ex-tab). Note the usage of the `gt()` function on top of `tidy()` to print a nice\-looking output table ([Iannone et al. 2024](#ref-R-gt); [Robinson, Hayes, and Couch 2023](#ref-R-broom)) (see Chapter [8](c08-communicating-results.html#c08-communicating-results) for more information on the {gt} package).
```
anova_out_relevel <- recs_des %>%
mutate(Region = fct_relevel(Region, "Midwest", after = 0)) %>%
svyglm(
design = .,
formula = SummerTempNight ~ Region
)
```
```
tidy(anova_out_relevel) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.3: ANOVA output for estimates of thermostat temperature setting at night by region with Midwest as the reference region, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 71\.04 | 0\.09 | 791\.83 | \<0\.0001 |
| RegionNortheast | −1\.34 | 0\.14 | −9\.68 | \<0\.0001 |
| RegionSouth | 0\.71 | 0\.10 | 6\.91 | \<0\.0001 |
| RegionWest | 1\.47 | 0\.16 | 9\.17 | \<0\.0001 |
This output now has the coefficients indicating the difference in temperature relative to the Midwest region. For example, in the Northeast, temperatures are set, on average, 1\.34 (p\-value is \<0\.0001\) degrees lower than in the Midwest during summer nights, and each region sets its thermostats at significantly lower temperatures than the Midwest. This is the reverse of what we saw in the prior model, as we are still comparing the same two regions, just from different reference points.
7\.3 Normal linear regression
-----------------------------
Normal linear regression is a more generalized method than ANOVA, where we fit a model of a continuous outcome with any number of categorical or continuous predictors (whereas ANOVA only has categorical predictors) and is similarly specified as:
\\\[\\begin{equation}
y\_i\=\\beta\_0 \+\\sum\_{i\=1}^p \\beta\_i x\_i \+ \\epsilon\_i
\\end{equation}\\]
where \\(y\_i\\) is the outcome, \\(\\beta\_0\\) is an intercept, \\(x\_1, \\cdots, x\_p\\) are the predictors with \\(\\beta\_1, \\cdots, \\beta\_p\\) as the associated coefficients, and \\(\\epsilon\_i\\) is the error.
Assumptions in normal linear regression using survey data include:
* The residuals (\\(\\epsilon\_i\\)) are normally distributed, but there is not an assumption of independence, and the correlation structure is captured in the survey design object
* There is a linear relationship between the outcome variable and the independent variables
* The residuals are homoscedastic; that is, the variance of the error term is the same across all values of the independent variables
### 7\.3\.1 Syntax
The syntax for this regression uses the same function as ANOVA but can have more than one variable listed on the right\-hand side of the formula:
```
des_obj %>%
svyglm(
formula = outcomevar ~ x1 + x2 + x3,
design = .,
na.action = na.omit,
df.resid = NULL
)
```
The arguments are:
* `formula`: formula in the form of `y~x`
* `design`: a `tbl_svy` object created by `as_survey`
* `na.action`: handling of missing data
* `df.resid`: degrees of freedom for Wald tests (optional); defaults to using `degf(design)-p` where \\(p\\) is the rank of the design matrix
As discussed in Section [7\.1](c07-modeling.html#model-intro), the formula on the right\-hand side can be specified in many ways, for example, denoting whether or not interactions are desired.
### 7\.3\.2 Examples
#### Example 1: Linear regression with a single variable
On RECS, we can obtain information on the square footage of homes[25](#fn25) and the electric bills. We assume that square footage is related to the amount of money spent on electricity and examine a model for this. Before any modeling, we first plot the data to determine whether it is reasonable to assume a linear relationship. In Figure [7\.1](c07-modeling.html#fig:model-plot-sf-elbill), each hexagon represents the weighted count of households in the bin, and we can see a general positive linear trend (as the square footage increases, so does the amount of money spent on electricity).
```
recs_2020 %>%
ggplot(aes(
x = TOTSQFT_EN,
y = DOLLAREL,
weight = NWEIGHT / 1000000
)) +
geom_hex() +
scale_fill_gradientn(
guide = "colorbar",
name = "Housing Units\n(Millions)",
labels = scales::comma,
colors = book_colors[c(3, 2, 1)]
) +
xlab("Total square footage") +
ylab("Amount spent on electricity") +
scale_y_continuous(labels = scales::dollar_format()) +
scale_x_continuous(labels = scales::comma_format()) +
theme_minimal()
```
FIGURE 7\.1: Relationship between square footage and dollars spent on electricity, RECS 2020
Given that the plot shows a potentially increasing relationship between square footage and electricity expenditure, fitting a model allows us to determine if the relationship is statistically significant. The model is fit below with electricity expenditure as the outcome.
```
m_electric_sqft <- recs_des %>%
svyglm(
design = .,
formula = DOLLAREL ~ TOTSQFT_EN,
na.action = na.omit
)
```
```
tidy(m_electric_sqft) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.4: Linear regression output predicting electricity expenditure given square footage, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 836\.72 | 12\.77 | 65\.51 | \<0\.0001 |
| TOTSQFT\_EN | 0\.30 | 0\.01 | 41\.67 | \<0\.0001 |
In Table [7\.4](c07-modeling.html#tab:model-slr-examp-tab), we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. In these results, we can say that, on average, for every additional square foot of house size, the electricity bill increases by 30 cents, and that square footage is significantly associated with electricity expenditure (p\-value is \<0\.0001\).
This is a straightforward model, and there are likely many more factors related to electricity expenditure, including the type of cooling, number of appliances, location, and more. However, starting with one\-variable models can help analysts understand what potential relationships there are between variables before fitting more complex models. Often, we start with known relationships before building models to determine what impact additional variables have on the model.
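As a rough illustration of what this slope means in practice (a sketch using the fitted coefficients; the 2,000 square feet below is an arbitrary illustrative value), the predicted annual electricity bill for a 2,000 square foot home is roughly $836\.72 plus $0\.30 for each of the 2,000 square feet, or about $1,437\.

```
b <- coef(m_electric_sqft)
# Predicted annual electricity expenditure for a 2,000 square foot home
unname(b["(Intercept)"] + b["TOTSQFT_EN"] * 2000)
```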
#### Example 2: Linear regression with multiple variables and interactions
In the following example, a model is fit to predict electricity expenditure, including census region (factor/categorical), urbanicity (factor/categorical), square footage (double/numeric), and whether A/C is used (logical/categorical) with all two\-way interactions also included. In this example, we are choosing to fit this model without an intercept (using `-1` in the formula). This results in an intercept estimate for each region instead of a single intercept for all data.
```
m_electric_multi <- recs_des %>%
svyglm(
design = .,
formula =
DOLLAREL ~ (Region + Urbanicity + TOTSQFT_EN + ACUsed)^2 - 1,
na.action = na.omit
)
```
```
tidy(m_electric_multi) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.5: Linear regression output predicting electricity expenditure given region, urbanicity, square footage, A/C usage, and two\-way interactions, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| RegionNortheast | 543\.73 | 56\.57 | 9\.61 | \<0\.0001 |
| RegionMidwest | 702\.16 | 78\.12 | 8\.99 | \<0\.0001 |
| RegionSouth | 938\.74 | 46\.99 | 19\.98 | \<0\.0001 |
| RegionWest | 603\.27 | 36\.31 | 16\.61 | \<0\.0001 |
| UrbanicityUrban Cluster | 73\.03 | 81\.50 | 0\.90 | 0\.3764 |
| UrbanicityRural | 204\.13 | 80\.69 | 2\.53 | 0\.0161 |
| TOTSQFT\_EN | 0\.24 | 0\.03 | 8\.65 | \<0\.0001 |
| ACUsedTRUE | 252\.06 | 54\.05 | 4\.66 | \<0\.0001 |
| RegionMidwest:UrbanicityUrban Cluster | 183\.06 | 82\.38 | 2\.22 | 0\.0328 |
| RegionSouth:UrbanicityUrban Cluster | 152\.56 | 76\.03 | 2\.01 | 0\.0526 |
| RegionWest:UrbanicityUrban Cluster | 98\.02 | 75\.16 | 1\.30 | 0\.2007 |
| RegionMidwest:UrbanicityRural | 312\.83 | 50\.88 | 6\.15 | \<0\.0001 |
| RegionSouth:UrbanicityRural | 220\.00 | 55\.00 | 4\.00 | 0\.0003 |
| RegionWest:UrbanicityRural | 180\.97 | 58\.70 | 3\.08 | 0\.0040 |
| RegionMidwest:TOTSQFT\_EN | −0\.05 | 0\.02 | −2\.09 | 0\.0441 |
| RegionSouth:TOTSQFT\_EN | 0\.00 | 0\.03 | 0\.11 | 0\.9109 |
| RegionWest:TOTSQFT\_EN | −0\.03 | 0\.03 | −1\.00 | 0\.3254 |
| RegionMidwest:ACUsedTRUE | −292\.97 | 60\.24 | −4\.86 | \<0\.0001 |
| RegionSouth:ACUsedTRUE | −294\.07 | 57\.44 | −5\.12 | \<0\.0001 |
| RegionWest:ACUsedTRUE | −77\.68 | 47\.05 | −1\.65 | 0\.1076 |
| UrbanicityUrban Cluster:TOTSQFT\_EN | −0\.04 | 0\.02 | −1\.63 | 0\.1112 |
| UrbanicityRural:TOTSQFT\_EN | −0\.06 | 0\.02 | −2\.60 | 0\.0137 |
| UrbanicityUrban Cluster:ACUsedTRUE | −130\.23 | 60\.30 | −2\.16 | 0\.0377 |
| UrbanicityRural:ACUsedTRUE | −33\.80 | 59\.30 | −0\.57 | 0\.5724 |
| TOTSQFT\_EN:ACUsedTRUE | 0\.08 | 0\.02 | 3\.48 | 0\.0014 |
As shown in Table [7\.5](c07-modeling.html#tab:model-lmr-examp-tab), there are many terms in this model. To test whether coefficients for a term are different from zero, the `regTermTest()` function can be used. For example, in the above regression, we can test whether the interaction of region and urbanicity is significant as follows:
```
urb_reg_test <- regTermTest(m_electric_multi, ~ Urbanicity:Region)
urb_reg_test
```
```
## Wald test for Urbanicity:Region
## in svyglm(design = ., formula = DOLLAREL ~ (Region + Urbanicity +
## TOTSQFT_EN + ACUsed)^2 - 1, na.action = na.omit)
## F = 6.851 on 6 and 35 df: p= 7.2e-05
```
This output indicates there is a significant interaction between urbanicity and region (p\-value is \<0\.0001\).
To examine the predictions, residuals, and more from the model, the `augment()` function from {broom} can be used. The `augment()` function returns a tibble with the independent and dependent variables and other fit statistics. Because `augment()` has not been specifically written for objects of class `svyglm`, it currently displays a warning to that effect, and a little tweaking needs to be done after calling it. To obtain the standard error of the predicted values (`.se.fit`), we need to use the `attr()` function on the predicted values (`.fitted`) created by `augment()`. Additionally, the predicted values are returned with class `svrep`, so if we want to plot them, we need `as.numeric()` to convert them to a numeric format. It is important to note that this conversion must be done after extracting the standard errors.
```
fitstats <-
augment(m_electric_multi) %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
)
fitstats
```
```
## # A tibble: 18,496 × 13
## DOLLAREL Region Urbanicity TOTSQFT_EN ACUsed `(weights)` .fitted
## <dbl> <fct> <fct> <dbl> <lgl> <dbl> <dbl>
## 1 1955. West Urban Area 2100 TRUE 0.492 1397.
## 2 713. South Urban Area 590 TRUE 1.35 1090.
## 3 335. West Urban Area 900 TRUE 0.849 1043.
## 4 1425. South Urban Area 2100 TRUE 0.793 1584.
## 5 1087 Northeast Urban Area 800 TRUE 1.49 1055.
## 6 1896. South Urban Area 4520 TRUE 1.09 2375.
## 7 1418. South Urban Area 2100 TRUE 0.851 1584.
## 8 1237. South Urban Clust… 900 FALSE 1.45 1349.
## 9 538. South Urban Area 750 TRUE 0.185 1142.
## 10 625. West Urban Area 760 TRUE 1.06 1002.
## # ℹ 18,486 more rows
## # ℹ 6 more variables: .resid <dbl>, .hat <dbl>, .sigma <dbl>,
## # .cooksd <dbl>, .std.resid <dbl>, .se.fit <dbl>
```
These results can then be used in a variety of ways, including examining residual plots as illustrated in the code below and Figure [7\.2](c07-modeling.html#fig:model-aug-examp-plot). In the residual plot, we look for any patterns in the data. If we do see patterns, this may indicate a violation of the homoscedasticity assumption, and the standard errors of the coefficients may be incorrect. In Figure [7\.2](c07-modeling.html#fig:model-aug-examp-plot), we do not see a strong pattern, indicating that our assumption of homoscedasticity may hold.
```
fitstats %>%
ggplot(aes(x = .fitted, .resid)) +
geom_point(alpha = .1) +
geom_hline(yintercept = 0, color = "red") +
theme_minimal() +
xlab("Fitted value of electricity cost") +
ylab("Residual of model") +
scale_y_continuous(labels = scales::dollar_format()) +
scale_x_continuous(labels = scales::dollar_format())
```
FIGURE 7\.2: Residual plot of electric cost model with the following covariates: Region, Urbanicity, TOTSQFT\_EN, and ACUsed
Additionally, `augment()` can be used to predict outcomes for data not used in modeling. Perhaps we would like to predict the energy expenditure for a home in an urban area in the south that uses A/C and is 2,500 square feet. To do this, we first make a tibble including that additional data and then use the `newdata` argument in the `augment()` function. As before, to obtain the standard error of the predicted values, we need to use the `attr()` function.
```
add_data <- recs_2020 %>%
select(
DOEID, Region, Urbanicity,
TOTSQFT_EN, ACUsed,
DOLLAREL
) %>%
rbind(
tibble(
DOEID = NA,
Region = "South",
Urbanicity = "Urban Area",
TOTSQFT_EN = 2500,
ACUsed = TRUE,
DOLLAREL = NA
)
) %>%
tail(1)
pred_data <- augment(m_electric_multi, newdata = add_data) %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
)
pred_data
```
```
## # A tibble: 1 × 8
## DOEID Region Urbanicity TOTSQFT_EN ACUsed DOLLAREL .fitted .se.fit
## <dbl> <fct> <fct> <dbl> <lgl> <dbl> <dbl> <dbl>
## 1 NA South Urban Area 2500 TRUE NA 1715. 22.6
```
In the above example, it is predicted that the energy expenditure would be $1,715\.
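To attach a rough measure of uncertainty to this prediction, one option (a sketch only; it assumes a symmetric interval based on a t quantile with the design degrees of freedom and is not a formal prediction interval) is to combine `.fitted` and `.se.fit` using `degf()` from {survey}:

```
pred_data %>%
  mutate(
    ci_lower = .fitted - qt(0.975, df = degf(recs_des)) * .se.fit,
    ci_upper = .fitted + qt(0.975, df = degf(recs_des)) * .se.fit
  ) %>%
  select(.fitted, .se.fit, ci_lower, ci_upper)
```

With a standard error of about $23, this works out to an interval of roughly $1,670 to $1,760.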
#### Example 1: Linear regression with a single variable
On RECS, we can obtain information on the square footage of homes[25](#fn25) and the electric bills. We assume that square footage is related to the amount of money spent on electricity and examine a model for this. Before any modeling, we first plot the data to determine whether it is reasonable to assume a linear relationship. In Figure [7\.1](c07-modeling.html#fig:model-plot-sf-elbill), each hexagon represents the weighted count of households in the bin, and we can see a general positive linear trend (as the square footage increases, so does the amount of money spent on electricity).
```
recs_2020 %>%
ggplot(aes(
x = TOTSQFT_EN,
y = DOLLAREL,
weight = NWEIGHT / 1000000
)) +
geom_hex() +
scale_fill_gradientn(
guide = "colorbar",
name = "Housing Units\n(Millions)",
labels = scales::comma,
colors = book_colors[c(3, 2, 1)]
) +
xlab("Total square footage") +
ylab("Amount spent on electricity") +
scale_y_continuous(labels = scales::dollar_format()) +
scale_x_continuous(labels = scales::comma_format()) +
theme_minimal()
```
FIGURE 7\.1: Relationship between square footage and dollars spent on electricity, RECS 2020
Given that the plot shows a potentially increasing relationship between square footage and electricity expenditure, fitting a model allows us to determine if the relationship is statistically significant. The model is fit below with electricity expenditure as the outcome.
```
m_electric_sqft <- recs_des %>%
svyglm(
design = .,
formula = DOLLAREL ~ TOTSQFT_EN,
na.action = na.omit
)
```
```
tidy(m_electric_sqft) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.4: Linear regression output predicting electricity expenditure given square footage, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 836\.72 | 12\.77 | 65\.51 | \<0\.0001 |
| TOTSQFT\_EN | 0\.30 | 0\.01 | 41\.67 | \<0\.0001 |
In Table [7\.4](c07-modeling.html#tab:model-slr-examp-tab), we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. In these results, we can say that, on average, for every additional square foot of house size, the electricity bill increases by 30 cents, and that square footage is significantly associated with electricity expenditure (p\-value is \<0\.0001\).
This is a straightforward model, and there are likely many more factors related to electricity expenditure, including the type of cooling, number of appliances, location, and more. However, starting with one\-variable models can help analysts understand what potential relationships there are between variables before fitting more complex models. Often, we start with known relationships before building models to determine what impact additional variables have on the model.
#### Example 2: Linear regression with multiple variables and interactions
In the following example, a model is fit to predict electricity expenditure, including census region (factor/categorical), urbanicity (factor/categorical), square footage (double/numeric), and whether A/C is used (logical/categorical) with all two\-way interactions also included. In this example, we are choosing to fit this model without an intercept (using `-1` in the formula). This results in an intercept estimate for each region instead of a single intercept for all data.
```
m_electric_multi <- recs_des %>%
svyglm(
design = .,
formula =
DOLLAREL ~ (Region + Urbanicity + TOTSQFT_EN + ACUsed)^2 - 1,
na.action = na.omit
)
```
```
tidy(m_electric_multi) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.5: Linear regression output predicting electricity expenditure given region, urbanicity, square footage, A/C usage, and two\-way interactions, RECS 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| RegionNortheast | 543\.73 | 56\.57 | 9\.61 | \<0\.0001 |
| RegionMidwest | 702\.16 | 78\.12 | 8\.99 | \<0\.0001 |
| RegionSouth | 938\.74 | 46\.99 | 19\.98 | \<0\.0001 |
| RegionWest | 603\.27 | 36\.31 | 16\.61 | \<0\.0001 |
| UrbanicityUrban Cluster | 73\.03 | 81\.50 | 0\.90 | 0\.3764 |
| UrbanicityRural | 204\.13 | 80\.69 | 2\.53 | 0\.0161 |
| TOTSQFT\_EN | 0\.24 | 0\.03 | 8\.65 | \<0\.0001 |
| ACUsedTRUE | 252\.06 | 54\.05 | 4\.66 | \<0\.0001 |
| RegionMidwest:UrbanicityUrban Cluster | 183\.06 | 82\.38 | 2\.22 | 0\.0328 |
| RegionSouth:UrbanicityUrban Cluster | 152\.56 | 76\.03 | 2\.01 | 0\.0526 |
| RegionWest:UrbanicityUrban Cluster | 98\.02 | 75\.16 | 1\.30 | 0\.2007 |
| RegionMidwest:UrbanicityRural | 312\.83 | 50\.88 | 6\.15 | \<0\.0001 |
| RegionSouth:UrbanicityRural | 220\.00 | 55\.00 | 4\.00 | 0\.0003 |
| RegionWest:UrbanicityRural | 180\.97 | 58\.70 | 3\.08 | 0\.0040 |
| RegionMidwest:TOTSQFT\_EN | −0\.05 | 0\.02 | −2\.09 | 0\.0441 |
| RegionSouth:TOTSQFT\_EN | 0\.00 | 0\.03 | 0\.11 | 0\.9109 |
| RegionWest:TOTSQFT\_EN | −0\.03 | 0\.03 | −1\.00 | 0\.3254 |
| RegionMidwest:ACUsedTRUE | −292\.97 | 60\.24 | −4\.86 | \<0\.0001 |
| RegionSouth:ACUsedTRUE | −294\.07 | 57\.44 | −5\.12 | \<0\.0001 |
| RegionWest:ACUsedTRUE | −77\.68 | 47\.05 | −1\.65 | 0\.1076 |
| UrbanicityUrban Cluster:TOTSQFT\_EN | −0\.04 | 0\.02 | −1\.63 | 0\.1112 |
| UrbanicityRural:TOTSQFT\_EN | −0\.06 | 0\.02 | −2\.60 | 0\.0137 |
| UrbanicityUrban Cluster:ACUsedTRUE | −130\.23 | 60\.30 | −2\.16 | 0\.0377 |
| UrbanicityRural:ACUsedTRUE | −33\.80 | 59\.30 | −0\.57 | 0\.5724 |
| TOTSQFT\_EN:ACUsedTRUE | 0\.08 | 0\.02 | 3\.48 | 0\.0014 |
As shown in Table [7\.5](c07-modeling.html#tab:model-lmr-examp-tab), there are many terms in this model. To test whether coefficients for a term are different from zero, the `regTermTest()` function can be used. For example, in the above regression, we can test whether the interaction of region and urbanicity is significant as follows:
```
urb_reg_test <- regTermTest(m_electric_multi, ~ Urbanicity:Region)
urb_reg_test
```
```
## Wald test for Urbanicity:Region
## in svyglm(design = ., formula = DOLLAREL ~ (Region + Urbanicity +
## TOTSQFT_EN + ACUsed)^2 - 1, na.action = na.omit)
## F = 6.851 on 6 and 35 df: p= 7.2e-05
```
This output indicates there is a significant interaction between urbanicity and region (p\-value is \<0\.0001\).
To examine the predictions, residuals, and more from the model, the `augment()` function from {broom} can be used. The `augment()` function returns a tibble with the independent and dependent variables and other fit statistics. However, `augment()` has not been specifically written for objects of class `svyglm`, so it currently displays a warning to that effect, and its output needs a little tweaking. To obtain the standard error of the predicted values (`.se.fit`), we need to use the `attr()` function on the predicted values (`.fitted`) created by `augment()`. Additionally, the predicted values are output with a type of `svrep`, so if we want to plot them, we need to convert them to a numeric format with `as.numeric()`. It is important that this conversion is done after extracting the standard errors, as the conversion drops the attribute that holds them.
```
fitstats <-
augment(m_electric_multi) %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
)
fitstats
```
```
## # A tibble: 18,496 × 13
## DOLLAREL Region Urbanicity TOTSQFT_EN ACUsed `(weights)` .fitted
## <dbl> <fct> <fct> <dbl> <lgl> <dbl> <dbl>
## 1 1955. West Urban Area 2100 TRUE 0.492 1397.
## 2 713. South Urban Area 590 TRUE 1.35 1090.
## 3 335. West Urban Area 900 TRUE 0.849 1043.
## 4 1425. South Urban Area 2100 TRUE 0.793 1584.
## 5 1087 Northeast Urban Area 800 TRUE 1.49 1055.
## 6 1896. South Urban Area 4520 TRUE 1.09 2375.
## 7 1418. South Urban Area 2100 TRUE 0.851 1584.
## 8 1237. South Urban Clust… 900 FALSE 1.45 1349.
## 9 538. South Urban Area 750 TRUE 0.185 1142.
## 10 625. West Urban Area 760 TRUE 1.06 1002.
## # ℹ 18,486 more rows
## # ℹ 6 more variables: .resid <dbl>, .hat <dbl>, .sigma <dbl>,
## # .cooksd <dbl>, .std.resid <dbl>, .se.fit <dbl>
```
These results can then be used in a variety of ways, including examining residual plots, as illustrated in the code below and Figure [7\.2](c07-modeling.html#fig:model-aug-examp-plot). In the residual plot, we look for any patterns in the data. If we do see patterns, this may indicate a violation of the homoscedasticity assumption, and the standard errors of the coefficients may be incorrect. In Figure [7\.2](c07-modeling.html#fig:model-aug-examp-plot), we do not see a strong pattern, indicating that our assumption of homoscedasticity may hold.
```
fitstats %>%
ggplot(aes(x = .fitted, .resid)) +
geom_point(alpha = .1) +
geom_hline(yintercept = 0, color = "red") +
theme_minimal() +
xlab("Fitted value of electricity cost") +
ylab("Residual of model") +
scale_y_continuous(labels = scales::dollar_format()) +
scale_x_continuous(labels = scales::dollar_format())
```
FIGURE 7\.2: Residual plot of electric cost model with the following covariates: Region, Urbanicity, TOTSQFT\_EN, and ACUsed
Additionally, `augment()` can be used to predict outcomes for data not used in modeling. Perhaps we would like to predict the energy expenditure for a home in an urban area in the south that uses A/C and is 2,500 square feet. To do this, we first make a tibble including that additional data and then use the `newdata` argument in the `augment()` function. As before, to obtain the standard error of the predicted values, we need to use the `attr()` function.
```
add_data <- recs_2020 %>%
select(
DOEID, Region, Urbanicity,
TOTSQFT_EN, ACUsed,
DOLLAREL
) %>%
rbind(
tibble(
DOEID = NA,
Region = "South",
Urbanicity = "Urban Area",
TOTSQFT_EN = 2500,
ACUsed = TRUE,
DOLLAREL = NA
)
) %>%
tail(1)
pred_data <- augment(m_electric_multi, newdata = add_data) %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
)
pred_data
```
```
## # A tibble: 1 × 8
## DOEID Region Urbanicity TOTSQFT_EN ACUsed DOLLAREL .fitted .se.fit
## <dbl> <fct> <fct> <dbl> <lgl> <dbl> <dbl> <dbl>
## 1 NA South Urban Area 2500 TRUE NA 1715. 22.6
```
In the above example, it is predicted that the energy expenditure would be $1,715\.
7\.4 Logistic regression
------------------------
Logistic regression is used to model binary outcomes, such as whether or not someone voted. There are several instances where an outcome is not originally binary but is collapsed into a binary variable. For example, given that gender is often asked in surveys with multiple response options rather than a binary scale, many researchers now code gender in logistic modeling as “cis\-male” versus not “cis\-male.” We could also take a 4\-point Likert scale with levels of “Strongly Agree,” “Agree,” “Disagree,” and “Strongly Disagree” and group the agreement levels into one category and the disagreement levels into a second category.
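As an illustration of this type of recoding, the sketch below collapses a hypothetical 4\-point agreement item into two levels with `forcats::fct_collapse()`; the data and variable names are invented for demonstration and are not from the surveys used in this book.
```
library(tidyverse)

# Hypothetical 4-point Likert responses (invented for illustration)
likert_dat <- tibble(
  approval = factor(
    c("Strongly Agree", "Agree", "Disagree", "Strongly Disagree", "Agree"),
    levels = c("Strongly Agree", "Agree", "Disagree", "Strongly Disagree")
  )
)

# Collapse the four response options into a two-level factor
likert_dat %>%
  mutate(
    approval_binary = fct_collapse(
      approval,
      Agree = c("Strongly Agree", "Agree"),
      Disagree = c("Disagree", "Strongly Disagree")
    )
  )
```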
Logistic regression is a specific case of the generalized linear model (GLM). A GLM uses a link function to connect the response variable to the linear model. If we tried to use a normal linear regression with a binary outcome, many assumptions would not hold; most obviously, the response would not be continuous. Logistic regression instead links a linear model of the covariates to the propensity of an outcome. In logistic regression, the link function is the logit function. Specifically, the model is specified as follows:
\\\[ y\_i \\sim \\text{Bernoulli}(\\pi\_i)\\]
\\\[\\begin{equation}
\\log \\left(\\frac{\\pi\_i}{1\-\\pi\_i} \\right)\=\\beta\_0 \+\\sum\_{j\=1}^p \\beta\_j x\_{ij}
\\end{equation}\\]
which can be re\-expressed as
\\[ \\pi\_i\=\\frac{\\exp \\left(\\beta\_0 \+\\sum\_{j\=1}^p \\beta\_j x\_{ij} \\right)}{1\+\\exp \\left(\\beta\_0 \+\\sum\_{j\=1}^p \\beta\_j x\_{ij} \\right)}\\] where \\(y\_i\\) is the outcome for unit \\(i\\), \\(\\beta\_0\\) is an intercept, and \\(x\_{i1}, \\cdots, x\_{ip}\\) are the predictors with \\(\\beta\_1, \\cdots, \\beta\_p\\) as the associated coefficients.
The Bernoulli distribution is a distribution with an outcome of 0 or 1 given some success probability (\\(\\pi\_i\\) in this case), and we model \\(\\pi\_i\\) as a function of the covariates using this logit link.
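To make the link function concrete, the short sketch below shows that the base R `plogis()` function is the inverse logit from the equation above, converting a log odds value back into a probability; the value 0\.5 is an arbitrary example.
```
# Inverse logit computed by hand versus the built-in plogis()
log_odds <- 0.5 # arbitrary example value

exp(log_odds) / (1 + exp(log_odds))
#> [1] 0.6224593

plogis(log_odds)
#> [1] 0.6224593

# qlogis() is the logit itself: log(p / (1 - p))
qlogis(0.6224593)
#> [1] 0.5
```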
Assumptions in logistic regression using survey data include:
* The outcome variable has two levels
* There is a linear relationship between the independent variables and the log odds (the equation for the logit function)
* The residuals are homoscedastic; that is, the error variance is constant across all values of the independent variables
### 7\.4\.1 Syntax
The syntax for logistic regression is as follows:
```
des_obj %>%
svyglm(
formula = outcomevar ~ x1 + x2 + x3,
design = .,
na.action = na.omit,
df.resid = NULL,
family = quasibinomial
)
```
The arguments are:
* `formula`: Formula in the form of `y~x`
* `design`: a `tbl_svy` object created by `as_survey`
* `na.action`: handling of missing data
* `df.resid`: degrees of freedom for Wald tests (optional); defaults to using `degf(design)-p` where \\(p\\) is the rank of the design matrix
* `family`: the error distribution/link function to be used in the model
Note that `svyglm()` is the same function used in both ANOVA and normal linear regression. However, we’ve now added the quasibinomial family. While we can use the binomial family, it is recommended to use the quasibinomial, as our weights may not be integers and the quasibinomial also allows for overdispersion ([Lumley 2010](#ref-lumley2010complex); [McCullagh and Nelder 1989](#ref-mccullagh1989binary); [R Core Team 2024](#ref-R-base)). The quasibinomial family has a default logit link, which is the link specified in the equations above. When specifying the outcome variable, it is likely specified in one of three ways with survey data (a brief sketch follows this list):
* A two\-level factor variable where the first level of the factor indicates a “failure,” and the second level indicates a “success”
* A numeric variable which is 1 or 0 where 1 indicates a success
* A logical variable where TRUE indicates a success
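The sketch below shows how these three codings might look for a hypothetical yes/no item; all three are treated as success/failure outcomes by `svyglm()`. The data and variable names are invented for illustration.
```
library(tidyverse)

# Hypothetical responses to a yes/no item (invented for illustration)
resp <- tibble(answer = c("Yes", "No", "Yes", "Yes", "No"))

resp %>%
  mutate(
    # (1) Two-level factor: first level = "failure", second level = "success"
    outcome_fct = factor(answer, levels = c("No", "Yes")),
    # (2) Numeric variable that is 1 or 0, where 1 indicates a success
    outcome_num = if_else(answer == "Yes", 1, 0),
    # (3) Logical variable where TRUE indicates a success
    outcome_lgl = answer == "Yes"
  )
```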
### 7\.4\.2 Examples
#### Example 1: Logistic regression with single variable
In the following example, we use the ANES data to model whether someone usually has trust in the government[26](#fn26) based on whom they voted for president in 2020\. As a reminder, the leading candidates were Biden and Trump, though people could vote for someone else not in the Democratic or Republican parties; those votes are all grouped into an “Other” category. We first create a binary outcome for trusting in the government by collapsing “Always” and “Most of the time” into a single factor level, and the other response options (“About half the time,” “Some of the time,” and “Never”) into a second factor level. A scatter plot of the raw data is not useful, as the outcomes are all 0 or 1, so instead we plot a summary of the data.
```
anes_des_der <- anes_des %>%
mutate(TrustGovernmentUsually = case_when(
is.na(TrustGovernment) ~ NA,
TRUE ~ TrustGovernment %in% c("Always", "Most of the time")
))
anes_des_der %>%
group_by(VotedPres2020_selection) %>%
summarize(
pct_trust = survey_mean(TrustGovernmentUsually,
na.rm = TRUE,
proportion = TRUE,
vartype = "ci"
),
.groups = "drop"
) %>%
filter(complete.cases(.)) %>%
ggplot(aes(
x = VotedPres2020_selection, y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity") +
geom_errorbar(aes(ymin = pct_trust_low, ymax = pct_trust_upp),
width = .2
) +
scale_fill_manual(values = c("#0b3954", "#bfd7ea", "#8d6b94")) +
xlab("Election choice (2020)") +
ylab("Usually trust the government") +
scale_y_continuous(labels = scales::percent) +
guides(fill = "none") +
theme_minimal()
```
FIGURE 7\.3: Relationship between candidate selection and trust in government, ANES 2020
Looking at Figure [7\.3](c07-modeling.html#fig:model-logisticexamp-plot), it appears that people who voted for Trump are more likely to say that they usually have trust in the government compared to those who voted for Biden and other candidates. To determine if this insight is accurate, we next fit the model.
```
logistic_trust_vote <- anes_des_der %>%
svyglm(
design = .,
formula = TrustGovernmentUsually ~ VotedPres2020_selection,
family = quasibinomial
)
```
```
tidy(logistic_trust_vote) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.6: Logistic regression output predicting trust in government by presidential candidate selection, ANES 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | −1\.96 | 0\.07 | −27\.45 | \<0\.0001 |
| VotedPres2020\_selectionTrump | 0\.43 | 0\.09 | 4\.72 | \<0\.0001 |
| VotedPres2020\_selectionOther | −0\.65 | 0\.44 | −1\.49 | 0\.1429 |
In Table [7\.6](c07-modeling.html#tab:model-logisticexamp-tab), we can see the estimated coefficients (`estimate`), estimated standard errors of the coefficients (`std.error`), the t\-statistic (`statistic`), and the p\-value for each coefficient. This output indicates that respondents who voted for Trump are more likely to usually have trust in the government compared to those who voted for Biden (the reference level). The coefficient of 0\.435 represents the increase in the log odds of usually trusting the government associated with voting for Trump rather than Biden.
In most cases, it is easier to talk about the odds instead of the log odds. To do this, we need to exponentiate the coefficients. We can use the same `tidy()` function but include the argument `exponentiate = TRUE` to see the odds.
```
tidy(logistic_trust_vote, exponentiate = TRUE) %>%
select(term, estimate) %>%
gt() %>%
fmt_number()
```
TABLE 7\.7: Logistic regression predicting trust in government by presidential candidate selection with exponentiated coefficients (odds), ANES 2020
| term | estimate |
| --- | --- |
| (Intercept) | 0\.14 |
| VotedPres2020\_selectionTrump | 1\.54 |
| VotedPres2020\_selectionOther | 0\.52 |
Using the output in Table [7\.7](c07-modeling.html#tab:model-logisticexamp-model-odds-tab), we can say that the odds of usually trusting the government for someone who voted for Trump are 1\.54 times (154%) the odds for a person who voted for Biden (the reference level). In comparison, the odds for a person who voted for neither Biden nor Trump are 52% of the odds for someone who voted for Biden.
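The values in Table [7\.7](c07-modeling.html#tab:model-logisticexamp-model-odds-tab) can also be reproduced by exponentiating the log\-odds coefficients directly, as sketched below with the rounded estimates from Table [7\.6](c07-modeling.html#tab:model-logisticexamp-tab); in practice, we would exponentiate the unrounded values from `coef(logistic_trust_vote)`.
```
# Converting rounded log-odds coefficients to odds ratios
exp(0.43)  # Trump vs. Biden (reference level)
#> [1] 1.537258

exp(-0.65) # Other vs. Biden (reference level)
#> [1] 0.5220458
```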
As with linear regression, the `augment()` function can be used to obtain predicted values. By default, the predictions are on the scale of the link function (the log odds), not the probability scale. To predict probabilities, add the argument `type.predict = "response"` as demonstrated below:
```
logistic_trust_vote %>%
augment(type.predict = "response") %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
) %>%
select(
TrustGovernmentUsually,
VotedPres2020_selection,
.fitted,
.se.fit
)
```
```
## # A tibble: 6,212 × 4
## TrustGovernmentUsually VotedPres2020_selection .fitted .se.fit
## <lgl> <fct> <dbl> <dbl>
## 1 FALSE Other 0.0681 0.0279
## 2 FALSE Biden 0.123 0.00772
## 3 FALSE Biden 0.123 0.00772
## 4 FALSE Trump 0.178 0.00919
## 5 FALSE Biden 0.123 0.00772
## 6 FALSE Trump 0.178 0.00919
## 7 FALSE Biden 0.123 0.00772
## 8 FALSE Biden 0.123 0.00772
## 9 TRUE Biden 0.123 0.00772
## 10 FALSE Biden 0.123 0.00772
## # ℹ 6,202 more rows
```
#### Example 2: Interaction effects
Let’s look at another example with interaction effects. If we’re interested in understanding the demographics of people who voted for Biden among all voters in 2020, we could include the indicator of whether respondents voted early (`EarlyVote2020`) and their income group (`Income7`) in our model.
First, we need to subset the data to 2020 voters and then create an indicator for who voted for Biden.
```
anes_des_ind <- anes_des %>%
filter(!is.na(VotedPres2020_selection)) %>%
mutate(VoteBiden = case_when(
VotedPres2020_selection == "Biden" ~ 1,
TRUE ~ 0
))
```
Let’s first look at the main effects of income grouping and early voting behavior.
```
log_biden_main <- anes_des_ind %>%
mutate(
EarlyVote2020 = fct_relevel(EarlyVote2020, "No", after = 0)
) %>%
svyglm(
design = .,
formula = VoteBiden ~ EarlyVote2020 + Income7,
family = quasibinomial
)
```
```
tidy(log_biden_main) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.8: Logistic regression output for predicting voting for Biden given early voting behavior and income; main effects only, ANES 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 1\.28 | 0\.43 | 2\.99 | 0\.0047 |
| EarlyVote2020Yes | 0\.44 | 0\.34 | 1\.29 | 0\.2039 |
| Income7$20k to \< 40k | −1\.06 | 0\.49 | −2\.18 | 0\.0352 |
| Income7$40k to \< 60k | −0\.78 | 0\.42 | −1\.86 | 0\.0705 |
| Income7$60k to \< 80k | −1\.24 | 0\.70 | −1\.77 | 0\.0842 |
| Income7$80k to \< 100k | −0\.66 | 0\.64 | −1\.02 | 0\.3137 |
| Income7$100k to \< 125k | −1\.02 | 0\.54 | −1\.89 | 0\.0662 |
| Income7$125k or more | −1\.25 | 0\.44 | −2\.87 | 0\.0065 |
This main effects model (see Table [7\.8](c07-modeling.html#tab:model-logisticexamp-biden-main-tab)) shows a significant negative coefficient of −1\.25 for incomes of $125,000 or more (p\-value of 0\.0065\). This indicates that people with incomes of $125,000 or more were less likely to vote for Biden in the 2020 election compared to people with incomes of less than $20,000 (the reference level).
Although early voting behavior was not significant, there may be an interaction between income and early voting behavior. To determine this, we can create a model that includes the interaction effects:
```
log_biden_int <- anes_des_ind %>%
mutate(
EarlyVote2020 = fct_relevel(EarlyVote2020, "No", after = 0)
) %>%
svyglm(
design = .,
formula = VoteBiden ~ (EarlyVote2020 + Income7)^2,
family = quasibinomial
)
```
```
tidy(log_biden_int) %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 7\.9: Logistic regression output for predicting voting for Biden given early voting behavior and income; with interaction, ANES 2020
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| (Intercept) | 2\.32 | 0\.67 | 3\.45 | 0\.0015 |
| EarlyVote2020Yes | −0\.81 | 0\.78 | −1\.03 | 0\.3081 |
| Income7$20k to \< 40k | −2\.33 | 0\.87 | −2\.68 | 0\.0113 |
| Income7$40k to \< 60k | −1\.67 | 0\.89 | −1\.87 | 0\.0700 |
| Income7$60k to \< 80k | −2\.05 | 1\.05 | −1\.96 | 0\.0580 |
| Income7$80k to \< 100k | −3\.42 | 1\.12 | −3\.06 | 0\.0043 |
| Income7$100k to \< 125k | −2\.33 | 1\.07 | −2\.17 | 0\.0368 |
| Income7$125k or more | −2\.09 | 0\.92 | −2\.28 | 0\.0289 |
| EarlyVote2020Yes:Income7$20k to \< 40k | 1\.60 | 0\.95 | 1\.69 | 0\.1006 |
| EarlyVote2020Yes:Income7$40k to \< 60k | 0\.99 | 1\.00 | 0\.99 | 0\.3289 |
| EarlyVote2020Yes:Income7$60k to \< 80k | 0\.90 | 1\.14 | 0\.79 | 0\.4373 |
| EarlyVote2020Yes:Income7$80k to \< 100k | 3\.22 | 1\.16 | 2\.78 | 0\.0087 |
| EarlyVote2020Yes:Income7$100k to \< 125k | 1\.64 | 1\.11 | 1\.48 | 0\.1492 |
| EarlyVote2020Yes:Income7$125k or more | 1\.00 | 1\.14 | 0\.88 | 0\.3867 |
The results from the interaction model (see Table [7\.9](c07-modeling.html#tab:model-logisticexamp-biden-int-tab)) show that one interaction between early voting behavior and income is significant. To better understand what this interaction means, we can plot the predicted probabilities with an interaction plot. Let’s first obtain the predicted probabilities for each possible combination of variables using the `augment()` function.
```
log_biden_pred <- log_biden_int %>%
augment(type.predict = "response") %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
.fitted = as.numeric(.fitted)
) %>%
select(VoteBiden, EarlyVote2020, Income7, .fitted, .se.fit)
```
The y\-axis is the predicted probabilities, one of our x\-variables is on the x\-axis, and the other is represented by multiple lines. Figure [7\.4](c07-modeling.html#fig:model-logisticexamp-biden-plot) shows the interaction plot with early voting behavior on the x\-axis and income represented by the lines.
```
log_biden_pred %>%
filter(VoteBiden == 1) %>%
distinct() %>%
arrange(EarlyVote2020, Income7) %>%
ggplot(aes(
x = EarlyVote2020,
y = .fitted,
group = Income7,
color = Income7,
linetype = Income7
)) +
geom_line(linewidth = 1.1) +
scale_color_manual(values = colorRampPalette(book_colors)(7)) +
ylab("Predicted Probability of Voting for Biden") +
labs(
x = "Voted Early",
color = "Income",
linetype = "Income"
) +
coord_cartesian(ylim = c(0, 1)) +
guides(fill = "none") +
theme_minimal()
```
FIGURE 7\.4: Interaction plot of early voting and income predicting the probability of voting for Biden
From Figure [7\.4](c07-modeling.html#fig:model-logisticexamp-biden-plot), we can see that people in most income groups (e.g., $40,000 to less than $60,000\) have roughly the same probability of voting for Biden regardless of whether they voted early. However, those in the $80,000 to less than $100,000 group, which has the significant interaction in Table [7\.9](c07-modeling.html#tab:model-logisticexamp-biden-int-tab), were more likely to vote for Biden if they voted early than if they did not.
Interactions in models can be difficult to understand from the coefficients alone. Using these interaction plots can help others understand the nuances of the results.
7\.5 Exercises
--------------
1. The type of housing unit may have an impact on energy expenses. Is there any relationship between housing unit type (`HousingUnitType`) and total energy expenditure (`TOTALDOL`)? First, find the average energy expenditure by housing unit type as a descriptive analysis and then do the test. The reference level in the comparison should be the housing unit type that is most common.
2. Does temperature play a role in electricity expenditure? Cooling degree days are a measure of how hot a place is. CDD65 for a given day indicates the number of degrees Fahrenheit warmer than 65°F (18\.3°C) it is in a location. On a day that averages 65°F and below, CDD65\=0, while a day that averages 85°F (29\.4°C) would have CDD65\=20 because it is 20 degrees Fahrenheit warmer ([U.S. Energy Information Administration 2023d](#ref-eia-cdd)). Each day in the year is summed up to indicate how hot the place is throughout the year. Similarly, HDD65 indicates the days colder than 65°F. Can energy expenditure be predicted using these temperature indicators along with square footage? Is there a significant relationship? Include main effects and two\-way interactions.
3. Continuing with our results from Exercise 2, create a plot between the actual and predicted expenditures and a residual plot for the predicted expenditures.
4. Early voting expanded in 2020 ([Sprunt 2020](#ref-npr-voting-trend)). Build a logistic model predicting early voting in 2020 (`EarlyVote2020`) using age (`Age`), education (`Education`), and party identification (`PartyID`). Include two\-way interactions.
5. Continuing from Exercise 4, predict the probability of early voting for two people. Both are 28 years old and have a graduate degree; however, one person is a strong Democrat, and the other is a strong Republican.
Chapter 8 Communication of results
==================================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
library(gt)
library(gtsummary)
```
We are using data from ANES as described in Chapter [4](c04-getting-started.html#c04-getting-started). As a reminder, here is the code to create the design object used throughout this chapter. For ANES, we need to adjust the weight so it sums to the population instead of the sample (see the ANES documentation and Chapter [4](c04-getting-started.html#c04-getting-started) for more information).
```
targetpop <- 231592693
anes_adjwgt <- anes_2020 %>%
mutate(Weight = Weight / sum(Weight) * targetpop)
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = Stratum,
ids = VarUnit,
nest = TRUE
)
```
8\.1 Introduction
-----------------
After finishing the analysis and modeling, we proceed to the task of communicating the survey results. Our audience may range from seasoned researchers familiar with our survey data to newcomers encountering the information for the first time. We should aim to explain the methodology and analysis while presenting findings in an accessible way, and it is our responsibility to report information with care.
Before beginning any dissemination of results, consider questions such as:
* How are we presenting results? Examples include a website, print, or other media. Based on the medium, we might limit or enhance the use of graphical representation.
* What is the audience’s familiarity with the study and/or data? Audiences can range from the general public to data experts. If we anticipate limited knowledge about the study, we should provide detailed descriptions (we discuss recommendations later in the chapter).
* What are we trying to communicate? It could be summary statistics, trends, patterns, or other insights. Tables may suit summary statistics, while plots are better at conveying trends and patterns.
* Is the audience accustomed to interpreting plots? If not, include explanatory text to guide them on how to interpret the plots effectively.
* What is the audience’s statistical knowledge? If the audience does not have a strong statistics background, provide text on standard errors, confidence intervals, and other estimate types to enhance understanding.
8\.2 Describing results through text
------------------------------------
As analysts, we often emphasize the data, and communicating results can sometimes be overlooked. To be effective communicators, we need to identify the appropriate information to share with our audience. Chapters [2](c02-overview-surveys.html#c02-overview-surveys) and [3](c03-survey-data-documentation.html#c03-survey-data-documentation) provide insights into factors we need to consider during analysis, and they remain relevant when presenting results to others.
### 8\.2\.1 Methodology
If we are using existing data, methodologically sound surveys provide documentation about how the survey was fielded, the questionnaires, and other necessary information for analyses. For example, the survey’s methodology reports should include the population of interest, sampling procedures, response rates, questionnaire documentation, weighting, and a general overview of disclosure statements. Many American organizations follow the American Association for Public Opinion Research’s (AAPOR) [Transparency Initiative](https://aapor.org/standards-and-ethics/transparency-initiative). The AAPOR Transparency Initiative requires organizations to include specific details in their methodology, making it clear how we can and should analyze and interpret the results. Being transparent about these methods is vital for the scientific rigor of the field.
The details provided in Chapter [2](c02-overview-surveys.html#c02-overview-surveys) about the survey process should be shared with the audience when presenting the results. When using publicly available data, like the examples in this book, we can often link to the methodology report in our final output. We should also provide high\-level information for the audience to quickly grasp the context around the findings. For example, we can mention when and where the study was conducted, the population’s age range, or other contextual details. This information helps the audience understand how generalizable the results are.
Providing this material is especially important when no methodology report is available for the analyzed data. For example, if we conducted a new survey for a specific purpose, we should document and present all the pertinent information during the analysis and reporting process. Adhering to the AAPOR Transparency Initiative guidelines is a reliable method to guarantee that all essential information is communicated to the audience.
### 8\.2\.2 Analysis
Along with the survey methodology and weight calculations, we should also share our approach to preparing, cleaning, and analyzing the data. For example, in Chapter [6](c06-statistical-testing.html#c06-statistical-testing), we compared education distributions from the ANES survey to the American Community Survey (ACS). To make the comparison, we had to collapse the education categories provided in the ANES data to match the ACS. The process for this particular example may seem straightforward (like combining bachelor’s and graduate degrees into a single category), but there are multiple ways to deal with the data. Our choice is just one of many. We should document both the original ANES question and response options and the steps we took to match them with ACS data. This transparency helps clarify our analysis to our audience.
Missing data is another instance where we want to be unambiguous and upfront with our audience. In this book, numerous examples and exercises remove missing data, as this is often the easiest way to handle them. However, there are circumstances where missing data holds substantive importance, and excluding them could introduce bias (see Chapter [11](c11-missing-data.html#c11-missing-data)). Being transparent about our handling of missing data is important to maintaining the integrity of our analysis and ensuring a comprehensive understanding of the results.
### 8\.2\.3 Results
While tables and graphs are commonly used to communicate results, there are instances where text can be more effective in sharing information. Narrative details, such as context around point estimates or model coefficients, can go a long way in improving our communication. We have several strategies to effectively convey the significance of the data to the audience through text.
First, we can highlight important data elements in a sentence using plain language. For example, if we were looking at election polling data conducted before an election, we could say:
> As of \[DATE], an estimated XX% of registered U.S. voters say they will vote for \[CANDIDATE NAME] for president in the \[YEAR] general election.
This sentence provides key pieces of information in a straightforward way:
1. \[DATE]: Given that polling data are time\-specific, providing the date of reference lets the audience know when these data were valid.
2. Registered U.S. voters: This tells the audience who we surveyed, letting them know the population of interest.
3. XX%: This part provides the estimated percentage of people voting for a specific candidate for a specific office.
4. \[YEAR] general election: Adding this gives more context about the election type and year. The estimate would take on a different meaning if we changed it to a primary election instead of a general election.
We also included the word “estimated.” When presenting aggregate survey results, we have errors around each estimate. We want to convey this uncertainty rather than talk in absolutes. Words like “estimated,” “on average,” or “around” can help communicate this uncertainty to the audience. Instead of saying “XX%,” we can also say “XX% (\+/\- Y%)” to show the margin of error. Confidence intervals can also be incorporated into the text to assist readers.
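As a small illustration of reporting an estimate with its uncertainty in text, the sketch below formats a point estimate and an approximate 95% margin of error into a sentence. The numbers are placeholders; in a real report, they would come from `survey_mean()` output.
```
# Placeholder values standing in for survey_mean() output
pct_est <- 0.532 # estimated proportion
se_est <- 0.014  # standard error of the proportion

# Approximate 95% margin of error using a normal critical value
moe <- 1.96 * se_est

sprintf(
  "An estimated %.1f%% (+/- %.1f%%) of registered voters ...",
  100 * pct_est, 100 * moe
)
#> [1] "An estimated 53.2% (+/- 2.7%) of registered voters ..."
```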
Second, providing context and discussing the meaning behind a point estimate can help the audience glean some insight into why the data are important. For example, when comparing two values, it can be helpful to highlight if there are statistically significant differences and explain the impact and relevance of this information. This is where we should do our best to be mindful of biases and present the facts logically.
Keep in mind that how we discuss these findings can greatly influence how the audience interprets them. If we include speculation, phrases like “the authors speculate” or “these findings may indicate” convey the uncertainty around the idea while still offering a plausible interpretation. Additionally, we can present alternative viewpoints or competing discussion points to explain the uncertainty in the results.
8\.3 Visualizing data
---------------------
Although discussing key findings in the text is important, presenting large amounts of data in tables or visualizations is often more digestible for the audience. Effectively combining text, tables, and graphs can be powerful in communicating results. This section provides examples of using the {gt}, {gtsummary}, and {ggplot2} packages to enhance the dissemination of results ([Iannone et al. 2024](#ref-R-gt); [Sjoberg et al. 2021](#ref-gtsummarysjo); [Wickham 2016](#ref-ggplot2wickham)).
### 8\.3\.1 Tables
Tables are a great way to provide a large amount of data when individual data points need to be examined. However, it is important to present tables in a reader\-friendly format. Numbers should align, rows and columns should be easy to follow, and the table size should not compromise readability. Using key visualization techniques, we can create tables that are informative and nice to look at. Many packages create easy\-to\-read tables (e.g., {knitr}’s `kable()` \+ {kableExtra}, {gt}, {gtsummary}, {DT}, {formattable}, {flextable}, {reactable}). We appreciate the {gt} package’s flexibility, pipe\-friendly syntax (e.g., `%>%`), and numerous extensions. While we focus on {gt} here, we encourage learning about the others, as they may have additional helpful features. Please note that, at this time, {gtsummary} lacks some features needed for it to be widely used in survey analysis, particularly the ability to work with replicate designs. We provide one example using {gtsummary} and hope it evolves into a more comprehensive tool over time.
#### Transitioning {srvyr} output to a {gt} table
Let’s start by using some of the data we calculated earlier in this book. In Chapter [6](c06-statistical-testing.html#c06-statistical-testing), we looked at data on trust in government with the proportions calculated below:
```
trust_gov <- anes_des %>%
drop_na(TrustGovernment) %>%
group_by(TrustGovernment) %>%
summarize(trust_gov_p = survey_prop())
trust_gov
```
```
## # A tibble: 5 × 3
## TrustGovernment trust_gov_p trust_gov_p_se
## <fct> <dbl> <dbl>
## 1 Always 0.0155 0.00204
## 2 Most of the time 0.132 0.00553
## 3 About half the time 0.309 0.00829
## 4 Some of the time 0.434 0.00855
## 5 Never 0.110 0.00566
```
The default output generated by R may work for initial viewing inside our IDE or when creating basic output in an R Markdown or Quarto document. However, when presenting these results in other publications, such as the print version of this book or with other formal dissemination modes, modifying the display can improve our reader’s experience.
Looking at the output from `trust_gov`, a couple of improvements stand out: (1\) switching to percentages instead of proportions and (2\) removing the variable names as column headers. The {gt} package is a good tool for implementing better labeling and creating publishable tables. Let’s walk through some code as we make a few changes to improve the table’s usefulness.
First, we initiate the formatted table with the `gt()` function on the `trust_gov` tibble previously created. Next, we use the `rowname_col` argument of `gt()` to designate the `TrustGovernment` column as the label for each row (called the table “stub”). We apply the `cols_label()` function to create informative column labels instead of variable names and then the `tab_spanner()` function to add a label across multiple columns. In this case, we label all columns except the stub with “Trust in Government, 2020\.” We then format the proportions into percentages with the `fmt_percent()` function and reduce the number of decimals shown to one with `decimals = 1`. Finally, the `tab_caption()` function adds a table caption for the HTML version of the book. We can use the caption for cross\-referencing in R Markdown, Quarto, and bookdown, as well as adding it to the list of tables in the book. These changes are all seen in Table [8\.1](c08-communicating-results.html#tab:results-table-gt1-tab).
```
trust_gov_gt <- trust_gov %>%
gt(rowname_col = "TrustGovernment") %>%
cols_label(
trust_gov_p = "%",
trust_gov_p_se = "s.e. (%)"
) %>%
tab_spanner(
label = "Trust in Government, 2020",
columns = c(trust_gov_p, trust_gov_p_se)
) %>%
fmt_percent(decimals = 1)
```
```
trust_gov_gt %>%
tab_caption("Example of {gt} table with trust in government estimate")
```
TABLE 8\.1: Example of {gt} table with trust in government estimate
| | Trust in Government, 2020 | |
| --- | --- | --- |
| | % | s.e. (%) |
| Always | 1\.6% | 0\.2% |
| Most of the time | 13\.2% | 0\.6% |
| About half the time | 30\.9% | 0\.8% |
| Some of the time | 43\.4% | 0\.9% |
| Never | 11\.0% | 0\.6% |
We can add a few more enhancements, such as a title (which is different from a caption[27](#fn27)), a data source note, and a footnote with the question information, using the functions `tab_header()`, `tab_source_note()`, and `tab_footnote()`. If having the percentage sign in both the header and the cells seems redundant, we can opt for `fmt_number()` instead of `fmt_percent()` and scale the number by 100 with `scale_by = 100`. The resulting table is displayed in Table [8\.2](c08-communicating-results.html#tab:results-table-gt2-tab).
```
trust_gov_gt2 <- trust_gov_gt %>%
tab_header("American voter's trust
in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
) %>%
fmt_number(
scale_by = 100,
decimals = 1
)
```
```
trust_gov_gt2
```
TABLE 8\.2: Example of {gt} table with trust in government estimates and additional context
| American voter's trust in the federal government, 2020 | | |
| --- | --- | --- |
| | Trust in Government, 2020 | |
| | % | s.e. (%) |
| Always | 1\.6 | 0\.2 |
| Most of the time | 13\.2 | 0\.6 |
| About half the time | 30\.9 | 0\.8 |
| Some of the time | 43\.4 | 0\.9 |
| Never | 11\.0 | 0\.6 |
| *Source*: American National Election Studies, 2020 | | |
| Question text: How often can you trust the federal government in Washington to do what is right? | | |
#### Expanding tables using {gtsummary}
The {gtsummary} package simultaneously summarizes data and creates publication\-ready tables. Initially designed for clinical trial data, it has been extended to include survey analysis in certain capacities. At this time, it is only compatible with survey objects that use Taylor series linearization and not replicate methods. While it offers a restricted set of summary statistics, the following are available for categorical variables:
* `{n}` frequency
* `{N}` denominator, or respondent population
* `{p}` proportion (stylized as a percentage by default)
* `{p.std.error}` standard error of the sample proportion
* `{deff}` design effect of the sample proportion
* `{n_unweighted}` unweighted frequency
* `{N_unweighted}` unweighted denominator
* `{p_unweighted}` unweighted formatted proportion (stylized as a percentage by default)
The following summary statistics are available for continuous variables:
* `{median}` median
* `{mean}` mean
* `{mean.std.error}` standard error of the sample mean
* `{deff}` design effect of the sample mean
* `{sd}` standard deviation
* `{var}` variance
* `{min}` minimum
* `{max}` maximum
* `{p#}` any integer percentile, where `#` is an integer from 0 to 100
* `{sum}` sum
In the following example, we build a table using {gtsummary}, similar to the table in the {gt} example. The main function we use is `tbl_svysummary()`. In this function, we include the variables we want to analyze in the `include` argument and define the statistics we want to display in the `statistic` argument. To specify the statistics, we apply the syntax from the {glue} package, where we enclose the names of the statistics we want to insert within curly braces. We must specify the desired statistics using the names listed above. For example, to specify that we want the proportion followed by the standard error of the proportion in parentheses, we use `{p} ({p.std.error})`. Table [8\.3](c08-communicating-results.html#tab:results-gts-ex-1-tab) displays the resulting table.
```
anes_des_gtsum <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})")
)
```
```
anes_des_gtsum
```
TABLE 8\.3: Example of {gtsummary} table with trust in government estimates
| **Characteristic** | **N \= 231,034,125**¹ |
| --- | --- |
| PRE: How often trust government in Washington to do what is right \[revised] | |
| Always | 1\.6 (0\.00\) |
| Most of the time | 13 (0\.01\) |
| About half the time | 31 (0\.01\) |
| Some of the time | 43 (0\.01\) |
| Never | 11 (0\.01\) |
| Unknown | 673,773 |
| ¹ % (SE(%)) | |
The default table (shown in Table [8\.3](c08-communicating-results.html#tab:results-gts-ex-1-tab)) includes the weighted number of missing (or Unknown) records. The standard error is reported as a proportion, while the proportion is styled as a percentage. In the next step, we remove the Unknown category by setting the `missing` argument to `"no"` and format the standard error as a percentage using the `digits` argument. To improve the table for publication, we provide a more polished label for the `TrustGovernment` variable using the `label` argument. The resulting table is displayed in Table [8\.4](c08-communicating-results.html#tab:results-gts-ex-2-tab).
```
anes_des_gtsum2 <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020")
)
```
```
anes_des_gtsum2
```
TABLE 8\.4: Example of {gtsummary} table with trust in government estimates with labeling and digits options
| **Characteristic** | **N \= 231,034,125**¹ |
| --- | --- |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| ¹ % (SE(%)) | |
Table [8\.4](c08-communicating-results.html#tab:results-gts-ex-2-tab) is closer to our ideal output, but we still want to make a few changes. To exclude the term “Characteristic” and the estimated population size (N), we can modify the header using the `modify_header()` function to update the `label`. Further adjustments can be made based on personal preferences, organizational guidelines, or other style guides. If we prefer having the standard error in the header, similar to the {gt} table, instead of in the footnote (the {gtsummary} default), we can make these changes by specifying `stat_0` in the `modify_header()` function. Additionally, using `modify_footnote()` with `update = everything() ~ NA` removes the standard error from the footnote. After transforming the object into a {gt} table using `as_gt()`, we can add footnotes and a title using the same methods explained in the previous section. This updated table is displayed in Table [8\.5](c08-communicating-results.html#tab:results-gts-ex-3-tab).
```
anes_des_gtsum3 <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020")
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(
label = " ",
stat_0 = "% (s.e.)"
) %>%
as_gt() %>%
tab_header("American voter's trust
in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
)
```
```
anes_des_gtsum3
```
TABLE 8\.5: Example of {gtsummary} table with trust in government estimates with more labeling options and context
| American voter's trust in the federal government, 2020 | |
| --- | --- |
| | % (s.e.) |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| *Source*: American National Election Studies, 2020 | |
| Question text: How often can you trust the federal government in Washington to do what is right? | |
We can also include summaries of more than one variable in the table. These variables can be either categorical or continuous. In the following code and Table [8\.6](c08-communicating-results.html#tab:results-gts-ex-4-tab), we add the mean age by updating the `include`, `statistic`, and `digits` arguments.
```
anes_des_gtsum4 <- anes_des %>%
tbl_svysummary(
include = c(TrustGovernment, Age),
statistic = list(
all_categorical() ~ "{p} ({p.std.error})",
all_continuous() ~ "{mean} ({mean.std.error})"
),
missing = "no",
digits = list(TrustGovernment ~ style_percent,
Age ~ c(1, 2)),
label = list(TrustGovernment ~ "Trust in Government, 2020")
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(label = " ",
stat_0 = "% (s.e.)") %>%
as_gt() %>%
tab_header(
"American voter's trust in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
) %>%
tab_caption("Example of {gtsummary} table with trust in government
estimates and average age")
```
```
anes_des_gtsum4
```
TABLE 8\.6: Example of {gtsummary} table with trust in government estimates and average age
| American voter's trust in the federal government, 2020 | |
| --- | --- |
| | % (s.e.) |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| PRE: SUMMARY: Respondent age | 47\.3 (0\.36\) |
| *Source*: American National Election Studies, 2020 | |
| Question text: How often can you trust the federal government in Washington to do what is right? | |
With {gtsummary}, we can also calculate statistics by different groups. Let’s modify the previous example (displayed in Table [8\.6](c08-communicating-results.html#tab:results-gts-ex-4-tab)) to analyze data on whether a respondent voted for president in 2020\. We update the `by` argument and refine the header. The resulting table is displayed in Table [8\.7](c08-communicating-results.html#tab:results-gts-ex-5-tab).
```
anes_des_gtsum5 <- anes_des %>%
drop_na(VotedPres2020) %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020"),
by = VotedPres2020
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(
label = " ",
stat_1 = "Voted",
stat_2 = "Didn't vote"
) %>%
modify_spanning_header(all_stat_cols() ~ "% (s.e.)") %>%
as_gt() %>%
tab_header(
"American voter's trust
in the federal government by whether they voted
in the 2020 presidential election"
) %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
)
```
```
anes_des_gtsum5
```
TABLE 8\.7: Example of {gtsummary} table with trust in government estimates by voting status
| American voter's trust in the federal government by whether they voted in the 2020 presidential election | | |
| --- | --- | --- |
| | % (s.e.) | |
| | Voted | Didn’t vote |
| Trust in Government, 2020 | | |
| Always | 1\.1 (0\.2\) | 0\.9 (0\.9\) |
| Most of the time | 13 (0\.6\) | 19 (5\.3\) |
| About half the time | 32 (0\.8\) | 30 (8\.6\) |
| Some of the time | 45 (0\.8\) | 45 (8\.2\) |
| Never | 9\.1 (0\.7\) | 5\.2 (2\.2\) |
| *Source*: American National Election Studies, 2020 | | |
| Question text: How often can you trust the federal government in Washington to do what is right? | | |
### 8\.3\.2 Charts and plots
Survey analysis can yield an abundance of printed summary statistics and models. Even with the most careful analysis, interpreting the results can be overwhelming. This is where charts and plots play a key role in our work. By transforming complex data into a visual representation, we can recognize patterns, relationships, and trends with greater ease.
R has numerous packages for creating compelling and insightful charts. In this section, we focus on {ggplot2}, a member of the {tidyverse} collection of packages. Known for its power and flexibility, {ggplot2} is an invaluable tool for creating a wide range of data visualizations ([Wickham 2016](#ref-ggplot2wickham)).
The {ggplot2} package follows the “grammar of graphics,” a framework that incrementally adds layers of chart components. This approach allows us to customize visual elements such as scales, colors, labels, and annotations to enhance the clarity of our results. After creating the survey design object, we can modify it to include additional outcomes and calculate estimates for our desired data points. Below, we create a binary variable `TrustGovernmentUsually`, which is `TRUE` when `TrustGovernment` is “Always” or “Most of the time” and `FALSE` otherwise. Then, we calculate the percentage of people who usually trust the government based on their vote in the 2020 presidential election (`VotedPres2020_selection`). We remove the cases where people did not vote or did not indicate their choice.
```
anes_des_der <- anes_des %>%
mutate(TrustGovernmentUsually = case_when(
is.na(TrustGovernment) ~ NA,
TRUE ~ TrustGovernment %in% c("Always", "Most of the time")
)) %>%
drop_na(VotedPres2020_selection) %>%
group_by(VotedPres2020_selection) %>%
summarize(
pct_trust = survey_mean(
TrustGovernmentUsually,
na.rm = TRUE,
proportion = TRUE,
vartype = "ci"
),
.groups = "drop"
)
anes_des_der
```
```
## # A tibble: 3 × 4
## VotedPres2020_selection pct_trust pct_trust_low pct_trust_upp
## <fct> <dbl> <dbl> <dbl>
## 1 Biden 0.123 0.109 0.140
## 2 Trump 0.178 0.161 0.198
## 3 Other 0.0681 0.0290 0.152
```
Now, we can begin creating our chart with {ggplot2}. First, we set up our plot with `ggplot()`. Next, we define the data points to be displayed using aesthetics, or `aes`. Aesthetics represent the visual properties of the objects in the plot. In the following example, we create a bar chart of the percentage of people who usually trust the government by who they voted for in the 2020 election. To do this, we want to have who they voted for on the x\-axis (`VotedPres2020_selection`) and the percent they usually trust the government on the y\-axis (`pct_trust`). We specify these variables in `ggplot()` and then indicate we want a bar chart with `geom_bar()`. The resulting plot is displayed in Figure [8\.1](c08-communicating-results.html#fig:results-plot1).
```
p <- anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust
)) +
geom_bar(stat = "identity")
p
```
FIGURE 8\.1: Bar chart of trust in government, by chosen 2020 presidential candidate
This is a great starting point: it appears that a higher percentage of people state they usually trust the government among those who voted for Trump compared to those who voted for Biden or other candidates. Now, what if we want to introduce color to better differentiate the three groups? We can add `fill` to the aesthetics in `aes()`, indicating that we want to use distinct colors for each value of `VotedPres2020_selection`. In this instance, each group is displayed in a different color in Figure [8\.2](c08-communicating-results.html#fig:results-plot2).
```
pcolor <- anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity")
pcolor
```
FIGURE 8\.2: Bar chart of trust in government by chosen 2020 presidential candidate, with colors
Let’s say we wanted to follow proper statistical analysis practice and incorporate variability in our plot. We can add another geom, `geom_errorbar()`, to display the confidence intervals on top of our existing `geom_bar()` layer. We can add the layer using a plus sign (`+`). The resulting graph is displayed in Figure [8\.3](c08-communicating-results.html#fig:results-plot3).
```
pcol_error <- anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity") +
geom_errorbar(
aes(
ymin = pct_trust_low,
ymax = pct_trust_upp
),
width = .2
)
pcol_error
```
FIGURE 8\.3: Bar chart of trust in government by chosen 2020 presidential candidate, with colors and error bars
We can continue adding to our plot until we achieve our desired look. For example, since the color legend does not contribute meaningful information, we can eliminate it with `guides(fill = "none")`. We can also specify colors for `fill` using `scale_fill_manual()`. Inside this function, we provide a vector of values corresponding to the colors in our plot. These values are hexadecimal (hex) color codes, denoted by a leading pound sign `#` followed by six hexadecimal digits (0\-9 and A\-F). The hex code `#0b3954` used below is a dark blue. There are many tools online that help pick hex codes, such as htmlcolorcodes.com. Additionally, Figure [8\.4](c08-communicating-results.html#fig:results-plot4) incorporates better labels for the x and y axes (`xlab()`, `ylab()`), a title (`labs(title=)`), and a footnote with the data source (`labs(caption=)`).
```
pfull <-
anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity") +
geom_errorbar(
aes(
ymin = pct_trust_low,
ymax = pct_trust_upp
),
width = .2
) +
scale_fill_manual(values = c("#0b3954", "#bfd7ea", "#8d6b94")) +
xlab("Election choice (2020)") +
ylab("Usually trust the government") +
scale_y_continuous(labels = scales::percent) +
guides(fill = "none") +
labs(
title = "Percent of voters who usually trust the government
by chosen 2020 presidential candidate",
caption = "Source: American National Election Studies, 2020"
)
pfull
```
FIGURE 8\.4: Bar chart of trust in government by chosen 2020 presidential candidate with colors, labels, error bars, and title
What we have explored in this section are just the foundational aspects of {ggplot2}, and the capabilities of this package extend far beyond what we have covered. Advanced features such as annotation, faceting, and theming allow for more sophisticated and customized visualizations. The {ggplot2} book by Wickham ([2016](#ref-ggplot2wickham)) is a comprehensive guide to learning more about this powerful tool.
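As a small illustration of those extensions, the sketch below (not code from the book) layers a built-in theme, a bolded title, and a text annotation onto the `pfull` plot created above; the annotation text and its position are arbitrary choices for this example.

```
# A minimal sketch building on pfull from the previous example
pfull +
  theme_minimal() + # swap in a lighter built-in theme
  theme(plot.title = element_text(face = "bold")) + # bold the title
  annotate(
    "text",
    x = "Trump", y = 0.21, # position chosen by eye for this example
    label = "Highest share of usual trust",
    size = 3
  )
```

Because these are simply additional layers, they can be added or removed without changing the underlying estimates or the geoms built earlier.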
Missing data is another instance where we want to be unambiguous and upfront with our audience. In this book, numerous examples and exercises remove missing data, as this is often the easiest way to handle them. However, there are circumstances where missing data holds substantive importance, and excluding them could introduce bias (see Chapter [11](c11-missing-data.html#c11-missing-data)). Being transparent about our handling of missing data is important to maintaining the integrity of our analysis and ensuring a comprehensive understanding of the results.
### 8\.2\.3 Results
While tables and graphs are commonly used to communicate results, there are instances where text can be more effective in sharing information. Narrative details, such as context around point estimates or model coefficients, can go a long way in improving our communication. We have several strategies to effectively convey the significance of the data to the audience through text.
First, we can highlight important data elements in a sentence using plain language. For example, if we were looking at election polling data conducted before an election, we could say:
> As of \[DATE], an estimated XX% of registered U.S. voters say they will vote for \[CANDIDATE NAME] for president in the \[YEAR] general election.
This sentence provides key pieces of information in a straightforward way:
1. \[DATE]: Given that polling data are time\-specific, providing the date of reference lets the audience know when these data were valid.
2. Registered U.S. voters: This tells the audience who we surveyed, letting them know the population of interest.
3. XX%: This part provides the estimated percentage of people voting for a specific candidate for a specific office.
4. \[YEAR] general election: Adding this gives more context about the election type and year. The estimate would take on a different meaning if we changed it to a primary election instead of a general election.
We also included the word “estimated.” When presenting aggregate survey results, we have errors around each estimate. We want to convey this uncertainty rather than talk in absolutes. Words like “estimated,” “on average,” or “around” can help communicate this uncertainty to the audience. Instead of saying “XX%,” we can also say “XX% (\+/\- Y%)” to show the margin of error. Confidence intervals can also be incorporated into the text to assist readers.
Second, providing context and discussing the meaning behind a point estimate can help the audience glean some insight into why the data are important. For example, when comparing two values, it can be helpful to highlight if there are statistically significant differences and explain the impact and relevance of this information. This is where we should do our best to be mindful of biases and present the facts logically.
Keep in mind how we discuss these findings can greatly influence how the audience interprets them. If we include speculation, phrases like “the authors speculate” or “these findings may indicate,” it relays the uncertainty around the notion while still lending a plausible solution. Additionally, we can present alternative viewpoints or competing discussion points to explain the uncertainty in the results.
### 8\.2\.1 Methodology
If we are using existing data, methodologically sound surveys provide documentation about how the survey was fielded, the questionnaires, and other necessary information for analyses. For example, the survey’s methodology reports should include the population of interest, sampling procedures, response rates, questionnaire documentation, weighting, and a general overview of disclosure statements. Many American organizations follow the American Association for Public Opinion Research’s (AAPOR) [Transparency Initiative](https://aapor.org/standards-and-ethics/transparency-initiative). The AAPOR Transparency Initiative requires organizations to include specific details in their methodology, making it clear how we can and should analyze and interpret the results. Being transparent about these methods is vital for the scientific rigor of the field.
The details provided in Chapter [2](c02-overview-surveys.html#c02-overview-surveys) about the survey process should be shared with the audience when presenting the results. When using publicly available data, like the examples in this book, we can often link to the methodology report in our final output. We should also provide high\-level information for the audience to quickly grasp the context around the findings. For example, we can mention when and where the study was conducted, the population’s age range, or other contextual details. This information helps the audience understand how generalizable the results are.
Providing this material is especially important when no methodology report is available for the analyzed data. For example, if we conducted a new survey for a specific purpose, we should document and present all the pertinent information during the analysis and reporting process. Adhering to the AAPOR Transparency Initiative guidelines is a reliable method to guarantee that all essential information is communicated to the audience.
### 8\.2\.2 Analysis
Along with the survey methodology and weight calculations, we should also share our approach to preparing, cleaning, and analyzing the data. For example, in Chapter [6](c06-statistical-testing.html#c06-statistical-testing), we compared education distributions from the ANES survey to the American Community Survey (ACS). To make the comparison, we had to collapse the education categories provided in the ANES data to match the ACS. The process for this particular example may seem straightforward (like combining bachelor’s and graduate degrees into a single category), but there are multiple ways to deal with the data. Our choice is just one of many. We should document both the original ANES question and response options and the steps we took to match them with ACS data. This transparency helps clarify our analysis to our audience.
Missing data is another instance where we want to be unambiguous and upfront with our audience. In this book, numerous examples and exercises remove missing data, as this is often the easiest way to handle them. However, there are circumstances where missing data holds substantive importance, and excluding them could introduce bias (see Chapter [11](c11-missing-data.html#c11-missing-data)). Being transparent about our handling of missing data is important to maintaining the integrity of our analysis and ensuring a comprehensive understanding of the results.
### 8\.2\.3 Results
While tables and graphs are commonly used to communicate results, there are instances where text can be more effective in sharing information. Narrative details, such as context around point estimates or model coefficients, can go a long way in improving our communication. We have several strategies to effectively convey the significance of the data to the audience through text.
First, we can highlight important data elements in a sentence using plain language. For example, if we were looking at election polling data conducted before an election, we could say:
> As of \[DATE], an estimated XX% of registered U.S. voters say they will vote for \[CANDIDATE NAME] for president in the \[YEAR] general election.
This sentence provides key pieces of information in a straightforward way:
1. \[DATE]: Given that polling data are time\-specific, providing the date of reference lets the audience know when these data were valid.
2. Registered U.S. voters: This tells the audience who we surveyed, letting them know the population of interest.
3. XX%: This part provides the estimated percentage of people voting for a specific candidate for a specific office.
4. \[YEAR] general election: Adding this gives more context about the election type and year. The estimate would take on a different meaning if we changed it to a primary election instead of a general election.
We also included the word “estimated.” When presenting aggregate survey results, we have errors around each estimate. We want to convey this uncertainty rather than talk in absolutes. Words like “estimated,” “on average,” or “around” can help communicate this uncertainty to the audience. Instead of saying “XX%,” we can also say “XX% (\+/\- Y%)” to show the margin of error. Confidence intervals can also be incorporated into the text to assist readers.
Second, providing context and discussing the meaning behind a point estimate can help the audience glean some insight into why the data are important. For example, when comparing two values, it can be helpful to highlight if there are statistically significant differences and explain the impact and relevance of this information. This is where we should do our best to be mindful of biases and present the facts logically.
Keep in mind how we discuss these findings can greatly influence how the audience interprets them. If we include speculation, phrases like “the authors speculate” or “these findings may indicate,” it relays the uncertainty around the notion while still lending a plausible solution. Additionally, we can present alternative viewpoints or competing discussion points to explain the uncertainty in the results.
8\.3 Visualizing data
---------------------
Although discussing key findings in the text is important, presenting large amounts of data in tables or visualizations is often more digestible for the audience. Effectively combining text, tables, and graphs can be powerful in communicating results. This section provides examples of using the {gt}, {gtsummary}, and {ggplot2} packages to enhance the dissemination of results ([Iannone et al. 2024](#ref-R-gt); [Sjoberg et al. 2021](#ref-gtsummarysjo); [Wickham 2016](#ref-ggplot2wickham)).
### 8\.3\.1 Tables
Tables are a great way to provide a large amount of data when individual data points need to be examined. However, it is important to present tables in a reader\-friendly format. Numbers should align, rows and columns should be easy to follow, and the table size should not compromise readability. Using key visualization techniques, we can create tables that are informative and nice to look at. Many packages create easy\-to\-read tables (e.g., {kable} \+ {kableExtra}, {gt}, {gtsummary}, {DT}, {formattable}, {flextable}, {reactable}). We appreciate the flexibility, ability to use pipes (e.g., `%>%`), and numerous extensions of the {gt} package. While we focus on {gt} here, we encourage learning about others, as they may have additional helpful features. Please note, at this time, {gtsummary} needs additional features to be widely used for survey analysis, particularly due to its lack of ability to work with replicate designs. We provide one example using {gtsummary} and hope it evolves into a more comprehensive tool over time.
#### Transitioning {srvyr} output to a {gt} table
Let’s start by using some of the data we calculated earlier in this book. In Chapter [6](c06-statistical-testing.html#c06-statistical-testing), we looked at data on trust in government with the proportions calculated below:
```
trust_gov <- anes_des %>%
drop_na(TrustGovernment) %>%
group_by(TrustGovernment) %>%
summarize(trust_gov_p = survey_prop())
trust_gov
```
```
## # A tibble: 5 × 3
## TrustGovernment trust_gov_p trust_gov_p_se
## <fct> <dbl> <dbl>
## 1 Always 0.0155 0.00204
## 2 Most of the time 0.132 0.00553
## 3 About half the time 0.309 0.00829
## 4 Some of the time 0.434 0.00855
## 5 Never 0.110 0.00566
```
The default output generated by R may work for initial viewing inside our IDE or when creating basic output in an R Markdown or Quarto document. However, when presenting these results in other publications, such as the print version of this book or with other formal dissemination modes, modifying the display can improve our reader’s experience.
Looking at the output from `trust_gov`, a couple of improvements stand out: (1\) switching to percentages instead of proportions and (2\) removing the variable names as column headers. The {gt} package is a good tool for implementing better labeling and creating publishable tables. Let’s walk through some code as we make a few changes to improve the table’s usefulness.
First, we initiate the formatted table with the `gt()` function on the `trust_gov` tibble previously created. Next, we use the argument `rowname_col()` to designate the `TrustGovernment` column as the label for each row (called the table “stub”). We apply the `cols_label()` function to create informative column labels instead of variable names and then the `tab_spanner()` function to add a label across multiple columns. In this case, we label all columns except the stub with “Trust in Government, 2020\.” We then format the proportions into percentages with the `fmt_percent()` function and reduce the number of decimals shown to one with `decimals = 1`. Finally, the `tab_caption()` function adds a table title for the HTML version of the book. We can use the caption for cross\-referencing in R Markdown, Quarto, and bookdown, as well as adding it to the list of tables in the book. These changes are all seen in Table [8\.1](c08-communicating-results.html#tab:results-table-gt1-tab).
```
trust_gov_gt <- trust_gov %>%
gt(rowname_col = "TrustGovernment") %>%
cols_label(
trust_gov_p = "%",
trust_gov_p_se = "s.e. (%)"
) %>%
tab_spanner(
label = "Trust in Government, 2020",
columns = c(trust_gov_p, trust_gov_p_se)
) %>%
fmt_percent(decimals = 1)
```
```
trust_gov_gt %>%
tab_caption("Example of {gt} table with trust in government estimate")
```
TABLE 8\.1: Example of {gt} table with trust in government estimate
| | Trust in Government, 2020 | |
| --- | --- | --- |
| % | s.e. (%) |
| Always | 1\.6% | 0\.2% |
| Most of the time | 13\.2% | 0\.6% |
| About half the time | 30\.9% | 0\.8% |
| Some of the time | 43\.4% | 0\.9% |
| Never | 11\.0% | 0\.6% |
We can add a few more enhancements, such as a title (which is different from a caption[27](#fn27)), a data source note, and a footnote with the question information, using the functions `tab_header()`, `tab_source_note()`, and `tab_footnote()`. If having the percentage sign in both the header and the cells seems redundant, we can opt for `fmt_number()` instead of `fmt_percent()` and scale the number by 100 with `scale_by = 100`. The resulting table is displayed in Table [8\.2](c08-communicating-results.html#tab:results-table-gt2-tab).
```
trust_gov_gt2 <- trust_gov_gt %>%
tab_header("American voter's trust
in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
) %>%
fmt_number(
scale_by = 100,
decimals = 1
)
```
```
trust_gov_gt2
```
TABLE 8\.2: Example of {gt} table with trust in government estimates and additional context
| American voter's trust in the federal government, 2020 | | |
| --- | --- | --- |
| | Trust in Government, 2020 | |
| % | s.e. (%) |
| Always | 1\.6 | 0\.2 |
| Most of the time | 13\.2 | 0\.6 |
| About half the time | 30\.9 | 0\.8 |
| Some of the time | 43\.4 | 0\.9 |
| Never | 11\.0 | 0\.6 |
| *Source*: American National Election Studies, 2020 | | |
| --- | --- | --- |
| Question text: How often can you trust the federal government in Washington to do what is right? | | |
| --- | --- | --- |
#### Expanding tables using {gtsummary}
The {gtsummary} package simultaneously summarizes data and creates publication\-ready tables. Initially designed for clinical trial data, it has been extended to include survey analysis in certain capacities. At this time, it is only compatible with survey objects using Taylor’s Series Linearization and not replicate methods. While it offers a restricted set of summary statistics, the following are available for categorical variables:
* `{n}` frequency
* `{N}` denominator, or respondent population
* `{p}` proportion (stylized as a percentage by default)
* `{p.std.error}` standard error of the sample proportion
* `{deff}` design effect of the sample proportion
* `{n_unweighted}` unweighted frequency
* `{N_unweighted}` unweighted denominator
* `{p_unweighted}` unweighted formatted proportion (stylized as a percentage by default)
The following summary statistics are available for continuous variables:
* `{median}` median
* `{mean}` mean
* `{mean.std.error}` standard error of the sample mean
* `{deff}` design effect of the sample mean
* `{sd}` standard deviation
* `{var}` variance
* `{min}` minimum
* `{max}` maximum
* `{p#}` any integer percentile, where `#` is an integer from 0 to 100
* `{sum}` sum
In the following example, we build a table using {gtsummary}, similar to the table in the {gt} example. The main function we use is `tbl_svysummary()`. In this function, we include the variables we want to analyze in the `include` argument and define the statistics we want to display in the `statistic` argument. To specify the statistics, we apply the syntax from the {glue} package, where we enclose the variables we want to insert within curly brackets. We must specify the desired statistics using the names listed above. For example, to specify that we want the proportion followed by the standard error of the proportion in parentheses, we use `{p} ({p.std.error})`. Table [8\.3](c08-communicating-results.html#tab:results-gts-ex-1-tab) displays the resulting table.
```
anes_des_gtsum <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})")
)
```
```
anes_des_gtsum
```
TABLE 8\.3: Example of {gtsummary} table with trust in government estimates
| **Characteristic** | **N \= 231,034,125**1 |
| --- | --- |
| PRE: How often trust government in Washington to do what is right \[revised] | |
| Always | 1\.6 (0\.00\) |
| Most of the time | 13 (0\.01\) |
| About half the time | 31 (0\.01\) |
| Some of the time | 43 (0\.01\) |
| Never | 11 (0\.01\) |
| Unknown | 673,773 |
| 1 % (SE(%)) | |
| --- | --- |
The default table (shown in Table [8\.3](c08-communicating-results.html#tab:results-gts-ex-1-tab)) includes the weighted number of missing (or Unknown) records. The standard error is reported as a proportion, while the proportion is styled as a percentage. In the next step, we remove the Unknown category by setting the missing argument to “no” and format the standard error as a percentage using the `digits` argument. To improve the table for publication, we provide a more polished label for the “TrustGovernment” variable using the `label` argument. The resulting table is displayed in Table [8\.4](c08-communicating-results.html#tab:results-gts-ex-2-tab).
```
anes_des_gtsum2 <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020")
)
```
```
anes_des_gtsum2
```
TABLE 8\.4: Example of {gtsummary} table with trust in government estimates with labeling and digits options
| **Characteristic** | **N \= 231,034,125**1 |
| --- | --- |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| 1 % (SE(%)) | |
| --- | --- |
Table [8\.4](c08-communicating-results.html#tab:results-gts-ex-2-tab) is closer to our ideal output, but we still want to make a few changes. To exclude the term “Characteristic” and the estimated population size (N), we can modify the header using the `modify_header()` function to update the `label`. Further adjustments can be made based on personal preferences, organizational guidelines, or other style guides. If we prefer having the standard error in the header, similar to the {gt} table, instead of in the footnote (the {gtsummary} default), we can make these changes by specifying `stat_0` in the `modify_header()` function. Additionally, using `modify_footnote()` with `update = everything() ~ NA` removes the standard error from the footnote. After transforming the object into a {gt} table using `as_gt()`, we can add footnotes and a title using the same methods explained in the previous section. This updated table is displayed in Table [8\.5](c08-communicating-results.html#tab:results-gts-ex-3-tab).
```
anes_des_gtsum3 <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020")
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(
label = " ",
stat_0 = "% (s.e.)"
) %>%
as_gt() %>%
tab_header("American voter's trust
in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
)
```
```
anes_des_gtsum3
```
TABLE 8\.5: Example of {gtsummary} table with trust in government estimates with more labeling options and context
| American voter's trust in the federal government, 2020 | |
| --- | --- |
| | % (s.e.) |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| *Source*: American National Election Studies, 2020 | |
| --- | --- |
| Question text: How often can you trust the federal government in Washington to do what is right? | |
| --- | --- |
We can also include summaries of more than one variable in the table. These variables can be either categorical or continuous. In the following code and Table [8\.6](c08-communicating-results.html#tab:results-gts-ex-4-tab), we add the mean age by updating the `include`, `statistic`, and `digits` arguments.
```
anes_des_gtsum4 <- anes_des %>%
tbl_svysummary(
include = c(TrustGovernment, Age),
statistic = list(
all_categorical() ~ "{p} ({p.std.error})",
all_continuous() ~ "{mean} ({mean.std.error})"
),
missing = "no",
digits = list(TrustGovernment ~ style_percent,
Age ~ c(1, 2)),
label = list(TrustGovernment ~ "Trust in Government, 2020")
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(label = " ",
stat_0 = "% (s.e.)") %>%
as_gt() %>%
tab_header(
"American voter's trust in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
) %>%
tab_caption("Example of {gtsummary} table with trust in government
estimates and average age")
```
```
anes_des_gtsum4
```
TABLE 8\.6: Example of {gtsummary} table with trust in government estimates and average age
| American voter's trust in the federal government, 2020 | |
| --- | --- |
| | % (s.e.) |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| PRE: SUMMARY: Respondent age | 47\.3 (0\.36\) |
| *Source*: American National Election Studies, 2020 | |
| --- | --- |
| Question text: How often can you trust the federal government in Washington to do what is right? | |
| --- | --- |
With {gtsummary}, we can also calculate statistics by different groups. Let’s modify the previous example (displayed in Table [8\.6](c08-communicating-results.html#tab:results-gts-ex-4-tab)) to analyze data on whether a respondent voted for president in 2020\. We update the `by` argument and refine the header. The resulting table is displayed in Table [8\.7](c08-communicating-results.html#tab:results-gts-ex-5-tab).
```
anes_des_gtsum5 <- anes_des %>%
drop_na(VotedPres2020) %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020"),
by = VotedPres2020
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(
label = " ",
stat_1 = "Voted",
stat_2 = "Didn't vote"
) %>%
modify_spanning_header(all_stat_cols() ~ "% (s.e.)") %>%
as_gt() %>%
tab_header(
"American voter's trust
in the federal government by whether they voted
in the 2020 presidential election"
) %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
)
```
```
anes_des_gtsum5
```
TABLE 8\.7: Example of {gtsummary} table with trust in government estimates by voting status
| American voter's trust in the federal government by whether they voted in the 2020 presidential election | | |
| --- | --- | --- |
| | % (s.e.) | |
| Voted | Didn’t vote |
| Trust in Government, 2020 | | |
| Always | 1\.1 (0\.2\) | 0\.9 (0\.9\) |
| Most of the time | 13 (0\.6\) | 19 (5\.3\) |
| About half the time | 32 (0\.8\) | 30 (8\.6\) |
| Some of the time | 45 (0\.8\) | 45 (8\.2\) |
| Never | 9\.1 (0\.7\) | 5\.2 (2\.2\) |
| *Source*: American National Election Studies, 2020 | | |
| --- | --- | --- |
| Question text: How often can you trust the federal government in Washington to do what is right? | | |
| --- | --- | --- |
### 8\.3\.2 Charts and plots
Survey analysis can yield an abundance of printed summary statistics and models. Even with the most careful analysis, interpreting the results can be overwhelming. This is where charts and plots play a key role in our work. By transforming complex data into a visual representation, we can recognize patterns, relationships, and trends with greater ease.
R has numerous packages for creating compelling and insightful charts. In this section, we focus on {ggplot2}, a member of the {tidyverse} collection of packages. Known for its power and flexibility, {ggplot2} is an invaluable tool for creating a wide range of data visualizations ([Wickham 2016](#ref-ggplot2wickham)).
### 8\.3\.1 Tables
Tables are a great way to provide a large amount of data when individual data points need to be examined. However, it is important to present tables in a reader\-friendly format. Numbers should align, rows and columns should be easy to follow, and the table size should not compromise readability. Using key visualization techniques, we can create tables that are informative and nice to look at. Many packages create easy\-to\-read tables (e.g., {kable} \+ {kableExtra}, {gt}, {gtsummary}, {DT}, {formattable}, {flextable}, {reactable}). We appreciate the flexibility, ability to use pipes (e.g., `%>%`), and numerous extensions of the {gt} package. While we focus on {gt} here, we encourage learning about the others, as they may have additional helpful features. Please note that, at this time, {gtsummary} is not yet a fully general tool for survey analysis, most notably because it cannot work with replicate designs. We provide one example using {gtsummary} and hope it evolves into a more comprehensive tool over time.
#### Transitioning {srvyr} output to a {gt} table
Let’s start by using some of the data we calculated earlier in this book. In Chapter [6](c06-statistical-testing.html#c06-statistical-testing), we looked at data on trust in government with the proportions calculated below:
```
trust_gov <- anes_des %>%
drop_na(TrustGovernment) %>%
group_by(TrustGovernment) %>%
summarize(trust_gov_p = survey_prop())
trust_gov
```
```
## # A tibble: 5 × 3
## TrustGovernment trust_gov_p trust_gov_p_se
## <fct> <dbl> <dbl>
## 1 Always 0.0155 0.00204
## 2 Most of the time 0.132 0.00553
## 3 About half the time 0.309 0.00829
## 4 Some of the time 0.434 0.00855
## 5 Never 0.110 0.00566
```
The default output generated by R may work for initial viewing inside our IDE or when creating basic output in an R Markdown or Quarto document. However, when presenting these results in other publications, such as the print version of this book or with other formal dissemination modes, modifying the display can improve our reader’s experience.
Looking at the output from `trust_gov`, a couple of improvements stand out: (1\) switching to percentages instead of proportions and (2\) removing the variable names as column headers. The {gt} package is a good tool for implementing better labeling and creating publishable tables. Let’s walk through some code as we make a few changes to improve the table’s usefulness.
First, we initiate the formatted table with the `gt()` function on the `trust_gov` tibble previously created. Next, we use the `rowname_col` argument to designate the `TrustGovernment` column as the label for each row (called the table “stub”). We apply the `cols_label()` function to create informative column labels instead of variable names and then the `tab_spanner()` function to add a label across multiple columns. In this case, we label all columns except the stub with “Trust in Government, 2020\.” We then format the proportions into percentages with the `fmt_percent()` function and reduce the number of decimals shown to one with `decimals = 1`. Finally, the `tab_caption()` function adds a table caption for the HTML version of the book. We can use the caption for cross\-referencing in R Markdown, Quarto, and bookdown, as well as adding it to the list of tables in the book. These changes are all seen in Table [8\.1](c08-communicating-results.html#tab:results-table-gt1-tab).
```
trust_gov_gt <- trust_gov %>%
gt(rowname_col = "TrustGovernment") %>%
cols_label(
trust_gov_p = "%",
trust_gov_p_se = "s.e. (%)"
) %>%
tab_spanner(
label = "Trust in Government, 2020",
columns = c(trust_gov_p, trust_gov_p_se)
) %>%
fmt_percent(decimals = 1)
```
```
trust_gov_gt %>%
tab_caption("Example of {gt} table with trust in government estimate")
```
TABLE 8\.1: Example of {gt} table with trust in government estimate
| | Trust in Government, 2020 | |
| --- | --- | --- |
| | % | s.e. (%) |
| Always | 1\.6% | 0\.2% |
| Most of the time | 13\.2% | 0\.6% |
| About half the time | 30\.9% | 0\.8% |
| Some of the time | 43\.4% | 0\.9% |
| Never | 11\.0% | 0\.6% |
We can add a few more enhancements, such as a title (which is different from a caption[27](#fn27)), a data source note, and a footnote with the question information, using the functions `tab_header()`, `tab_source_note()`, and `tab_footnote()`. If having the percentage sign in both the header and the cells seems redundant, we can opt for `fmt_number()` instead of `fmt_percent()` and scale the number by 100 with `scale_by = 100`. The resulting table is displayed in Table [8\.2](c08-communicating-results.html#tab:results-table-gt2-tab).
```
trust_gov_gt2 <- trust_gov_gt %>%
tab_header("American voter's trust
in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
) %>%
fmt_number(
scale_by = 100,
decimals = 1
)
```
```
trust_gov_gt2
```
TABLE 8\.2: Example of {gt} table with trust in government estimates and additional context
| American voter's trust in the federal government, 2020 | | |
| --- | --- | --- |
| | Trust in Government, 2020 | |
| | % | s.e. (%) |
| Always | 1\.6 | 0\.2 |
| Most of the time | 13\.2 | 0\.6 |
| About half the time | 30\.9 | 0\.8 |
| Some of the time | 43\.4 | 0\.9 |
| Never | 11\.0 | 0\.6 |
| *Source*: American National Election Studies, 2020 | | |
| Question text: How often can you trust the federal government in Washington to do what is right? | | |
#### Expanding tables using {gtsummary}
The {gtsummary} package simultaneously summarizes data and creates publication\-ready tables. Initially designed for clinical trial data, it has been extended to include survey analysis in certain capacities. At this time, it is only compatible with survey objects using Taylor’s Series Linearization and not replicate methods. While it offers a restricted set of summary statistics, the following are available for categorical variables:
* `{n}` frequency
* `{N}` denominator, or respondent population
* `{p}` proportion (stylized as a percentage by default)
* `{p.std.error}` standard error of the sample proportion
* `{deff}` design effect of the sample proportion
* `{n_unweighted}` unweighted frequency
* `{N_unweighted}` unweighted denominator
* `{p_unweighted}` unweighted formatted proportion (stylized as a percentage by default)
The following summary statistics are available for continuous variables:
* `{median}` median
* `{mean}` mean
* `{mean.std.error}` standard error of the sample mean
* `{deff}` design effect of the sample mean
* `{sd}` standard deviation
* `{var}` variance
* `{min}` minimum
* `{max}` maximum
* `{p#}` any integer percentile, where `#` is an integer from 0 to 100
* `{sum}` sum
In the following example, we build a table using {gtsummary}, similar to the table in the {gt} example. The main function we use is `tbl_svysummary()`. In this function, we include the variables we want to analyze in the `include` argument and define the statistics we want to display in the `statistic` argument. To specify the statistics, we apply the syntax from the {glue} package, where we enclose the variables we want to insert within curly brackets. We must specify the desired statistics using the names listed above. For example, to specify that we want the proportion followed by the standard error of the proportion in parentheses, we use `{p} ({p.std.error})`. Table [8\.3](c08-communicating-results.html#tab:results-gts-ex-1-tab) displays the resulting table.
```
anes_des_gtsum <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})")
)
```
```
anes_des_gtsum
```
TABLE 8\.3: Example of {gtsummary} table with trust in government estimates
| **Characteristic** | **N \= 231,034,125**1 |
| --- | --- |
| PRE: How often trust government in Washington to do what is right \[revised] | |
| Always | 1\.6 (0\.00\) |
| Most of the time | 13 (0\.01\) |
| About half the time | 31 (0\.01\) |
| Some of the time | 43 (0\.01\) |
| Never | 11 (0\.01\) |
| Unknown | 673,773 |
| 1 % (SE(%)) | |
The default table (shown in Table [8\.3](c08-communicating-results.html#tab:results-gts-ex-1-tab)) includes the weighted number of missing (or Unknown) records. The standard error is reported as a proportion, while the proportion is styled as a percentage. In the next step, we remove the Unknown category by setting the missing argument to “no” and format the standard error as a percentage using the `digits` argument. To improve the table for publication, we provide a more polished label for the “TrustGovernment” variable using the `label` argument. The resulting table is displayed in Table [8\.4](c08-communicating-results.html#tab:results-gts-ex-2-tab).
```
anes_des_gtsum2 <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020")
)
```
```
anes_des_gtsum2
```
TABLE 8\.4: Example of {gtsummary} table with trust in government estimates with labeling and digits options
| **Characteristic** | **N \= 231,034,125**1 |
| --- | --- |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| 1 % (SE(%)) | |
Table [8\.4](c08-communicating-results.html#tab:results-gts-ex-2-tab) is closer to our ideal output, but we still want to make a few changes. To exclude the term “Characteristic” and the estimated population size (N), we can modify the header using the `modify_header()` function to update the `label`. Further adjustments can be made based on personal preferences, organizational guidelines, or other style guides. If we prefer having the standard error in the header, similar to the {gt} table, instead of in the footnote (the {gtsummary} default), we can make these changes by specifying `stat_0` in the `modify_header()` function. Additionally, using `modify_footnote()` with `update = everything() ~ NA` removes the standard error from the footnote. After transforming the object into a {gt} table using `as_gt()`, we can add footnotes and a title using the same methods explained in the previous section. This updated table is displayed in Table [8\.5](c08-communicating-results.html#tab:results-gts-ex-3-tab).
```
anes_des_gtsum3 <- anes_des %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020")
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(
label = " ",
stat_0 = "% (s.e.)"
) %>%
as_gt() %>%
tab_header("American voter's trust
in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
)
```
```
anes_des_gtsum3
```
TABLE 8\.5: Example of {gtsummary} table with trust in government estimates with more labeling options and context
| American voter's trust in the federal government, 2020 | |
| --- | --- |
| | % (s.e.) |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| *Source*: American National Election Studies, 2020 | |
| Question text: How often can you trust the federal government in Washington to do what is right? | |
We can also include summaries of more than one variable in the table. These variables can be either categorical or continuous. In the following code and Table [8\.6](c08-communicating-results.html#tab:results-gts-ex-4-tab), we add the mean age by updating the `include`, `statistic`, and `digits` arguments.
```
anes_des_gtsum4 <- anes_des %>%
tbl_svysummary(
include = c(TrustGovernment, Age),
statistic = list(
all_categorical() ~ "{p} ({p.std.error})",
all_continuous() ~ "{mean} ({mean.std.error})"
),
missing = "no",
digits = list(TrustGovernment ~ style_percent,
Age ~ c(1, 2)),
label = list(TrustGovernment ~ "Trust in Government, 2020")
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(label = " ",
stat_0 = "% (s.e.)") %>%
as_gt() %>%
tab_header(
"American voter's trust in the federal government, 2020") %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
) %>%
tab_caption("Example of {gtsummary} table with trust in government
estimates and average age")
```
```
anes_des_gtsum4
```
TABLE 8\.6: Example of {gtsummary} table with trust in government estimates and average age
| American voter's trust in the federal government, 2020 | |
| --- | --- |
| | % (s.e.) |
| Trust in Government, 2020 | |
| Always | 1\.6 (0\.2\) |
| Most of the time | 13 (0\.6\) |
| About half the time | 31 (0\.8\) |
| Some of the time | 43 (0\.9\) |
| Never | 11 (0\.6\) |
| PRE: SUMMARY: Respondent age | 47\.3 (0\.36\) |
| *Source*: American National Election Studies, 2020 | |
| Question text: How often can you trust the federal government in Washington to do what is right? | |
With {gtsummary}, we can also calculate statistics by different groups. Let’s modify the previous example (displayed in Table [8\.6](c08-communicating-results.html#tab:results-gts-ex-4-tab)) to analyze data on whether a respondent voted for president in 2020\. We update the `by` argument and refine the header. The resulting table is displayed in Table [8\.7](c08-communicating-results.html#tab:results-gts-ex-5-tab).
```
anes_des_gtsum5 <- anes_des %>%
drop_na(VotedPres2020) %>%
tbl_svysummary(
include = TrustGovernment,
statistic = list(all_categorical() ~ "{p} ({p.std.error})"),
missing = "no",
digits = list(TrustGovernment ~ style_percent),
label = list(TrustGovernment ~ "Trust in Government, 2020"),
by = VotedPres2020
) %>%
modify_footnote(update = everything() ~ NA) %>%
modify_header(
label = " ",
stat_1 = "Voted",
stat_2 = "Didn't vote"
) %>%
modify_spanning_header(all_stat_cols() ~ "% (s.e.)") %>%
as_gt() %>%
tab_header(
"American voter's trust
in the federal government by whether they voted
in the 2020 presidential election"
) %>%
tab_source_note(
md("*Source*: American National Election Studies, 2020")
) %>%
tab_footnote(
"Question text: How often can you trust the federal government
in Washington to do what is right?"
)
```
```
anes_des_gtsum5
```
TABLE 8\.7: Example of {gtsummary} table with trust in government estimates by voting status
| American voter's trust in the federal government by whether they voted in the 2020 presidential election | | |
| --- | --- | --- |
| | % (s.e.) | |
| | Voted | Didn’t vote |
| Trust in Government, 2020 | | |
| Always | 1\.1 (0\.2\) | 0\.9 (0\.9\) |
| Most of the time | 13 (0\.6\) | 19 (5\.3\) |
| About half the time | 32 (0\.8\) | 30 (8\.6\) |
| Some of the time | 45 (0\.8\) | 45 (8\.2\) |
| Never | 9\.1 (0\.7\) | 5\.2 (2\.2\) |
| *Source*: American National Election Studies, 2020 | | |
| Question text: How often can you trust the federal government in Washington to do what is right? | | |
### 8\.3\.2 Charts and plots
Survey analysis can yield an abundance of printed summary statistics and models. Even with the most careful analysis, interpreting the results can be overwhelming. This is where charts and plots play a key role in our work. By transforming complex data into a visual representation, we can recognize patterns, relationships, and trends with greater ease.
R has numerous packages for creating compelling and insightful charts. In this section, we focus on {ggplot2}, a member of the {tidyverse} collection of packages. Known for its power and flexibility, {ggplot2} is an invaluable tool for creating a wide range of data visualizations ([Wickham 2016](#ref-ggplot2wickham)).
The {ggplot2} package follows the “grammar of graphics,” a framework that incrementally adds layers of chart components. This approach allows us to customize visual elements such as scales, colors, labels, and annotations to enhance the clarity of our results. After creating the survey design object, we can modify it to include additional outcomes and calculate estimates for our desired data points. Below, we create a binary variable `TrustGovernmentUsually`, which is `TRUE` when `TrustGovernment` is “Always” or “Most of the time” and `FALSE` otherwise. Then, we calculate the percentage of people who usually trust the government based on their vote in the 2020 presidential election (`VotedPres2020_selection`). We remove the cases where people did not vote or did not indicate their choice.
```
anes_des_der <- anes_des %>%
mutate(TrustGovernmentUsually = case_when(
is.na(TrustGovernment) ~ NA,
TRUE ~ TrustGovernment %in% c("Always", "Most of the time")
)) %>%
drop_na(VotedPres2020_selection) %>%
group_by(VotedPres2020_selection) %>%
summarize(
pct_trust = survey_mean(
TrustGovernmentUsually,
na.rm = TRUE,
proportion = TRUE,
vartype = "ci"
),
.groups = "drop"
)
anes_des_der
```
```
## # A tibble: 3 × 4
## VotedPres2020_selection pct_trust pct_trust_low pct_trust_upp
## <fct> <dbl> <dbl> <dbl>
## 1 Biden 0.123 0.109 0.140
## 2 Trump 0.178 0.161 0.198
## 3 Other 0.0681 0.0290 0.152
```
Now, we can begin creating our chart with {ggplot2}. First, we set up our plot with `ggplot()`. Next, we define the data points to be displayed using aesthetics, or `aes`. Aesthetics represent the visual properties of the objects in the plot. In the following example, we create a bar chart of the percentage of people who usually trust the government by who they voted for in the 2020 election. To do this, we want to have who they voted for on the x\-axis (`VotedPres2020_selection`) and the percent they usually trust the government on the y\-axis (`pct_trust`). We specify these variables in `ggplot()` and then indicate we want a bar chart with `geom_bar()`. The resulting plot is displayed in Figure [8\.1](c08-communicating-results.html#fig:results-plot1).
```
p <- anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust
)) +
geom_bar(stat = "identity")
p
```
FIGURE 8\.1: Bar chart of trust in government, by chosen 2020 presidential candidate
This is a great starting point: it appears that a higher percentage of people state they usually trust the government among those who voted for Trump compared to those who voted for Biden or other candidates. Now, what if we want to introduce color to better differentiate the three groups? We can map `fill` inside `aes()`, indicating that we want a distinct color for each value of `VotedPres2020_selection`. As a result, each candidate group is displayed in a different color in Figure [8\.2](c08-communicating-results.html#fig:results-plot2).
```
pcolor <- anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity")
pcolor
```
FIGURE 8\.2: Bar chart of trust in government by chosen 2020 presidential candidate, with colors
Let’s say we wanted to follow proper statistical analysis practice and incorporate variability in our plot. We can add another geom, `geom_errorbar()`, to display the confidence intervals on top of our existing `geom_bar()` layer. We can add the layer using a plus sign (`+`). The resulting graph is displayed in Figure [8\.3](c08-communicating-results.html#fig:results-plot3).
```
pcol_error <- anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity") +
geom_errorbar(
aes(
ymin = pct_trust_low,
ymax = pct_trust_upp
),
width = .2
)
pcol_error
```
FIGURE 8\.3: Bar chart of trust in government by chosen 2020 presidential candidate, with colors and error bars
We can continue adding to our plot until we achieve our desired look. For example, since the color legend does not contribute meaningful information, we can eliminate it with `guides(fill = "none")`. We can also specify colors for `fill` using `scale_fill_manual()`. Inside this function, we provide a vector of values corresponding to the colors in our plot. These values are hexadecimal (hex) color codes, denoted by a leading pound sign `#` followed by six hexadecimal digits (0–9 and A–F). The hex code `#0b3954` used below is dark blue. There are many tools online that help pick hex codes, such as htmlcolorcodes.com. Additionally, Figure [8\.4](c08-communicating-results.html#fig:results-plot4) incorporates better labels for the x and y axes (`xlab()`, `ylab()`), a title (`labs(title=)`), and a footnote with the data source (`labs(caption=)`).
```
pfull <-
anes_des_der %>%
ggplot(aes(
x = VotedPres2020_selection,
y = pct_trust,
fill = VotedPres2020_selection
)) +
geom_bar(stat = "identity") +
geom_errorbar(
aes(
ymin = pct_trust_low,
ymax = pct_trust_upp
),
width = .2
) +
scale_fill_manual(values = c("#0b3954", "#bfd7ea", "#8d6b94")) +
xlab("Election choice (2020)") +
ylab("Usually trust the government") +
scale_y_continuous(labels = scales::percent) +
guides(fill = "none") +
labs(
title = "Percent of voters who usually trust the government
by chosen 2020 presidential candidate",
caption = "Source: American National Election Studies, 2020"
)
pfull
```
FIGURE 8\.4: Bar chart of trust in government by chosen 2020 presidential candidate with colors, labels, error bars, and title
What we have explored in this section are just the foundational aspects of {ggplot2}, and the capabilities of this package extend far beyond what we have covered. Advanced features such as annotation, faceting, and theming allow for more sophisticated and customized visualizations. The {ggplot2} book by Wickham ([2016](#ref-ggplot2wickham)) is a comprehensive guide to learning more about this powerful tool.
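To give a flavor of those extensions, the minimal sketch below builds on the `pfull` object created above, swapping in a built-in theme and adding a simple text annotation. The label text and its placement are illustrative choices rather than part of the original analysis; faceting with `facet_wrap()` would work similarly if the summarized data contained an additional grouping variable.
```
# A minimal sketch building on pfull: apply a built-in theme and add an
# annotation. The label text and its position are illustrative choices.
pfull_themed <- pfull +
  theme_minimal() +
  annotate(
    geom = "text",
    x = 2,     # second category on the x-axis (Trump)
    y = 0.22,  # just above the upper confidence limit
    label = "Highest estimated trust",
    size = 3
  )
pfull_themed
```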
Chapter 9 Reproducible research
===============================
9\.1 Introduction
-----------------
Reproducing results is an important aspect of any research. First, reproducibility serves as a form of quality assurance. If we pass an analysis project to another person, they should be able to run the entire project from start to finish and obtain the same results. They can critically assess the methodology and code while detecting potential errors. Another goal of reproducibility is enabling the verification of our analysis. When someone else is able to check our results, it ensures the integrity of the analyses by determining that the conclusions are not dependent on a particular person running the code or workflow on a particular day or in a particular environment.
Not only is reproducibility a key component in ethical and accurate research, but it is also a requirement for many scientific journals. For example, the *Journal of Survey Statistics and Methodology* (JSSAM) and *Public Opinion Quarterly* (POQ) require authors to make code, data, and methodology transparent and accessible to other researchers who wish to verify or build on existing work.
Reproducible research requires that the key components of analysis are available, discoverable, documented, and shared with others. The four main components that we should consider are:
* Code: source code used for data cleaning, analysis, modeling, and reporting
* Data: raw data used in the workflow, or if data are sensitive or proprietary, as much data as possible that would allow others to run our workflow or provide details on how to access the data (e.g., access to a restricted use file (RUF))
* Environment: environment of the project, including the R version, packages, operating system, and other dependencies used in the analysis
* Methodology: survey and analysis methodology, including rationale behind sample, questionnaire and analysis decisions, interpretations, and assumptions
In Chapter [8](c08-communicating-results.html#c08-communicating-results), we briefly mention how each of these is important to include in the methodology report and when communicating the findings of a study. However, to be transparent and effective analysts, we need to ensure we not only discuss these through text but also provide files and additional information when requested. Often, when starting a project, we may be eager to jump into the data and make decisions as we go without full documentation. This can be challenging if we need to go back and make changes or even understand what we did a few months ago. It benefits other analysts and potentially our future selves to document everything from the start. The good news is that many tools, practices, and project management techniques make survey analysis projects easy to reproduce. For best results, we should decide which techniques and tools to use before starting a project (or very early on).
This chapter covers some of our suggestions for tools and techniques we can use in projects. This list is not comprehensive but aims to provide a starting point for those looking to create a reproducible workflow.
9\.2 Project\-based workflows
-----------------------------
We recommend a project\-based workflow for analysis projects as described by Wickham, Çetinkaya\-Rundel, and Grolemund ([2023](#ref-wickham2023r4ds)). A project\-based workflow maintains a “source of truth” for our analyses. It helps with file system discipline by putting everything related to a project in a designated folder. Since all associated files are in a single location, they are easy to find and organize. When we reopen the project, we can recreate the environment in which we originally ran the code to reproduce our results.
The RStudio IDE has built\-in support for projects. When we create a project in RStudio, it creates an `.Rproj` file that stores settings specific to that project. Once we have created a project, we can create folders that help us organize our workflow. For example, a project directory could look like this:
```
| anes_analysis/
| anes_analysis.Rproj
| README.md
| codebooks
| codebook2020.pdf
| codebook2016.pdf
| rawdata
| anes2020_raw.csv
| anes2016_raw.csv
| scripts
| data-prep.R
| data
| anes2020_clean.csv
| anes2016_clean.csv
| report
| anes_report.Rmd
| anes_report.html
| anes_report.pdf
```
In a project\-based workflow, all paths are relative and, by default, relative to the folder the `.Rproj` file is located in. By using relative paths, others can open and run our files even if their directory configuration differs from ours (e.g., Mac and Windows users have different directory path structures). The {here} package enables easy file referencing, and we can start by using the `here::here()` function to build the path for loading or saving data ([Müller 2020](#ref-R-here)). Below, we ask R to read the CSV file `anes2020_clean.csv` in the project directory’s `data` folder:
```
anes <-
read_csv(here::here("data", "anes2020_clean.csv"))
```
The combination of projects and the {here} package keeps all associated files organized. This workflow makes it more likely that our analyses can be reproduced by us or our colleagues.
9\.3 Functions and packages
---------------------------
We may find that we are repeating ourselves in our script, and the chance of errors increases whenever we copy and paste our code. By writing a function, we define a consistent set of commands that reduces the likelihood of mistakes. Functions also organize our code, improve code readability, and allow others to execute the same commands. For example, in Chapter [13](c13-ncvs-vignette.html#c13-ncvs-vignette), we create a function to run sequences of `rename()`, `filter()`, `group_by()`, and `summarize()` statements across different variables. Creating functions helps us avoid overlooking necessary steps.
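As a minimal sketch of this idea (not the exact function from Chapter 13), the helper below wraps a grouped proportion calculation so the same steps can be reused for any categorical variable; it assumes a survey design object such as `anes_des` from earlier chapters and that {srvyr} and the {tidyverse} are loaded.
```
# A minimal sketch of a reusable helper, assuming a survey design object
# like anes_des exists and {srvyr}/{tidyverse} are loaded. It returns the
# weighted proportion for each level of any categorical variable.
calc_group_props <- function(design, group_var) {
  design %>%
    drop_na({{ group_var }}) %>%
    group_by({{ group_var }}) %>%
    summarize(prop = survey_prop())
}

# The same steps can now be repeated without copying and pasting:
calc_group_props(anes_des, TrustGovernment)
calc_group_props(anes_des, VotedPres2020)
```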
A package is made up of a collection of functions. If we find ourselves sharing functions with others to replicate the same series of commands in a separate project, creating a package can be a useful tool for sharing the code along with data and documentation.
9\.4 Version control with Git
-----------------------------
Often, a survey analysis project produces a lot of code. Keeping track of the latest version can become challenging, as files evolve throughout a project. If a team of analysts is working on the same script, someone may use an outdated version, resulting in incorrect results or redundant work.
Version control systems like Git can help alleviate these pains. Git is a system that tracks changes in files. We can use Git to follow how code evolves over time and to manage asynchronous work. With Git, it is easy to see any changes made in a script, revert changes, and resolve differences between code versions (called conflicts).
Services such as GitHub or GitLab provide hosting and sharing of files as well as version control with Git. For example, we can visit the [GitHub repository for this book](https://github.com/tidy-survey-r/tidy-survey-book) and see the files that build the book, when they were committed to the repository, and the history of modifications over time.
In addition to code scripts, platforms like GitHub can store data and documentation. They provide a way to maintain a history of data modifications through versioning and timestamps. By saving the data and documentation alongside the code, it becomes easier for others to refer to and access everything they need in one place.
Using version control in analysis projects makes collaboration and maintenance more manageable. To connect Git with R, we recommend referencing the book [Happy Git and GitHub for the useR](https://happygitwithr.com/) ([Bryan 2023](#ref-git-w-R)).
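Although Git itself is usually configured outside of R, the {usethis} package provides helpers for connecting a project to Git and GitHub from the R console. The sketch below assumes {usethis} is installed and that GitHub credentials have already been set up.
```
# A sketch of initializing version control from R with {usethis},
# assuming the package is installed and GitHub credentials are configured.
library(usethis)

use_git()    # initialize a Git repository for the current project
use_github() # create a GitHub repository and link it to the project
```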
9\.5 Package management with {renv}
-----------------------------------
Ensuring reproducibility involves not only using version control of code but also managing the versions of packages. If two people run the same code but use different package versions, the results might differ because of changes to those packages. For example, this book currently uses a version of the {srvyr} package from GitHub and not from CRAN. This is because the version of {srvyr} on CRAN has some bugs (errors) that result in incorrect calculations. The version on GitHub has corrected these errors, so we have asked readers to install the GitHub version to obtain the same results.
One way to handle different package versions is with the {renv} package. This package allows researchers to set the versions for each used package and manage package dependencies. Specifically, {renv} creates isolated, project\-specific environments that record the packages and their versions used in the code. When initiated by a new user, {renv} checks whether the installed packages are consistent with the recorded version for the project. If not, it installs the appropriate versions so that others can replicate the project’s environment to rerun the code and obtain consistent results ([Ushey and Wickham 2024](#ref-R-renv)).
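A typical {renv} workflow needs only a few calls; the sketch below shows the general pattern, assuming {renv} is installed.
```
# A sketch of a typical {renv} workflow, assuming {renv} is installed.
renv::init()     # create a project-specific library and renv.lock file
renv::snapshot() # record the package versions currently in use
renv::restore()  # reinstall the recorded versions on another machine
```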
9\.6 R environments with Docker
-------------------------------
Just as different versions of packages can introduce discrepancies or compatibility issues, the version of R can also prevent reproducibility. Tools such as Docker can help with this potential issue by creating isolated environments that define the version of R being used, along with other dependencies and configurations. The entire environment is bundled in a container. The container, defined by a Dockerfile, can be shared so that anybody, regardless of their local setup, can run the R code in the same environment.
9\.7 Workflow management with {targets}
---------------------------------------
With complex studies involving multiple code files and dependencies, it is important to ensure each step is executed in the intended sequence. We can do this manually, e.g., by numbering files to indicate the order or providing detailed documentation on the order. Alternatively, we can automate the process so the code flows sequentially. Making sure that the code runs in the correct order helps ensure that the research is reproducible. Anyone should be able to pick up the set of scripts and get the same results by following the workflow.
The {targets} package is an increasingly popular workflow manager that documents, automates, and executes complex data workflows with multiple steps and dependencies. With this package, we first define the order of execution for our code, and then it consistently executes the code in that order each time it is run. One beneficial feature of {targets} is that if code changes later in the workflow, only the affected code and its downstream targets (i.e., the subsequent code files) are re\-executed when we change a script. The {targets} package also provides interactive progress monitoring and reporting, allowing us to track the status and progress of our analysis pipeline ([Landau 2021](#ref-targetslandau)).
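To illustrate the general shape of a {targets} pipeline, here is a minimal sketch of a `_targets.R` file. The data file path and the summary step are illustrative assumptions, not the pipeline used for this book.
```
# A minimal sketch of a _targets.R file; the data file and summary step
# are illustrative assumptions rather than this book's actual workflow.
library(targets)
tar_option_set(packages = c("readr", "dplyr"))

list(
  tar_target(raw_file, "data/anes2020_clean.csv", format = "file"),
  tar_target(anes_raw, read_csv(raw_file)),
  tar_target(trust_summary, count(anes_raw, TrustGovernment))
)
```
Running `targets::tar_make()` then executes these steps in order and, after a change, re-runs only the affected targets.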
9\.8 Documentation with Quarto and R Markdown
---------------------------------------------
Tools like Quarto and R Markdown aid in reproducibility by creating documents that weave together code, text, and results. We can present analysis results alongside the report’s narrative, so there’s no need to copy and paste code output into the final documentation. By eliminating manual steps, we can reduce the chances of errors in the final output.
Quarto and R Markdown documents also allow users to re\-execute the underlying code when needed. Another analyst can see the steps we took, follow the scripts, and recreate the report. We can include details about our work in one place thanks to the combination of text and code, making our work transparent and easier to verify ([Allaire et al. 2024](#ref-R-quarto); [Xie, Dervieux, and Riederer 2020](#ref-rmarkdown2020man)).
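Both formats can also be rendered programmatically from R, which makes report generation itself part of a reproducible script. The paths below mirror the example project layout earlier in this chapter; the Quarto file name is an assumption for illustration.
```
# Rendering reports from a script; the paths follow the example project
# layout, and the .qmd file name is an illustrative assumption.
rmarkdown::render("report/anes_report.Rmd")      # R Markdown report
quarto::quarto_render("report/anes_report.qmd")  # Quarto report
```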
### 9\.8\.1 Parameterization
Another useful feature of Quarto and R Markdown is the ability to reduce repetitive code by parameterizing the files. Parameters can control various aspects of the analysis, such as dates, geography, or other analysis variables. We can define and modify these parameters to explore different scenarios or inputs. For example, suppose we start by creating a document that provides survey analysis results for North Carolina but then later decide we want to look at another state. In that case, we can define a `state` parameter and rerun the same analysis for a state like Washington without having to edit the code throughout the document.
Parameters can be defined in the header or code chunks of our Quarto or R Markdown documents and easily modified and documented. Because we no longer have to manually edit code throughout the script, we reduce the errors that may occur and offer a flexible way for others to replicate the analysis and explore variations.
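As a sketch, a parameterized report might declare a `state` parameter in its YAML header and then refer to it inside code chunks. The parameter name, its default value, and the `survey_data` object and `State` variable below are illustrative assumptions.
```
# In the YAML header of the .Rmd or .qmd file (illustrative):
#
# params:
#   state: "North Carolina"
#
# Inside a code chunk, the parameter is available as params$state; the
# survey_data object and State variable here are hypothetical.
state_data <- survey_data %>%
  filter(State == params$state)

# The same report can then be rendered for another state, for example:
# rmarkdown::render("report.Rmd", params = list(state = "Washington"))
```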
9\.9 Other tips for reproducibility
-----------------------------------
### 9\.9\.1 Random number seeds
Some tasks in survey analysis require randomness, such as imputation, model training, or creating random samples. By default, the random numbers generated by R change each time we rerun the code, making it difficult to reproduce the same results. By “setting the seed,” we can control the randomness and ensure that the random numbers remain consistent whenever we rerun the code. Others can use the same seed value to reproduce our random numbers and achieve the same results.
In R, we can use the `set.seed()` function to control the randomness in our code. We set a seed value by providing an integer in the function argument. The following code chunk sets a seed using `999`, then runs a random number function (`runif()`) to get five random numbers from a uniform distribution.
```
set.seed(999)
runif(5)
```
```
## [1] 0.38907 0.58306 0.09467 0.85263 0.78675
```
Since the seed is set to `999`, running `runif(5)` multiple times always produces the same output. The choice of the seed number is up to the analyst. For example, this could be the date (`20240102`) or time of day (`1056`) when the analysis was first conducted, a phone number (`8675309`), or the first few numbers that come to mind (`369`). As long as the seed is set for a given analysis, the actual number is up to the analyst to decide. It is important to note that `set.seed()` should be used before random number generation. Run it once per program, and the seed is applied to the entire script. We recommend setting the seed at the beginning of a script, where libraries are loaded.
### 9\.9\.2 Descriptive names and labels
Using descriptive variable names or labeling data can also assist with reproducible research. For example, in the ANES data, the variable names in the raw data all start with `V20` and are a string of numbers. To make things easier to reproduce in this book, we opted to change the variable names to be more descriptive of what they contained (e.g., `Age`). This can also be done with the data values themselves. One way to accomplish this is by creating factors for categorical data, which can ensure that we know that a value of `1` really means `Female`, for example. There are other ways of handling this, such as attaching labels to the data instead of recoding variables to be descriptive (see Chapter [11](c11-missing-data.html#c11-missing-data)). As with random number seeds, the exact method is up to the analyst, but providing this information can help ensure our research is reproducible.
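As a small sketch of this practice, the code below recodes a numeric raw variable into a labeled factor; the variable name and codes are illustrative, not taken from the ANES codebook.
```
# A small sketch of making coded values self-describing; the variable name
# and codes are illustrative, not taken from the ANES codebook.
library(dplyr)
library(tibble)

raw_dat <- tibble(V20_example = c(1, 2, 2, 1))

clean_dat <- raw_dat %>%
  mutate(Gender = factor(V20_example,
    levels = c(1, 2),
    labels = c("Male", "Female")
  ))
clean_dat
```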
9\.10 Additional resources
--------------------------
We can promote accuracy and verification of results by making our analysis reproducible. There are various tools and guides available to help achieve reproducibility in analysis work, a few of which were described in this chapter. Here are additional resources to explore:
* [R for Data Science chapter on project\-based workflows](https://r4ds.hadley.nz/workflow-scripts.html#projects)
* [Building reproducible analytical pipelines with R](https://raps-with-r.dev/)
* [Posit Solutions Site page on reproducible environments](https://solutions.posit.co/envs-pkgs/environments/)
9\.1 Introduction
-----------------
Reproducing results is an important aspect of any research. First, reproducibility serves as a form of quality assurance. If we pass an analysis project to another person, they should be able to run the entire project from start to finish and obtain the same results. They can critically assess the methodology and code while detecting potential errors. Another goal of reproducibility is enabling the verification of our analysis. When someone else is able to check our results, it ensures the integrity of the analyses by determining that the conclusions are not dependent on a particular person running the code or workflow on a particular day or in a particular environment.
Not only is reproducibility a key component in ethical and accurate research, but it is also a requirement for many scientific journals. For example, the *Journal of Survey Statistics and Methodology* (JSSAM) and *Public Opinion Quarterly* (POQ) require authors to make code, data, and methodology transparent and accessible to other researchers who wish to verify or build on existing work.
Reproducible research requires that the key components of analysis are available, discoverable, documented, and shared with others. The four main components that we should consider are:
* Code: source code used for data cleaning, analysis, modeling, and reporting
* Data: raw data used in the workflow, or if data are sensitive or proprietary, as much data as possible that would allow others to run our workflow or provide details on how to access the data (e.g., access to a restricted use file (RUF))
* Environment: environment of the project, including the R version, packages, operating system, and other dependencies used in the analysis
* Methodology: survey and analysis methodology, including rationale behind sample, questionnaire and analysis decisions, interpretations, and assumptions
In Chapter [8](c08-communicating-results.html#c08-communicating-results), we briefly mention how each of these is important to include in the methodology report and when communicating the findings of a study. However, to be transparent and effective analysts, we need to ensure we not only discuss these through text but also provide files and additional information when requested. Often, when starting a project, we may be eager to jump into the data and make decisions as we go without full documentation. This can be challenging if we need to go back and make changes or even recall what we did a few months ago. It benefits other analysts and potentially our future selves to document everything from the start. The good news is that many tools, practices, and project management techniques make survey analysis projects easy to reproduce. For best results, we should decide which techniques and tools to use before starting a project (or very early on).
This chapter covers some of our suggestions for tools and techniques we can use in projects. This list is not comprehensive but aims to provide a starting point for those looking to create a reproducible workflow.
9\.2 Project\-based workflows
-----------------------------
We recommend a project\-based workflow for analysis projects as described by Wickham, Çetinkaya\-Rundel, and Grolemund ([2023](#ref-wickham2023r4ds)). A project\-based workflow maintains a “source of truth” for our analyses. It helps with file system discipline by putting everything related to a project in a designated folder. Since all associated files are in a single location, they are easy to find and organize. When we reopen the project, we can recreate the environment in which we originally ran the code to reproduce our results.
The RStudio IDE has built\-in support for projects. When we create a project in RStudio, it creates an `.Rproj` file that stores settings specific to that project. Once we have created a project, we can create folders that help us organize our workflow. For example, a project directory could look like this:
```
| anes_analysis/
| anes_analysis.Rproj
| README.md
| codebooks
| codebook2020.pdf
| codebook2016.pdf
| rawdata
| anes2020_raw.csv
| anes2016_raw.csv
| scripts
| data-prep.R
| data
| anes2020_clean.csv
| anes2016_clean.csv
| report
| anes_report.Rmd
| anes_report.html
| anes_report.pdf
```
In a project\-based workflow, all paths are relative and, by default, relative to the folder the `.Rproj` file is located in. By using relative paths, others can open and run our files even if their directory configuration differs from ours (e.g., Mac and Windows users have different directory path structures). The {here} package enables easy file referencing, and we can start by using the `here::here()` function to build the path for loading or saving data ([Müller 2020](#ref-R-here)). Below, we ask R to read the CSV file `anes2020_clean.csv` in the project directory’s `data` folder:
```
anes <-
read_csv(here::here("data", "anes2020_clean.csv"))
```
The combination of projects and the {here} package keeps all associated files organized. This workflow makes it more likely that our analyses can be reproduced by us or our colleagues.
9\.3 Functions and packages
---------------------------
We may find that we are repeating ourselves in our script, and the chance of errors increases whenever we copy and paste our code. By writing a function, we create a consistent set of commands that reduces the likelihood of mistakes. Functions also organize our code, improve its readability, and allow others to execute the same commands. For example, in Chapter [13](c13-ncvs-vignette.html#c13-ncvs-vignette), we create a function to run sequences of `rename()`, `filter()`, `group_by()`, and `summarize()` statements across different variables. Creating functions helps us avoid overlooking necessary steps.
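To illustrate the idea, here is a minimal sketch of such a helper; the dataset and column names (`survey_data`, `Region`, `Age`, `Income`) are hypothetical, and the actual function in Chapter [13](c13-ncvs-vignette.html#c13-ncvs-vignette) differs:

```
library(dplyr)

# A reusable summary helper: the same filter/group_by/summarize sequence
# can be applied to any grouping and outcome variable
summarize_by_group <- function(dat, group_var, outcome_var) {
  dat %>%
    filter(!is.na({{ outcome_var }})) %>%
    group_by({{ group_var }}) %>%
    summarize(
      n = n(),
      mean = mean({{ outcome_var }}),
      .groups = "drop"
    )
}

# The same set of commands runs consistently for different variables:
# summarize_by_group(survey_data, Region, Age)
# summarize_by_group(survey_data, Region, Income)
```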
A package is made up of a collection of functions. If we find ourselves sharing functions with others to replicate the same series of commands in a separate project, creating a package can be a useful tool for sharing the code along with data and documentation.
9\.4 Version control with Git
-----------------------------
Often, a survey analysis project produces a lot of code. Keeping track of the latest version can become challenging, as files evolve throughout a project. If a team of analysts is working on the same script, someone may use an outdated version, resulting in incorrect results or redundant work.
Version control systems like Git can help alleviate these pains. Git is a system that tracks changes in files. We can use Git to follow how code evolves and manage asynchronous work. With Git, it is easy to see any changes made in a script, revert changes, and resolve differences between code versions (called conflicts).
Services such as GitHub or GitLab provide hosting and sharing of files as well as version control with Git. For example, we can visit the [GitHub repository for this book](https://github.com/tidy-survey-r/tidy-survey-book) and see the files that build the book, when they were committed to the repository, and the history of modifications over time.
In addition to code scripts, platforms like GitHub can store data and documentation. They provide a way to maintain a history of data modifications through versioning and timestamps. By saving the data and documentation alongside the code, it becomes easier for others to refer to and access everything they need in one place.
Using version control in analysis projects makes collaboration and maintenance more manageable. To connect Git with R, we recommend referencing the book [Happy Git and GitHub for the useR](https://happygitwithr.com/) ([Bryan 2023](#ref-git-w-R)).
9\.5 Package management with {renv}
-----------------------------------
Ensuring reproducibility involves not only using version control of code but also managing the versions of packages. If two people run the same code but use different package versions, the results might differ because of changes to those packages. For example, this book currently uses a version of the {srvyr} package from GitHub and not from CRAN. This is because the version of {srvyr} on CRAN has some bugs (errors) that result in incorrect calculations. The version on GitHub has corrected these errors, so we have asked readers to install the GitHub version to obtain the same results.
One way to handle different package versions is with the {renv} package. This package allows researchers to set the versions for each used package and manage package dependencies. Specifically, {renv} creates isolated, project\-specific environments that record the packages and their versions used in the code. When initiated by a new user, {renv} checks whether the installed packages are consistent with the recorded version for the project. If not, it installs the appropriate versions so that others can replicate the project’s environment to rerun the code and obtain consistent results ([Ushey and Wickham 2024](#ref-R-renv)).
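As a rough sketch of how this looks in practice (the exact workflow depends on the project), {renv} is typically used through three functions:

```
# Create a project-specific library and a lockfile (renv.lock)
renv::init()

# After installing or updating packages, record the exact versions in use
renv::snapshot()

# Another analyst (or a future us) installs the recorded versions
renv::restore()
```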
9\.6 R environments with Docker
-------------------------------
Just as different versions of packages can introduce discrepancies or compatibility issues, the version of R can also prevent reproducibility. Tools such as Docker can help with this potential issue by creating isolated environments that define the version of R being used, along with other dependencies and configurations. The entire environment is bundled in a container. The container, defined by a Dockerfile, can be shared so that anybody, regardless of their local setup, can run the R code in the same environment.
9\.7 Workflow management with {targets}
---------------------------------------
With complex studies involving multiple code files and dependencies, it is important to ensure each step is executed in the intended sequence. We can do this manually, e.g., by numbering files to indicate the order or providing detailed documentation on the order. Alternatively, we can automate the process so the code flows sequentially. Making sure that the code runs in the correct order helps ensure that the research is reproducible. Anyone should be able to pick up the set of scripts and get the same results by following the workflow.
The {targets} package is an increasingly popular workflow manager that documents, automates, and executes complex data workflows with multiple steps and dependencies. With this package, we first define the order of execution for our code, and then it consistently executes the code in that order each time it is run. One beneficial feature of {targets} is that if code changes later in the workflow, only the affected code and its downstream targets (i.e., the subsequent code files) are re\-executed when we change a script. The {targets} package also provides interactive progress monitoring and reporting, allowing us to track the status and progress of our analysis pipeline ([Landau 2021](#ref-targetslandau)).
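A minimal sketch of a `_targets.R` file is shown below; the file path mirrors the project layout above, while `clean_anes()` and `summarize_anes()` are hypothetical placeholders for project-specific functions:

```
# _targets.R
library(targets)
tar_option_set(packages = c("tidyverse", "srvyr"))

list(
  # Track the raw file so downstream targets rerun if it changes
  tar_target(raw_file, "rawdata/anes2020_raw.csv", format = "file"),
  tar_target(raw_data, readr::read_csv(raw_file)),
  tar_target(clean_data, clean_anes(raw_data)),
  tar_target(estimates, summarize_anes(clean_data))
)
```

Running `targets::tar_make()` then executes the pipeline in order, skipping any targets that are already up to date.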
9\.8 Documentation with Quarto and R Markdown
---------------------------------------------
Tools like Quarto and R Markdown aid in reproducibility by creating documents that weave together code, text, and results. We can present analysis results alongside the report’s narrative, so there’s no need to copy and paste code output into the final documentation. By eliminating manual steps, we can reduce the chances of errors in the final output.
Quarto and R Markdown documents also allow users to re\-execute the underlying code when needed. Another analyst can see the steps we took, follow the scripts, and recreate the report. We can include details about our work in one place thanks to the combination of text and code, making our work transparent and easier to verify ([Allaire et al. 2024](#ref-R-quarto); [Xie, Dervieux, and Riederer 2020](#ref-rmarkdown2020man)).
### 9\.8\.1 Parameterization
Another useful feature of Quarto and R Markdown is the ability to reduce repetitive code by parameterizing the files. Parameters can control various aspects of the analysis, such as dates, geography, or other analysis variables. We can define and modify these parameters to explore different scenarios or inputs. For example, suppose we start by creating a document that provides survey analysis results for North Carolina but then later decide we want to look at another state. In that case, we can define a `state` parameter and rerun the same analysis for a state like Washington without having to edit the code throughout the document.
Parameters can be defined in the header or code chunks of our Quarto or R Markdown documents and easily modified and documented. Because parameters remove the need to manually edit code throughout the script, they reduce the errors that may occur and offer a flexible way for others to replicate the analysis and explore variations.
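As a minimal sketch (the title, parameter, and variable names are hypothetical), a parameter can be declared in the YAML header of an R Markdown document:

```
---
title: "State Survey Results"
output: html_document
params:
  state: "North Carolina"
---
```

Code chunks can then reference `params$state` (e.g., `filter(State == params$state)`), and the same report can be rendered for a different state with `rmarkdown::render("anes_report.Rmd", params = list(state = "Washington"))`.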
9\.9 Other tips for reproducibility
-----------------------------------
### 9\.9\.1 Random number seeds
Some tasks in survey analysis require randomness, such as imputation, model training, or creating random samples. By default, the random numbers generated by R change each time we rerun the code, making it difficult to reproduce the same results. By “setting the seed,” we can control the randomness and ensure that the random numbers remain consistent whenever we rerun the code. Others can use the same seed value to reproduce our random numbers and achieve the same results.
In R, we can use the `set.seed()` function to control the randomness in our code. We set a seed value by providing an integer in the function argument. The following code chunk sets a seed using `999`, then runs a random number function (`runif()`) to get five random numbers from a uniform distribution.
```
set.seed(999)
runif(5)
```
```
## [1] 0.38907 0.58306 0.09467 0.85263 0.78675
```
Since the seed is set to `999`, running `runif(5)` multiple times always produces the same output. The choice of the seed number is up to the analyst. For example, this could be the date (`20240102`) or time of day (`1056`) when the analysis was first conducted, a phone number (`8675309`), or the first few numbers that come to mind (`369`). As long as the seed is set once for a given analysis, the actual value does not matter. It is important to note that `set.seed()` must be called before any random number generation; run once per program, it applies to the rest of the script. We recommend setting the seed at the beginning of a script, where the libraries are loaded.
### 9\.9\.2 Descriptive names and labels
Using descriptive variable names or labeling data can also assist with reproducible research. For example, in the ANES data, the variable names in the raw data all start with `V20` and are a string of numbers. To make things easier to reproduce in this book, we opted to change the variable names to be more descriptive of what they contained (e.g., `Age`). This can also be done with the data values themselves. One way to accomplish this is by creating factors for categorical data, which can ensure that we know that a value of `1` really means `Female`, for example. There are other ways of handling this, such as attaching labels to the data instead of recoding variables to be descriptive (see Chapter [11](c11-missing-data.html#c11-missing-data)). As with random number seeds, the exact method is up to the analyst, but providing this information can help ensure our research is reproducible.
9\.10 Additional resources
--------------------------
We can promote accuracy and verification of results by making our analysis reproducible. There are various tools and guides available to help achieve reproducibility in analysis work, a few of which were described in this chapter. Here are additional resources to explore:
* [R for Data Science chapter on project\-based workflows](https://r4ds.hadley.nz/workflow-scripts.html#projects)
* [Building reproducible analytical pipelines with R](https://raps-with-r.dev/)
* [Posit Solutions Site page on reproducible environments](https://solutions.posit.co/envs-pkgs/environments/)
Chapter 10 Sample designs and replicate weights
===============================================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
```
To help explain the different types of sample designs, this chapter uses the `api` and `scd` data that are included in the {survey} package ([Lumley 2010](#ref-lumley2010complex)):
```
data(api)
data(scd)
```
This chapter uses data from the Residential Energy Consumption Survey (RECS), both 2015 and 2020, so we load the RECS data from the {srvyrexploR} package using their object names `recs_2015` and `recs_2020`, respectively ([Zimmer, Powell, and Velásquez 2024](#ref-R-srvyrexploR)).
10\.1 Introduction
------------------
The primary reason for using packages like {survey} and {srvyr} is to incorporate the sampling design or replicate weights into point and uncertainty estimates ([Freedman Ellis and Schneider 2024](#ref-R-srvyr); [Lumley 2010](#ref-lumley2010complex)). By incorporating the sampling design or replicate weights, these estimates are calculated appropriately.
In this chapter, we introduce common sampling designs and common types of replicate weights, the mathematical methods for calculating estimates and standard errors for a given sampling design, and the R syntax to specify the sampling design or replicate weights. While we show the math behind the estimates, the functions in these packages handle the calculation. To understand the math and the derivations in depth, refer to Penn State ([2019](#ref-pennstate506)), Särndal, Swensson, and Wretman ([2003](#ref-sarndal2003model)), Wolter ([2007](#ref-wolter2007introduction)), or Fuller ([2011](#ref-fuller2011sampling)) (these are listed in order of increasing statistical rigor).
The general process for estimation in the {srvyr} package is to:
1. Create a `tbl_svy` object (a survey object) using: `as_survey_design()` or `as_survey_rep()`
2. Subset data (if needed) using `filter()` (subpopulations)
3. Specify domains of analysis using `group_by()`
4. Within `summarize()`, specify variables to calculate, including means, totals, proportions, quantiles, and more
This chapter includes details on the first step: creating the survey object. Once this survey object is created, it can be used in the other steps (detailed in Chapters [5](c05-descriptive-analysis.html#c05-descriptive-analysis) through [7](c07-modeling.html#c07-modeling)) to account for the complex survey design.
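To make these four steps concrete, here is a minimal sketch of the workflow; the dataset and variable names (`svydata`, `wtvar`, `stratavar`, `psuvar`, `Eligible`, `Region`, `Income`) are hypothetical placeholders:

```
svy_des <- svydata %>%
  as_survey_design(weights = wtvar,
                   strata = stratavar,
                   ids = psuvar)        # step 1: create the tbl_svy object

svy_des %>%
  filter(Eligible == 1) %>%                                # step 2: subpopulation
  group_by(Region) %>%                                     # step 3: domains
  summarize(mean_inc = survey_mean(Income, na.rm = TRUE))  # step 4: estimates
```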
10\.2 Common sampling designs
-----------------------------
A sampling design is the method used to draw a sample. Both logistical and statistical elements are considered when developing a sampling design. When specifying a sampling design in R, we specify the levels of sampling along with the weights. The weight for each record is constructed so that the particular record represents that many units in the population. For example, in a survey of 6th\-grade students in the United States, the weight associated with each responding student reflects how many 6th\-grade students across the country that record represents. Generally, the weights represent the inverse of the probability of selection, such that the sum of the weights corresponds to the total population size, although some studies may have the sum of the weights equal to the number of respondent records.
Some common terminology across the designs are:
* sample size, generally denoted as \\(n\\), is the number of units selected to be sampled
* population size, generally denoted as \\(N\\), is the number of units in the population of interest
* sampling frame, the list of units from which the sample is drawn (see Chapter [2](c02-overview-surveys.html#c02-overview-surveys) for more information)
### 10\.2\.1 Simple random sample without replacement
The simple random sample (SRS) without replacement is a sampling design in which a fixed sample size is selected from a sampling frame, and every possible subsample has an equal probability of selection. Without replacement refers to the fact that once a sampling unit has been selected, it is removed from the sample frame and cannot be selected again.
* Requirements: The sampling frame must include the entire population.
* Advantages: SRS requires no information about the units apart from contact information.
* Disadvantages: The sampling frame may not be available for the entire population.
* Example: Randomly select students in a university from a roster provided by the registrar’s office.
#### The math
The estimate for the population mean of variable \\(y\\) is:
\\\[\\bar{y}\=\\frac{1}{n}\\sum\_{i\=1}^n y\_i\\]
where \\(\\bar{y}\\) represents the sample mean, \\(n\\) is the total number of respondents (or observations), and \\(y\_i\\) is each individual value of \\(y\\).
The estimate of the standard error of the mean is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{s^2}{n}\\left( 1\-\\frac{n}{N} \\right)}\\] where
\\\[s^2\=\\frac{1}{n\-1}\\sum\_{i\=1}^n\\left(y\_i\-\\bar{y}\\right)^2\.\\]
and \\(N\\) is the population size. This standard error estimate might look very similar to equations in other statistical applications except for the part on the right side of the equation: \\(1\-\\frac{n}{N}\\). This is called the finite population correction (FPC) factor. If the size of the frame, \\(N\\), is very large in comparison to the sample, the FPC is negligible, so it is often ignored. A common guideline is if the sample is less than 10% of the population, the FPC is negligible.
To estimate proportions, we define \\(x\_i\\) as the indicator if the outcome is observed. That is, \\(x\_i\=1\\) if the outcome is observed, and \\(x\_i\=0\\) if the outcome is not observed for respondent \\(i\\). Then the estimated proportion from an SRS design is:
\\\[\\hat{p}\=\\frac{1}{n}\\sum\_{i\=1}^n x\_i \\]
and the estimated standard error of the proportion is:
\\\[se(\\hat{p})\=\\sqrt{\\frac{\\hat{p}(1\-\\hat{p})}{n\-1}\\left(1\-\\frac{n}{N}\\right)} \\]
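The {srvyr} functions handle these calculations for us, but a minimal base R sketch of the formulas for the mean, with made-up values for `y` and a frame size of 6,194 (the APIP frame size used in the example below), may help connect the notation to code:

```
y <- c(2, 4, 4, 5, 7)  # hypothetical outcome values from an SRS
n <- length(y)         # sample size
N <- 6194              # sampling frame size

y_bar <- mean(y)
s2 <- sum((y - y_bar)^2) / (n - 1)        # sample variance
se_y_bar <- sqrt((s2 / n) * (1 - n / N))  # standard error with the FPC

c(mean = y_bar, se = se_y_bar)
```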
#### The syntax
If a sample was drawn through SRS and had no nonresponse or other weighting adjustments, we specify this design in R as:
```
srs1_des <- dat %>%
as_survey_design(fpc = fpcvar)
```
where `dat` is a tibble or data.frame with the survey data, and `fpcvar` is a variable in the data indicating the sampling frame’s size (this variable has the same value for all cases in an SRS design). If the frame is very large, sometimes the frame size is not provided. In that case, the FPC is not needed, and we specify the design as:
```
srs2_des <- dat %>%
as_survey_design()
```
If some post\-survey adjustments were implemented and the weights are not all equal, we specify the design as:
```
srs3_des <- dat %>%
as_survey_design(weights = wtvar,
fpc = fpcvar)
```
where `wtvar` is a variable in the data indicating the weight for each case. Again, the FPC can be omitted if it is unnecessary because the frame is large compared to the sample size.
#### Example
The {survey} package in R provides some example datasets that we use throughout this chapter. One of the example datasets we use is from the Academic Performance Index Program (APIP). The APIP is administered by the California Department of Education, and the {survey} package includes a population file (sample frame) of all schools with at least 100 students and several different samples pulled from that data using different sampling methods. For this first example, we use the `apisrs` dataset, which contains an SRS of 200 schools. For printing purposes, we create a new dataset called `apisrs_slim`, which sorts the data by the school district and school ID and subsets the data to only a few columns. The SRS sample data are illustrated below:
```
apisrs_slim <-
apisrs %>%
as_tibble() %>%
arrange(dnum, snum) %>%
select(cds, dnum, snum, dname, sname, fpc, pw)
apisrs_slim
```
```
## # A tibble: 200 × 7
## cds dnum snum dname sname fpc pw
## <chr> <int> <dbl> <chr> <chr> <dbl> <dbl>
## 1 19642126061220 1 1121 ABC Unified Haske… 6194 31.0
## 2 19642126066716 1 1124 ABC Unified Stowe… 6194 31.0
## 3 36675876035174 5 3895 Adelanto Elementary Adela… 6194 31.0
## 4 33669776031512 19 3347 Alvord Unified Arlan… 6194 31.0
## 5 33669776031595 19 3352 Alvord Unified Wells… 6194 31.0
## 6 31667876031033 39 3271 Auburn Union Elementary Cain … 6194 31.0
## 7 19642876011407 42 1169 Baldwin Park Unified Deanz… 6194 31.0
## 8 19642876011464 42 1175 Baldwin Park Unified Heath… 6194 31.0
## 9 19642956011589 48 1187 Bassett Unified Erwin… 6194 31.0
## 10 41688586043392 49 4948 Bayshore Elementary Baysh… 6194 31.0
## # ℹ 190 more rows
```
Table [10\.1](c10-sample-designs-replicate-weights.html#tab:apidata) provides details on all the variables in this dataset.
TABLE 10\.1: Overview of Variables in APIP Data
| Variable Name | Description |
| --- | --- |
| `cds` | Unique identifier for each school |
| `dnum` | School district identifier within county |
| `snum` | School identifier within district |
| `dname` | District Name |
| `sname` | School Name |
| `fpc` | Finite population correction factor |
| `pw` | Weight |
To create the `tbl_svy` object for the SRS data, we specify the design as:
```
apisrs_des <- apisrs_slim %>%
as_survey_design(
weights = pw,
fpc = fpc
)
apisrs_des
```
```
## Independent Sampling design
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - fpc: fpc
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), fpc
## (dbl), pw (dbl)
```
In the printed design object, the design is described as an “Independent Sampling design,” which is another term for SRS. The ids are specified as `1`, which means there is no clustering (a topic described in Section [10\.2\.4](c10-sample-designs-replicate-weights.html#samp-cluster)), the FPC variable is indicated, and the weights are indicated. We can also look at the summary of the design object (`summary()`) and see the distribution of the probabilities (inverse of the weights) along with the population size and a list of the variables in the dataset.
```
summary(apisrs_des)
```
```
## Independent Sampling design
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0323 0.0323 0.0323 0.0323 0.0323 0.0323
## Population size (PSUs): 6194
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "fpc" "pw"
```
### 10\.2\.2 Simple random sample with replacement
Similar to the SRS design, the simple random sample with replacement (SRSWR) design randomly selects the sample from the entire sampling frame. However, while SRS removes sampled units before selecting again, the SRSWR instead replaces each sampled unit before drawing again, so units can be selected more than once.
* Requirements: The sampling frame must include the entire population.
* Advantages: SRSWR requires no information about the units apart from contact information.
* Disadvantages:
+ The sampling frame may not be available for the entire population.
+ Units can be selected more than once, resulting in a smaller realized sample size because receiving duplicate information from a single respondent does not provide additional information.
+ For small populations, SRSWR has larger standard errors than SRS designs.
* Example: A professor puts all students’ names on paper slips and selects them randomly to ask students questions, but the professor replaces the paper after calling on the student so they can be selected again at any time.
In general for surveys, using an SRS design (without replacement) is preferred as we do not want respondents to answer a survey more than once.
#### The math
The estimate for the population mean of variable \\(y\\) is:
\\\[\\bar{y}\=\\frac{1}{n}\\sum\_{i\=1}^n y\_i\\]
and the estimate of the standard error of mean is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{s^2}{n}}\\] where
\\\[s^2\=\\frac{1}{n\-1}\\sum\_{i\=1}^n\\left(y\_i\-\\bar{y}\\right)^2\.\\]
To calculate the estimated proportion, we define \\(x\_i\\) as the indicator that the outcome is observed (as we did with SRS):
\\\[\\hat{p}\=\\frac{1}{n}\\sum\_{i\=1}^n x\_i \\]
and the estimated standard error of the proportion is:
\\\[se(\\hat{p})\=\\sqrt{\\frac{\\hat{p}(1\-\\hat{p})}{n}} \\]
#### The syntax
If we had a sample that was drawn through SRSWR and had no nonresponse or other weighting adjustments, in R, we specify this design as:
```
srswr1_des <- dat %>%
as_survey_design()
```
where `dat` is a tibble or data.frame containing our survey data. This syntax is the same as an SRS design, except an FPC is not included. This is because when sampling with replacement, the pool to select from is effectively unlimited, so a correction is not needed. Therefore, with large populations where the FPC is negligible, the underlying formulas for SRS and SRSWR designs are the same.
If some post\-survey adjustments were implemented and the weights are not all equal, we specify the design as:
```
srswr2_des <- dat %>%
as_survey_design(weights = wtvar)
```
where `wtvar` is the variable for the weight of the data.
#### Example
The {survey} package does not include an example of SRSWR. To illustrate this design, we need to create an example. We use the APIP population data provided by the {survey} package (`apipop`) and select a sample of 200 cases using the `slice_sample()` function from the tidyverse. One of the arguments in the `slice_sample()` function is `replace`. If `replace=TRUE`, then we are conducting an SRSWR. We then calculate selection weights as the inverse of the probability of selection and call this new dataset `apisrswr`.
```
set.seed(409963)
apisrswr <- apipop %>%
as_tibble() %>%
slice_sample(n = 200, replace = TRUE) %>%
select(cds, dnum, snum, dname, sname) %>%
mutate(weight = nrow(apipop) / 200)
head(apisrswr)
```
```
## # A tibble: 6 × 6
## cds dnum snum dname sname weight
## <chr> <int> <dbl> <chr> <chr> <dbl>
## 1 43696416060065 533 5348 Palo Alto Unified Jordan (Da… 31.0
## 2 07618046005060 650 509 San Ramon Valley Unified Alamo Elem… 31.0
## 3 19648086085674 457 2134 Montebello Unified La Merced … 31.0
## 4 07617056003719 346 377 Knightsen Elementary Knightsen … 31.0
## 5 19650606023022 744 2351 Torrance Unified Carr (Evel… 31.0
## 6 01611196090120 6 13 Alameda City Unified Paden (Wil… 31.0
```
Because this is an SRS design with replacement, there may be duplicates in the data. It is important to keep the duplicates in the data for proper estimation. For reference, we can view the duplicates in the example data we just created.
```
apisrswr %>%
group_by(cds) %>%
filter(n() > 1) %>%
arrange(cds)
```
```
## # A tibble: 4 × 6
## # Groups: cds [2]
## cds dnum snum dname sname weight
## <chr> <int> <dbl> <chr> <chr> <dbl>
## 1 15633216008841 41 869 Bakersfield City Elem Chipman Junio… 31.0
## 2 15633216008841 41 869 Bakersfield City Elem Chipman Junio… 31.0
## 3 39686766042782 716 4880 Stockton City Unified Tyler Skills … 31.0
## 4 39686766042782 716 4880 Stockton City Unified Tyler Skills … 31.0
```
We created a weight variable in this example data, which is the inverse of the probability of selection. We specify the sampling design for `apisrswr` as:
```
apisrswr_des <- apisrswr %>%
as_survey_design(weights = weight)
apisrswr_des
```
```
## Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - weights: weight
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), weight
## (dbl)
```
```
summary(apisrswr_des)
```
```
## Independent Sampling design (with replacement)
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0323 0.0323 0.0323 0.0323 0.0323 0.0323
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "weight"
```
In the output above, the design object and the object summary are shown. Both note that the sampling is done “with replacement” because no FPC was specified. The probabilities, which are derived from the weights, are summarized in the summary function output.
### 10\.2\.3 Stratified sampling
Stratified sampling occurs when a population is divided into mutually exclusive subpopulations (strata), and then samples are selected independently within each stratum.
* Requirements: The sampling frame must include the information to divide the population into strata for every unit.
* Advantages:
+ This design ensures sample representation in all subpopulations.
+ If the strata are correlated with survey outcomes, a stratified sample has smaller standard errors compared to a SRS sample of the same size.
+ This results in a more efficient design.
* Disadvantages: Auxiliary data may not exist to divide the sampling frame into strata, or the data may be outdated.
* Examples:
+ Example 1: A population of North Carolina residents could be stratified into urban and rural areas, and then an SRS of residents from both rural and urban areas is selected independently. This ensures there are residents from both areas in the sample.
+ Example 2: Law enforcement agencies could be stratified into the three primary general\-purpose categories in the U.S.: local police, sheriff’s departments, and state police. An SRS of agencies from each of the three types is then selected independently to ensure all three types of agencies are represented.
#### The math
Let \\(\\bar{y}\_h\\) be the sample mean for stratum \\(h\\), \\(N\_h\\) be the population size of stratum \\(h\\), \\(n\_h\\) be the sample size of stratum \\(h\\), and \\(H\\) be the total number of strata. Then, the estimate for the population mean under stratified SRS sampling is:
\\\[\\bar{y}\=\\frac{1}{N}\\sum\_{h\=1}^H N\_h\\bar{y}\_h\\]
and the estimate of the standard error of \\(\\bar{y}\\) is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{1}{N^2} \\sum\_{h\=1}^H N\_h^2 \\frac{s\_h^2}{n\_h}\\left(1\-\\frac{n\_h}{N\_h}\\right)} \\]
where
\\\[s\_h^2\=\\frac{1}{n\_h\-1}\\sum\_{i\=1}^{n\_h}\\left(y\_{i,h}\-\\bar{y}\_h\\right)^2\\]
For estimates of proportions, let \\(\\hat{p}\_h\\) be the estimated proportion in stratum \\(h\\). Then, the population proportion estimate is:
\\\[\\hat{p}\= \\frac{1}{N}\\sum\_{h\=1}^H N\_h \\hat{p}\_h\\]
The standard error of the proportion is:
\\\[se(\\hat{p}) \= \\frac{1}{N} \\sqrt{ \\sum\_{h\=1}^H N\_h^2 \\frac{\\hat{p}\_h(1\-\\hat{p}\_h)}{n\_h\-1} \\left(1\-\\frac{n\_h}{N\_h}\\right)}\\]
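As with SRS, the packages carry out these calculations, but a short base R sketch of the stratified mean and its standard error, using made-up stratum summaries, may help connect the formulas to code:

```
N_h    <- c(4000, 1500, 500)   # stratum population sizes (made up)
n_h    <- c(100, 50, 50)       # stratum sample sizes
ybar_h <- c(640, 660, 700)     # stratum sample means
s2_h   <- c(115, 120, 110)^2   # stratum sample variances

N <- sum(N_h)
y_bar <- sum(N_h * ybar_h) / N
se_y_bar <- sqrt(sum(N_h^2 * (s2_h / n_h) * (1 - n_h / N_h)) / N^2)

c(mean = y_bar, se = se_y_bar)
```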
#### The syntax
In addition to the `fpc` and `weights` arguments discussed in the types above, stratified designs require the addition of the `strata` argument. For example, to specify a stratified SRS design in {srvyr} when using the FPC, that is, where the population sizes of the strata are not too large and are known, we specify the design as:
```
stsrs1_des <- dat %>%
as_survey_design(fpc = fpcvar,
strata = stratavar)
```
where `fpcvar` is a variable on our data that indicates \\(N\_h\\) for each row, and `stratavar` is a variable indicating the stratum for each row. We can omit the FPC if it is not applicable. Additionally, we can indicate the weight variable if it is present, where `wtvar` is a variable on our data with a numeric weight:
```
stsrs2_des <- dat %>%
as_survey_design(weights = wtvar,
strata = stratavar)
```
#### Example
In the example APIP data, `apistrat` is a stratified random sample, stratified by school type (`stype`) with three levels: `E` for elementary school, `M` for middle school, and `H` for high school. As with the SRS example above, we sort and select specific variables for use in printing. The data are illustrated below, including a count of the number of cases per stratum:
```
apistrat_slim <-
apistrat %>%
as_tibble() %>%
arrange(dnum, snum) %>%
select(cds, dnum, snum, dname, sname, stype, fpc, pw)
apistrat_slim %>%
count(stype, fpc)
```
```
## # A tibble: 3 × 3
## stype fpc n
## <fct> <dbl> <int>
## 1 E 4421 100
## 2 H 755 50
## 3 M 1018 50
```
The FPC is the same for each case within each stratum. This output also shows that 100 elementary schools, 50 middle schools, and 50 high schools were sampled. It is common for the number of units sampled from each stratum to differ based on the goals of the project, or to mirror the relative size of each stratum in the population. We specify the design as:
```
apistrat_des <- apistrat_slim %>%
as_survey_design(
strata = stype,
weights = pw,
fpc = fpc
)
apistrat_des
```
```
## Stratified Independent Sampling design
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - fpc: fpc
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), stype
## (fct), fpc (dbl), pw (dbl)
```
```
summary(apistrat_des)
```
```
## Stratified Independent Sampling design
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0226 0.0226 0.0359 0.0401 0.0534 0.0662
## Stratum Sizes:
## E H M
## obs 100 50 50
## design.PSU 100 50 50
## actual.PSU 100 50 50
## Population stratum sizes (PSUs):
## E H M
## 4421 755 1018
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "stype" "fpc" "pw"
```
When printing the object, it is specified as a “Stratified Independent Sampling design,” also known as a stratified SRS, and the strata variable is included. Printing the summary, we see a distribution of probabilities, as we saw with SRS; but we also see the sample and population sizes by stratum.
### 10\.2\.4 Clustered sampling
Clustered sampling occurs when a population is divided into mutually exclusive subgroups called clusters or primary sampling units (PSUs). A random selection of PSUs is sampled, and then another level of sampling is done within these clusters. There can be multiple levels of this selection. Clustered sampling is often used when a list of the entire population is not available or data collection involves interviewers needing direct contact with respondents.
* Requirements: There must be a way to divide the population into clusters. Clusters are commonly structural, such as institutions (e.g., schools, prisons) or geography (e.g., states, counties).
* Advantages:
+ Clustered sampling is advantageous when data collection is done in person, so interviewers are sent to specific sampled areas rather than completely at random across a country.
+ With clustered sampling, a list of the entire population is not necessary. For example, if sampling students, we do not need a list of all students, but only a list of all schools. Once the schools are sampled, lists of students can be obtained within the sampled schools.
* Disadvantages: Compared to a simple random sample for the same sample size, clustered samples generally have larger standard errors of estimates.
* Examples:
+ Example 1: Consider a study needing a sample of 6th\-grade students in the United States. No list likely exists of all these students. However, it is more likely to obtain a list of schools that enroll 6th graders, so a study design could select a random sample of schools that enroll 6th graders. The selected schools can then provide a list of students to do a second stage of sampling where 6th\-grade students are randomly sampled within each of the sampled schools. This is a one\-stage sample design (the one representing the number of clusters) and is the type of design we discuss in the formulas below.
+ Example 2: Consider a study sending interviewers to households for a survey. This is a more complicated example that requires two levels of clustering (two\-stage sample design) to efficiently use interviewers in geographic clusters. First, in the U.S., counties could be selected as the PSU and then census block groups within counties could be selected as the secondary sampling unit (SSU). Households could then be randomly sampled within the block groups. This type of design is popular for in\-person surveys, as it reduces the travel necessary for interviewers.
#### The math
Consider a survey where \\(a\\) clusters are sampled from a population of \\(A\\) clusters via SRS. Within each sampled cluster, \\(i\\), there are \\(B\_i\\) units in the population, and \\(b\_i\\) units are sampled via SRS. Let \\(\\bar{y}\_{i}\\) be the sample mean of cluster \\(i\\). Then, a ratio estimator of the population mean is:
\\\[\\bar{y}\=\\frac{\\sum\_{i\=1}^a B\_i \\bar{y}\_{i}}{ \\sum\_{i\=1}^a B\_i}\\]
Note this is a consistent but biased estimator. Often the population size is not known, so this is a method to estimate a mean without knowing the population size. The estimated standard error of the mean is:
\\\[se(\\bar{y})\= \\frac{1}{\\hat{N}}\\sqrt{\\left(1\-\\frac{a}{A}\\right)\\frac{s\_a^2}{a} \+ \\frac{A}{a} \\sum\_{i\=1}^a \\left(1\-\\frac{b\_i}{B\_i}\\right) \\frac{s\_i^2}{b\_i} }\\]
where \\(\\hat{N}\\) is the estimated population size, \\(s\_a^2\\) is the between\-cluster variance, and \\(s\_i^2\\) is the within\-cluster variance.
The formula for the between\-cluster variance (\\(s\_a^2\\)) is:
\\\[s\_a^2\=\\frac{1}{a\-1}\\sum\_{i\=1}^a \\left( \\hat{y}\_i \- \\frac{\\sum\_{i\=1}^a \\hat{y}\_{i} }{a}\\right)^2\\]
where \\(\\hat{y}\_i \=B\_i\\bar{y\_i}\\).
The formula for the within\-cluster variance (\\(s\_i^2\\)) is:
\\\[s\_i^2\=\\frac{1}{a(b\_i\-1\)} \\sum\_{j\=1}^{b\_i} \\left(y\_{ij}\-\\bar{y}\_i\\right)^2\\]
where \\(y\_{ij}\\) is the outcome for sampled unit \\(j\\) within cluster \\(i\\).
#### The syntax
Clustered sampling designs require the addition of the `ids` argument, which specifies the cluster level variable(s). To specify a two\-stage clustered design without replacement, we specify the design as:
```
clus2_des <- dat %>%
as_survey_design(weights = wtvar,
ids = c(PSU, SSU),
fpc = c(A, B))
```
where `PSU` and `SSU` are the variables indicating the PSU and SSU identifiers, and `A` and `B` are the variables indicating the population sizes for each level (i.e., `A` is the number of clusters, and `B` is the number of units within each cluster). Note that `A` is the same for all records, and `B` is the same for all records within the same cluster.
If clusters were sampled with replacement or from a very large population, the FPC is unnecessary. Additionally, only the first stage of selection is necessary regardless of whether the units were selected with replacement at any stage. The subsequent stages of selection are ignored in computation as their contribution to the variance is overpowered by the first stage (see Särndal, Swensson, and Wretman ([2003](#ref-sarndal2003model)) or Wolter ([2007](#ref-wolter2007introduction)) for a more in\-depth discussion). Therefore, the two design objects specified below yield the same estimates in the end:
```
clus2ex1_des <- dat %>%
as_survey_design(weights = wtvar,
ids = c(PSU, SSU))
clus2ex2_des <- dat %>%
as_survey_design(weights = wtvar,
ids = PSU)
```
Note that there is one additional argument that is sometimes necessary: `nest = TRUE`. This option relabels cluster IDs to enforce nesting within strata. For example, there may be a cluster `1` within each stratum, but cluster `1` in stratum `1` is a different cluster than cluster `1` in stratum `2`. The `nest = TRUE` option indicates that repeated numbering does not mean the clusters are the same. If this option is not used and there are repeated cluster IDs across different strata, an error is generated.
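A sketch of this specification, following the placeholder names used above, looks like:

```
clus_nested_des <- dat %>%
  as_survey_design(weights = wtvar,
                   strata = stratavar,
                   ids = PSU,
                   nest = TRUE)  # cluster IDs restart within each stratum
```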
#### Example
The {survey} package includes a two\-stage cluster sample dataset, `apiclus2`, in which school districts were sampled, and then a random sample of five schools was selected within each district. For districts with fewer than five schools, all schools were sampled. School districts are identified by `dnum`, and schools are identified by `snum`. The variable `fpc1` indicates how many districts there are in California (the total number of PSUs or `A`), and `fpc2` indicates how many schools were in a given district with at least 100 students (the total number of SSUs or `B`). The data include a row for each school. In the data printed below, there are 757 school districts, as indicated by `fpc1`, and there are nine schools in District 731, one school in District 742, two schools in District 768, and so on as indicated by `fpc2`. For illustration purposes, the object `apiclus2_slim` has been created from `apiclus2`, which subsets the data to only the necessary columns and sorts the data.
```
apiclus2_slim <-
apiclus2 %>%
as_tibble() %>%
arrange(desc(dnum), snum) %>%
select(cds, dnum, snum, fpc1, fpc2, pw)
apiclus2_slim
```
```
## # A tibble: 126 × 6
## cds dnum snum fpc1 fpc2 pw
## <chr> <int> <dbl> <dbl> <int[1d]> <dbl>
## 1 47704826050942 795 5552 757 1 18.9
## 2 07618126005169 781 530 757 6 22.7
## 3 07618126005177 781 531 757 6 22.7
## 4 07618126005185 781 532 757 6 22.7
## 5 07618126005193 781 533 757 6 22.7
## 6 07618126005243 781 535 757 6 22.7
## 7 19650786023337 768 2371 757 2 18.9
## 8 19650786023345 768 2372 757 2 18.9
## 9 54722076054423 742 5898 757 1 18.9
## 10 50712906053086 731 5781 757 9 34.1
## # ℹ 116 more rows
```
To specify this design in R, we use the following:
```
apiclus2_des <- apiclus2_slim %>%
as_survey_design(
ids = c(dnum, snum),
fpc = c(fpc1, fpc2),
weights = pw
)
apiclus2_des
```
```
## 2 - level Cluster Sampling design
## With (40, 126) clusters.
## Called via srvyr
## Sampling variables:
## - ids: `dnum + snum`
## - fpc: `fpc1 + fpc2`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), fpc1 (dbl), fpc2 (int[1d]), pw
## (dbl)
```
```
summary(apiclus2_des)
```
```
## 2 - level Cluster Sampling design
## With (40, 126) clusters.
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00367 0.03774 0.05284 0.04239 0.05284 0.05284
## Population size (PSUs): 757
## Data variables:
## [1] "cds" "dnum" "snum" "fpc1" "fpc2" "pw"
```
The design objects are described as “2 \- level Cluster Sampling design,” and include the ids (cluster), FPC, and weight variables. The summary notes that the sample includes 40 first\-level clusters (PSUs), which are school districts, and 126 second\-level clusters (SSUs), which are schools. Additionally, the summary includes a numeric summary of the probabilities of selection and the population size (number of PSUs) as 757\.
10\.3 Combining sampling methods
--------------------------------
SRS, stratified, and clustered designs are the backbone of sampling designs, and the features are often combined in one design. Additionally, rather than using SRS for selection, other sampling mechanisms are commonly used, such as probability proportional to size (PPS), systematic sampling, or selection with unequal probabilities, which are briefly described here. In PPS sampling, a size measure is constructed for each unit (e.g., the population of the PSU or the number of occupied housing units), and units with larger size measures are more likely to be sampled. Systematic sampling is commonly used to ensure representation across a population. Units are sorted by a feature, and then every \\(k\\) units is selected from a random start point so the sample is spread across the population. In addition to PPS, other unequal probabilities of selection may be used. For example, in a study of establishments (e.g., businesses or public institutions) that conducts a survey every year, an establishment that recently participated (e.g., participated last year) may have a reduced chance of selection in a subsequent round to reduce the burden on the establishment. To learn more about sampling designs, refer to Valliant, Dever, and Kreuter ([2013](#ref-valliant2013practical)), Cox et al. ([2011](#ref-cox2011business)), Cochran ([1977](#ref-cochran1977sampling)), and Deming ([1991](#ref-deming1991sample)).
A common method of sampling is to stratify PSUs, select PSUs within the stratum using PPS selection, and then select units within the PSUs either with SRS or PPS. Reading survey documentation is an important first step in survey analysis to understand the design of the survey we are using and variables necessary to specify the design. Good documentation highlights the variables necessary to specify the design. This is often found in the user guide, methodology report, analysis guide, or technical documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more details).
### Example
For example, the [2017\-2019 National Survey of Family Growth](https://www.cdc.gov/nchs/data/nsfg/NSFG-2017-2019-Sample-Design-Documentation-508.pdf) had a stratified multi\-stage area probability sample:
1. In the first stage, PSUs are counties or collections of counties and are stratified by Census region/division, size (population), and MSA status. Within each stratum, PSUs were selected via PPS.
2. In the second stage, neighborhoods were selected within the sampled PSUs using PPS selection.
3. In the third stage, housing units were selected within the sampled neighborhoods.
4. In the fourth stage, a person was randomly chosen among eligible persons within the selected housing units using unequal probabilities based on the person’s age and sex.
The public use file does not include all these levels of selection and instead has pseudo\-strata and pseudo\-clusters, which are the variables used in R to specify the design. As specified on page 4 of the documentation, the stratum variable is `SEST`, the cluster variable is `SECU`, and the weight variable is `WGT2017_2019`. Thus, to specify this design in R, we use the following syntax:
```
nsfg_des <- nsfgdata %>%
as_survey_design(ids = SECU,
strata = SEST,
weights = WGT2017_2019)
```
10\.4 Replicate weights
-----------------------
Replicate weights are often included on analysis files instead of, or in addition to, the design variables (strata and PSUs). Replicate weights are used as another method to estimate variability. Often, researchers choose to use replicate weights to avoid publishing design variables (strata or clustering variables) as a measure to reduce the risk of disclosure. There are several types of replicate weights, including balanced repeated replication (BRR), Fay’s BRR, jackknife, and bootstrap methods. An overview of the process for using replicate weights is as follows:
1. Divide the sample into subsample replicates that mirror the design of the sample
2. Calculate weights for each replicate using the same procedures for the full\-sample weight (i.e., nonresponse and post\-stratification)
3. Calculate estimates for each replicate using the same method as the full\-sample estimate
4. Calculate the estimated variance, which is proportional to the variance of the replicate estimates
The different types of replicate weights largely differ in step 1 (how the sample is divided into subsamples) and step 4 (which multiplication factors, or scales, are used in the variance calculation). The general format for the standard error is:
\\\[ \\sqrt{\\alpha \\sum\_{r\=1}^R \\alpha\_r (\\hat{\\theta}\_r \- \\hat{\\theta})^2 }\\]
where \\(R\\) is the number of replicates, \\(\\alpha\\) is a constant that depends on the replication method, \\(\\alpha\_r\\) is a factor associated with each replicate, \\(\\hat{\\theta}\\) is the weighted estimate based on the full sample, and \\(\\hat{\\theta}\_r\\) is the weighted estimate of \\(\\theta\\) based on the \\(r^{\\text{th}}\\) replicate.
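As a small numeric sketch of this formula, using made-up replicate estimates and, for illustration, the factors used by the BRR method described below (\\(\\alpha\=\\frac{1}{R}\\) and \\(\\alpha\_r\=1\\)):

```
theta_hat  <- 52.3                       # full-sample estimate (made up)
theta_reps <- c(51.8, 53.0, 52.9, 51.6)  # estimates from R = 4 replicates
R <- length(theta_reps)

alpha   <- 1 / R      # constant for the replication method
alpha_r <- rep(1, R)  # factor for each replicate

se_theta <- sqrt(alpha * sum(alpha_r * (theta_reps - theta_hat)^2))
se_theta
```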
To create the design object for surveys with replicate weights, we use `as_survey_rep()` instead of `as_survey_design()`, which we use for the common sampling designs in the sections above.
### 10\.4\.1 Balanced Repeated Replication method
The balanced repeated replication (BRR) method requires a stratified sample design with two PSUs in each stratum. Each replicate is constructed by deleting one PSU per stratum using a Hadamard matrix. For the PSU that is included, the weight is generally multiplied by two but may have other adjustments, such as post\-stratification. A Hadamard matrix is a special square matrix with entries of \+1 or –1 with mutually orthogonal rows. Hadamard matrices must have one row, two rows, or a multiple of four rows. The size of the Hadamard matrix is determined by the first multiple of 4 greater than or equal to the number of strata. For example, if a survey had seven strata, the Hadamard matrix would be an \\(8\\times8\\) matrix. Additionally, a survey with eight strata would also have an \\(8\\times8\\) Hadamard matrix. The columns in the matrix specify the strata, and the rows specify the replicate. In each replicate (row), a \+1 means to use the first PSU, and a –1 means to use the second PSU in the estimate. For example, here is a \\(4\\times4\\) Hadamard matrix:
\\\[ \\begin{array}{rrrr} \+1 \&\+1 \&\+1 \&\+1\\\\ \+1\&\-1\&\+1\&\-1\\\\ \+1\&\+1\&\-1\&\-1\\\\ \+1 \&\-1\&\-1\&\+1 \\end{array} \\]
In the first replicate (row), all the values are \+1; so in each stratum, the first PSU would be used in the estimate. In the second replicate, the first PSU would be used in strata 1 and 3, while the second PSU would be used in strata 2 and 4\. In the third replicate, the first PSU would be used in strata 1 and 2, while the second PSU would be used in strata 3 and 4\. Finally, in the fourth replicate, the first PSU would be used in strata 1 and 4, while the second PSU would be used in strata 2 and 3\. For more information about Hadamard matrices, see Wolter ([2007](#ref-wolter2007introduction)). Note that supplied BRR weights from a data provider already incorporate this adjustment, and the {survey} package generates the Hadamard matrix, if necessary, for calculating BRR weights; so an analyst does not need to create or provide the matrix.
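Although we never need to build the matrix ourselves, the \\(4\\times4\\) matrix above can be reproduced with a short base R sketch using the Sylvester (Kronecker product) construction:

```
# Start from the 2x2 Hadamard matrix and expand via the Kronecker product
H2 <- matrix(c(1,  1,
               1, -1), nrow = 2, byrow = TRUE)
H4 <- kronecker(H2, H2)
H4
```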
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Using the generic notation above, \\(\\alpha\=\\frac{1}{R}\\) and \\(\\alpha\_r\=1\\) for each \\(r\\). The standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
Specifying replicate weights in R requires specifying the type of replicate weights, the main weight variable, the replicate weight variables, and other options. One of the key options is for the mean squared error (MSE). If `mse=TRUE`, variances are computed around the point estimate \\((\\hat{\\theta})\\); whereas if `mse=FALSE`, variances are computed around the mean of the replicates \\((\\bar{\\theta})\\) instead, which looks like this:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\bar{\\theta}\\right)^2}\\] where \\\[\\bar{\\theta}\=\\frac{1}{R}\\sum\_{r\=1}^R \\hat{\\theta}\_r\\]
The default option for `mse` is to use the global option of “survey.replicates.mse,” which is set to `FALSE` initially unless a user changes it. To determine if `mse` should be set to `TRUE` or `FALSE`, read the survey documentation. If there is no indication in the survey documentation for BRR, we recommend setting `mse` to `TRUE`, as this is the default in other software (e.g., SAS, SUDAAN).
#### The syntax
Replicate weights generally come in groups and are sequentially numbered, such as PWGTP1, PWGTP2, …, PWGTP80 for the person weights in the American Community Survey (ACS) ([U.S. Census Bureau 2021](#ref-acs-pums-2021)) or BRRWT1, BRRWT2, …, BRRWT96 in the 2015 Residential Energy Consumption Survey (RECS) ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)). This makes it easy to use some of the [tidy selection](https://dplyr.tidyverse.org/reference/dplyr_tidy_select.html) functions in R.
To specify a BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as BRR (`type = "BRR"`), and whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated as WT1, WT2, …, WT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = all_of(str_c("WT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "BRR",
mse = TRUE)
```
If a dataset had WT for the main weight and had 20 BRR weights indicated as REPWT1, REPWT2, …, REPWT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = all_of(str_c("REPWT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
mse = TRUE)
```
If the replicate weight variables are in the file consecutively, we can also use the following syntax:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = REPWT1:REPWT20,
type = "BRR",
mse = TRUE)
```
Typically, each replicate weight sums to a value similar to the main weight, as both the replicate weights and the main weight are supposed to provide population estimates. Rarely, an alternative method is used where the replicate weights have values of 0 or 2 in the case of BRR weights. This would be indicated in the documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more information on reading documentation). In this case, the replicate weights are not combined, and the option `combined_weights = FALSE` should be indicated, as the default value for this argument is `TRUE`. This specific syntax is shown below:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
combined_weights = FALSE,
mse = TRUE)
```
#### Example
The {survey} package includes a data example from section 12\.2 of Levy and Lemeshow ([2013](#ref-levy2013sampling)). In this fictional data, two out of five ambulance stations were sampled from each of three emergency service areas (ESAs); thus BRR weights are appropriate with two PSUs (stations) sampled in each stratum (ESA). In the code below, we create BRR weights as was done by Levy and Lemeshow ([2013](#ref-levy2013sampling)).
```
scdbrr <- scd %>%
as_tibble() %>%
mutate(
wt = 5 / 2,
rep1 = 2 * c(1, 0, 1, 0, 1, 0),
rep2 = 2 * c(1, 0, 0, 1, 0, 1),
rep3 = 2 * c(0, 1, 1, 0, 0, 1),
rep4 = 2 * c(0, 1, 0, 1, 1, 0)
)
scdbrr
```
```
## # A tibble: 6 × 9
## ESA ambulance arrests alive wt rep1 rep2 rep3 rep4
## <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 1 120 25 2.5 2 2 0 0
## 2 1 2 78 24 2.5 0 0 2 2
## 3 2 1 185 30 2.5 2 0 2 0
## 4 2 2 228 49 2.5 0 2 0 2
## 5 3 1 670 80 2.5 2 0 0 2
## 6 3 2 530 70 2.5 0 2 2 0
```
To specify the BRR weights, we use the following syntax:
```
scdbrr_des <- scdbrr %>%
as_survey_rep(
type = "BRR",
repweights = starts_with("rep"),
combined_weights = FALSE,
weight = wt
)
scdbrr_des
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
```
```
summary(scdbrr_des)
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
## Variables:
## [1] "ESA" "ambulance" "arrests" "alive" "wt"
## [6] "rep1" "rep2" "rep3" "rep4"
```
Note that `combined_weights` was specified as `FALSE` because these weights are simply specified as 0 and 2 and do not incorporate the overall weight. When printing the object, the type of replication is noted as Balanced Repeated Replicates, and the replicate weights and the weight variable are specified. Additionally, the summary lists the variables included in the data and design object.
### 10\.4\.2 Fay’s BRR method
Fay’s BRR method for replicate weights is similar to the BRR method in that it uses a Hadamard matrix to construct replicate weights. However, rather than deleting PSUs for each replicate, with Fay’s BRR, half of the PSUs have a replicate weight, which is the main weight multiplied by \\(\\rho\\), and the other half have the main weight multiplied by \\((2\-\\rho)\\), where \\(0 \\le \\rho \< 1\\). Note that when \\(\\rho\=0\\), this is equivalent to the standard BRR weights, and as \\(\\rho\\) becomes closer to 1, this method is more similar to jackknife discussed in Section [10\.4\.3](c10-sample-designs-replicate-weights.html#samp-jackknife). To obtain the value of \\(\\rho\\), it is necessary to read the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)).
#### The math
The standard error estimate for \\(\\hat{\\theta}\\) is slightly different than the BRR, due to the addition of the multiplier of \\(\\rho\\). Using the generic notation above, \\(\\alpha\=\\frac{1}{R \\left(1\-\\rho\\right)^2}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). The standard error is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R (1\-\\rho)^2} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
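Reusing the hypothetical full\-sample and replicate estimates from the BRR sketch above, here is a minimal sketch of Fay’s standard error with an assumed multiplier of `rho = 0.3`:

```
# Hypothetical values for illustration only
rho       <- 0.3
theta_hat <- 52.3
theta_r   <- c(51.8, 52.9, 52.1, 52.6)
R <- length(theta_r)

# Fay's BRR: the 1 / (R * (1 - rho)^2) scaling replaces the 1 / R used in BRR
se_fay <- sqrt(sum((theta_r - theta_hat)^2) / (R * (1 - rho)^2))
se_fay
```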
#### The syntax
The syntax is very similar for BRR and Fay’s BRR. To specify a Fay’s BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as Fay’s BRR (`type = "Fay"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and Fay’s multiplier (`rho`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated as WT1, WT2, …, WT20, and Fay’s multiplier is 0\.3, we use the following syntax:
```
fay_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "Fay",
mse = TRUE,
rho = 0.3)
```
#### Example
The 2015 RECS ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)) uses Fay’s BRR weights with the final weight as NWEIGHT and replicate weights as BRRWT1 \- BRRWT96, and the documentation specifies a Fay’s multiplier of 0\.5\. On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total energy cost, TOTSQFT\_EN is the total square footage of the residence, and REGIONC is the census region. We use the 2015 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter). To specify the design for the `recs_2015` data, we use the following syntax:
```
recs_2015_des <- recs_2015 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = BRRWT1:BRRWT96,
type = "Fay",
rho = 0.5,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_2015_des
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
```
```
summary(recs_2015_des)
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
In specifying the design, the `variables` option was also used to specify which variables might be used in analyses. This is optional but can make our object smaller and easier to work with. When printing the design object or looking at the summary, the replicate weight type is reiterated as `Fay's variance method (rho= 0.5) with 96 replicates and MSE variances`, and the variables are included. Unlike some of the other design objects we have seen, no weight or probability summary is included in this output.
### 10\.4\.3 Jackknife method
There are three jackknife estimators implemented in {srvyr}: jackknife 1 (JK1\), jackknife n (JKn), and jackknife 2 (JK2\). The JK1 method can be used for unstratified designs, and replicates are created by removing one PSU at a time so the number of replicates is the same as the number of PSUs. If there is no clustering, then the PSU is the ultimate sampling unit (e.g., students).
The JKn method is used for stratified designs and requires two or more PSUs per stratum. In this case, each replicate is created by deleting one PSU from a single stratum, so the number of replicates is the number of total PSUs across all strata. The JK2 method is a special case of JKn when there are exactly 2 PSUs sampled per stratum. For variance estimation, we also need to specify the scaling constants.
#### The math
Using the generic notation above, \\(\\alpha\=\\frac{R\-1}{R}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). For the JK1 method, the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{R\-1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
The JKn method is a bit more complex, but the coefficients are generally provided with restricted and public\-use files. For each replicate, one stratum has a PSU removed, and the weights are adjusted by \\(n\_h/(n\_h\-1\)\\) where \\(n\_h\\) is the number of PSUs in stratum \\(h\\). The coefficients in other strata are set to 1\. Denoting the coefficient that results from this process for replicate \\(r\\) as \\(\\alpha\_r\\), the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\sum\_{r\=1}^R \\alpha\_r \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
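Again using the hypothetical estimates from the sketches above, the JK1 and JKn standard errors can be computed by hand as follows; the JKn coefficients here are arbitrary illustrative values, not taken from any survey:

```
# Hypothetical values for illustration only
theta_hat <- 52.3
theta_r   <- c(51.8, 52.9, 52.1, 52.6)
R <- length(theta_r)

# JK1: a single scale of (R - 1) / R applies to every replicate
se_jk1 <- sqrt((R - 1) / R * sum((theta_r - theta_hat)^2))

# JKn: replicate-specific coefficients alpha_r (illustrative values)
alpha_r <- rep(0.75, R)
se_jkn  <- sqrt(sum(alpha_r * (theta_r - theta_hat)^2))

c(se_jk1, se_jkn)
```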
#### The syntax
To specify the jackknife method, we use the survey documentation to understand the type of jackknife (1, n, or 2\) and the multiplier. In the syntax, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as jackknife 1 (`type = "JK1"`), n (`type = "JKN"`), or 2 (`type = "JK2"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if the survey uses a jackknife 1 method with a multiplier of \\(\\alpha\_r\=(R\-1\)/R\=19/20\=0\.95\\), and the dataset has WT0 for the main weight and 20 replicate weights indicated as WT1, WT2, …, WT20, we use the following syntax:
```
jk1_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JK1",
mse = TRUE,
scale = 0.95
)
```
For a jackknife n method, we need to specify the multiplier for all replicates. In this case, we use the `rscales` argument to specify each one. The documentation provides details on what the multipliers (\\(\\alpha\_r\\)) are, and they may be the same for all replicates. For example, consider a case where \\(\\alpha\_r\=0\.1\\) for all replicates, and the dataset had WT0 for the main weight and had 20 replicate weights indicated as WT1, WT2, …, WT20\. We specify the type as `type = "JKN"`, and the multiplier as `rscales=rep(0.1,20)`:
```
jkn_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JKN",
mse = TRUE,
rscales = rep(0.1, 20)
)
```
#### Example
The 2020 RECS ([U.S. Energy Information Administration 2023c](#ref-recs-2020-micro)) uses jackknife weights with the final weight as NWEIGHT and replicate weights as NWEIGHT1 \- NWEIGHT60 with a scale of \\((R\-1\)/R\=59/60\\). On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total cost of energy, TOTSQFT\_EN is the total square footage of the residence, and REGIONC is the census region. We use the 2020 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter).
To specify this design, we use the following syntax:
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_des
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
```
```
summary(recs_des)
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
When printing the design object or looking at the summary, the replicate weight type is reiterated as `Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances`, and the variables are included. No weight or probability summary is included.
### 10\.4\.4 Bootstrap method
In bootstrap resampling, replicates are created by selecting random samples of the PSUs with replacement (SRSWR). If there are \\(A\\) PSUs in the sample, then each replicate is created by selecting a random sample of \\(A\\) PSUs with replacement. Each replicate is created independently, and the weights for each replicate are adjusted to reflect the population, generally using the same method as how the analysis weight was adjusted.
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Then the standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\alpha \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
where \\(\\alpha\\) is the scaling constant. Note that the scaling constant (\\(\\alpha\\)) is provided in the survey documentation, as there are many types of bootstrap methods that generate custom scaling constants.
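A minimal sketch, again with the hypothetical estimates used above and an assumed documentation\-supplied scaling constant of 0\.02:

```
# Hypothetical values for illustration only
alpha     <- 0.02
theta_hat <- 52.3
theta_r   <- c(51.8, 52.9, 52.1, 52.6)

se_boot <- sqrt(alpha * sum((theta_r - theta_hat)^2))
se_boot
```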
#### The syntax
To specify a bootstrap method, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as bootstrap (`type = "bootstrap"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if a dataset had WT0 for the main weight, 20 bootstrap weights indicated as WT1, WT2, …, WT20, and a multiplier of \\(\\alpha\=.02\\), we use the following syntax:
```
bs_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "bootstrap",
mse = TRUE,
scale = .02
)
```
#### Example
Returning to the APIP data, we create a dataset with bootstrap weights to use as an example. In this example, we construct a one\-cluster design with 50 replicate weights[28](#fn28).
```
apiclus1_slim <-
apiclus1 %>%
as_tibble() %>%
arrange(dnum) %>%
select(cds, dnum, fpc, pw)
set.seed(662152)
apibw <-
bootweights(
psu = apiclus1_slim$dnum,
strata = rep(1, nrow(apiclus1_slim)),
fpc = apiclus1_slim$fpc,
replicates = 50
)
bwmata <-
apibw$repweights$weights[apibw$repweights$index, ] * apiclus1_slim$pw
apiclus1_slim <- bwmata %>%
as.data.frame() %>%
set_names(str_c("pw", 1:50)) %>%
cbind(apiclus1_slim) %>%
as_tibble() %>%
select(cds, dnum, fpc, pw, everything())
apiclus1_slim
```
```
## # A tibble: 183 × 54
## cds dnum fpc pw pw1 pw2 pw3 pw4 pw5 pw6 pw7
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 2 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 3 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 4 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 5 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 6 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 7 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 8 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 9 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 10 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## # ℹ 173 more rows
## # ℹ 43 more variables: pw8 <dbl>, pw9 <dbl>, pw10 <dbl>, pw11 <dbl>,
## # pw12 <dbl>, pw13 <dbl>, pw14 <dbl>, pw15 <dbl>, pw16 <dbl>,
## # pw17 <dbl>, pw18 <dbl>, pw19 <dbl>, pw20 <dbl>, pw21 <dbl>,
## # pw22 <dbl>, pw23 <dbl>, pw24 <dbl>, pw25 <dbl>, pw26 <dbl>,
## # pw27 <dbl>, pw28 <dbl>, pw29 <dbl>, pw30 <dbl>, pw31 <dbl>,
## # pw32 <dbl>, pw33 <dbl>, pw34 <dbl>, pw35 <dbl>, pw36 <dbl>, …
```
The output of `apiclus1_slim` includes the same variables we have seen in other APIP examples (see Table [10\.1](c10-sample-designs-replicate-weights.html#tab:apidata)), but now it additionally includes bootstrap weights `pw1`, …, `pw50`. When creating the survey design object, we use the bootstrap weights as the replicate weights. Additionally, with replicate weights we need to include the scale (\\(\\alpha\\)). For this example, we created:
\\\[\\alpha\=\\frac{A}{(A\-1\)(R\-1\)}\=\\frac{15}{(15\-1\)(50\-1\)}\=0\.02186589\\]
where \\(A\\) is the average number of PSUs per stratum, and \\(R\\) is the number of replicates. There is only one stratum, and the number of clusters/PSUs is 15, so \\(A\=15\\). Using this information, we specify the design object as:
```
api1_bs_des <- apiclus1_slim %>%
as_survey_rep(
weights = pw,
repweights = pw1:pw50,
type = "bootstrap",
scale = 0.02186589,
mse = TRUE
)
api1_bs_des
```
```
## Call: Called via srvyr
## Survey bootstrap with 50 replicates and MSE variances.
## Sampling variables:
## - repweights: `pw1 + pw2 + pw3 + pw4 + pw5 + pw6 + pw7 + pw8 + pw9 +
## pw10 + pw11 + pw12 + pw13 + pw14 + pw15 + pw16 + pw17 + pw18 + pw19
## + pw20 + pw21 + pw22 + pw23 + pw24 + pw25 + pw26 + pw27 + pw28 +
## pw29 + pw30 + pw31 + pw32 + pw33 + pw34 + pw35 + pw36 + pw37 + pw38
## + pw39 + pw40 + pw41 + pw42 + pw43 + pw44 + pw45 + pw46 + pw47 +
## pw48 + pw49 + pw50`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), fpc (dbl), pw (dbl), pw1 (dbl), pw2 (dbl),
## pw3 (dbl), pw4 (dbl), pw5 (dbl), pw6 (dbl), pw7 (dbl), pw8 (dbl),
## pw9 (dbl), pw10 (dbl), pw11 (dbl), pw12 (dbl), pw13 (dbl), pw14
## (dbl), pw15 (dbl), pw16 (dbl), pw17 (dbl), pw18 (dbl), pw19 (dbl),
## pw20 (dbl), pw21 (dbl), pw22 (dbl), pw23 (dbl), pw24 (dbl), pw25
## (dbl), pw26 (dbl), pw27 (dbl), pw28 (dbl), pw29 (dbl), pw30 (dbl),
## pw31 (dbl), pw32 (dbl), pw33 (dbl), pw34 (dbl), pw35 (dbl), pw36
## (dbl), pw37 (dbl), pw38 (dbl), pw39 (dbl), pw40 (dbl), pw41 (dbl),
## pw42 (dbl), pw43 (dbl), pw44 (dbl), pw45 (dbl), pw46 (dbl), pw47
## (dbl), pw48 (dbl), pw49 (dbl), pw50 (dbl)
```
```
summary(api1_bs_des)
```
```
## Call: Called via srvyr
## Survey bootstrap with 50 replicates and MSE variances.
## Sampling variables:
## - repweights: `pw1 + pw2 + pw3 + pw4 + pw5 + pw6 + pw7 + pw8 + pw9 +
## pw10 + pw11 + pw12 + pw13 + pw14 + pw15 + pw16 + pw17 + pw18 + pw19
## + pw20 + pw21 + pw22 + pw23 + pw24 + pw25 + pw26 + pw27 + pw28 +
## pw29 + pw30 + pw31 + pw32 + pw33 + pw34 + pw35 + pw36 + pw37 + pw38
## + pw39 + pw40 + pw41 + pw42 + pw43 + pw44 + pw45 + pw46 + pw47 +
## pw48 + pw49 + pw50`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), fpc (dbl), pw (dbl), pw1 (dbl), pw2 (dbl),
## pw3 (dbl), pw4 (dbl), pw5 (dbl), pw6 (dbl), pw7 (dbl), pw8 (dbl),
## pw9 (dbl), pw10 (dbl), pw11 (dbl), pw12 (dbl), pw13 (dbl), pw14
## (dbl), pw15 (dbl), pw16 (dbl), pw17 (dbl), pw18 (dbl), pw19 (dbl),
## pw20 (dbl), pw21 (dbl), pw22 (dbl), pw23 (dbl), pw24 (dbl), pw25
## (dbl), pw26 (dbl), pw27 (dbl), pw28 (dbl), pw29 (dbl), pw30 (dbl),
## pw31 (dbl), pw32 (dbl), pw33 (dbl), pw34 (dbl), pw35 (dbl), pw36
## (dbl), pw37 (dbl), pw38 (dbl), pw39 (dbl), pw40 (dbl), pw41 (dbl),
## pw42 (dbl), pw43 (dbl), pw44 (dbl), pw45 (dbl), pw46 (dbl), pw47
## (dbl), pw48 (dbl), pw49 (dbl), pw50 (dbl)
## Variables:
## [1] "cds" "dnum" "fpc" "pw" "pw1" "pw2" "pw3" "pw4" "pw5"
## [10] "pw6" "pw7" "pw8" "pw9" "pw10" "pw11" "pw12" "pw13" "pw14"
## [19] "pw15" "pw16" "pw17" "pw18" "pw19" "pw20" "pw21" "pw22" "pw23"
## [28] "pw24" "pw25" "pw26" "pw27" "pw28" "pw29" "pw30" "pw31" "pw32"
## [37] "pw33" "pw34" "pw35" "pw36" "pw37" "pw38" "pw39" "pw40" "pw41"
## [46] "pw42" "pw43" "pw44" "pw45" "pw46" "pw47" "pw48" "pw49" "pw50"
```
As with other replicate design objects, when printing the object or looking at the summary, the replicate weights are provided along with the data variables.
10\.5 Exercises
---------------
For this chapter, the exercises entail reading public documentation to determine how to specify the survey design. While reading the documentation, be on the lookout for descriptions of the weights and of the survey design variables or replicate weights.
1. The National Health Interview Survey (NHIS) is an annual household survey conducted by the National Center for Health Statistics (NCHS). The NHIS includes a wide variety of health topics for adults, including health status and conditions, functioning and disability, health care access and health service utilization, health\-related behaviors, health promotion, mental health, barriers to receiving care, and community engagement. Like many national in\-person surveys, the sampling design is a stratified clustered design, with details included in the Survey Description ([National Center for Health Statistics 2023](#ref-nhis-svy-des)). The Survey Description provides information on setting up syntax in SUDAAN, Stata, SPSS, SAS, and R ({survey} package implementation). We have imported the data, and the object containing the data is `nhis_adult_data`. How would we specify the design using either `as_survey_design()` or `as_survey_rep()`?
2. The General Social Survey (GSS) is a survey that has been administered since 1972 on social, behavioral, and attitudinal topics. The 2016\-2020 GSS Panel codebook provides examples of setting up syntax in SAS and Stata but not R ([Davern et al. 2021](#ref-gss-codebook)). We have imported the data, and the object containing the data is `gss_data`. How would we specify the design in R using either `as_survey_design()` or `as_survey_rep()`?
### Prerequisites
10\.1 Introduction
------------------
The primary reason for using packages like {survey} and {srvyr} is to incorporate the sampling design or replicate weights into point and uncertainty estimates ([Freedman Ellis and Schneider 2024](#ref-R-srvyr); [Lumley 2010](#ref-lumley2010complex)). Accounting for this information ensures that the estimates are calculated appropriately.
In this chapter, we introduce common sampling designs and common types of replicate weights, the mathematical methods for calculating estimates and standard errors for a given sampling design, and the R syntax to specify the sampling design or replicate weights. While we show the math behind the estimates, the functions in these packages handle the calculation. To deeply understand the math and the derivation, refer to Penn State ([2019](#ref-pennstate506)), Särndal, Swensson, and Wretman ([2003](#ref-sarndal2003model)), Wolter ([2007](#ref-wolter2007introduction)), or Fuller ([2011](#ref-fuller2011sampling)) (these are listed in order of increasing statistical rigor).
The general process for estimation in the {srvyr} package is to:
1. Create a `tbl_svy` object (a survey object) using: `as_survey_design()` or `as_survey_rep()`
2. Subset data (if needed) using `filter()` (subpopulations)
3. Specify domains of analysis using `group_by()`
4. Within `summarize()`, specify variables to calculate, including means, totals, proportions, quantiles, and more
This chapter includes details on the first step: creating the survey object. Once this survey object is created, it can be used in the other steps (detailed in Chapters [5](c05-descriptive-analysis.html#c05-descriptive-analysis) through [7](c07-modeling.html#c07-modeling)) to account for the complex survey design.
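As a preview of how these steps fit together, here is a minimal sketch of the full workflow. The data object `dat` and the variables `wtvar`, `adult`, `region`, and `income` are hypothetical placeholders:

```
library(srvyr)

dat %>%
  as_survey_design(weights = wtvar) %>%          # 1. create the tbl_svy object
  filter(adult == 1) %>%                         # 2. subset to a subpopulation
  group_by(region) %>%                           # 3. specify domains of analysis
  summarize(avg_income = survey_mean(income))    # 4. calculate estimates
```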
10\.2 Common sampling designs
-----------------------------
A sampling design is the method used to draw a sample. Both logistical and statistical elements are considered when developing a sampling design. When specifying a sampling design in R, we specify the levels of sampling along with the weights. The weight for each record is constructed so that the particular record represents that many units in the population. For example, in a survey of 6th\-grade students in the United States, the weight associated with each responding student reflects how many 6th\-grade students across the country that record represents. Generally, the weights represent the inverse of the probability of selection, such that the sum of the weights corresponds to the total population size, although some studies may have the sum of the weights equal to the number of respondent records.
Some common terminology across the designs are:
* sample size, generally denoted as \\(n\\), is the number of units selected to be sampled
* population size, generally denoted as \\(N\\), is the number of units in the population of interest
* sampling frame, the list of units from which the sample is drawn (see Chapter [2](c02-overview-surveys.html#c02-overview-surveys) for more information)
### 10\.2\.1 Simple random sample without replacement
The simple random sample (SRS) without replacement is a sampling design in which a fixed sample size is selected from a sampling frame, and every possible subsample has an equal probability of selection. Without replacement refers to the fact that once a sampling unit has been selected, it is removed from the sample frame and cannot be selected again.
* Requirements: The sampling frame must include the entire population.
* Advantages: SRS requires no information about the units apart from contact information.
* Disadvantages: The sampling frame may not be available for the entire population.
* Example: Randomly select students in a university from a roster provided by the registrar’s office.
#### The math
The estimate for the population mean of variable \\(y\\) is:
\\\[\\bar{y}\=\\frac{1}{n}\\sum\_{i\=1}^n y\_i\\]
where \\(\\bar{y}\\) represents the sample mean, \\(n\\) is the total number of respondents (or observations), and \\(y\_i\\) is each individual value of \\(y\\).
The estimate of the standard error of the mean is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{s^2}{n}\\left( 1\-\\frac{n}{N} \\right)}\\] where
\\\[s^2\=\\frac{1}{n\-1}\\sum\_{i\=1}^n\\left(y\_i\-\\bar{y}\\right)^2\.\\]
and \\(N\\) is the population size. This standard error estimate might look very similar to equations in other statistical applications except for the part on the right side of the equation: \\(1\-\\frac{n}{N}\\). This is called the finite population correction (FPC) factor. If the size of the frame, \\(N\\), is very large in comparison to the sample, the FPC is negligible, so it is often ignored. A common guideline is if the sample is less than 10% of the population, the FPC is negligible.
To estimate proportions, we define \\(x\_i\\) as the indicator if the outcome is observed. That is, \\(x\_i\=1\\) if the outcome is observed, and \\(x\_i\=0\\) if the outcome is not observed for respondent \\(i\\). Then the estimated proportion from an SRS design is:
\\\[\\hat{p}\=\\frac{1}{n}\\sum\_{i\=1}^n x\_i \\]
and the estimated standard error of the proportion is:
\\\[se(\\hat{p})\=\\sqrt{\\frac{\\hat{p}(1\-\\hat{p})}{n\-1}\\left(1\-\\frac{n}{N}\\right)} \\]
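The following minimal sketch applies these formulas by hand to a tiny made\-up sample (five hypothetical observations drawn from a frame of 100 units):

```
# Hypothetical values for illustration only
y <- c(12, 15, 9, 11, 14)  # outcome values
x <- c(1, 0, 1, 1, 0)      # 0/1 indicator for the proportion
n <- length(y)
N <- 100                   # frame size

ybar    <- mean(y)
se_ybar <- sqrt(var(y) / n * (1 - n / N))  # var() uses the n - 1 denominator

phat    <- mean(x)
se_phat <- sqrt(phat * (1 - phat) / (n - 1) * (1 - n / N))

c(ybar, se_ybar, phat, se_phat)
```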
#### The syntax
If a sample was drawn through SRS and had no nonresponse or other weighting adjustments, we specify this design in R as:
```
srs1_des <- dat %>%
as_survey_design(fpc = fpcvar)
```
where `dat` is a tibble or data.frame with the survey data, and `fpcvar` is a variable in the data indicating the sampling frame’s size (this variable has the same value for all cases in an SRS design). If the frame is very large, sometimes the frame size is not provided. In that case, the FPC is not needed, and we specify the design as:
```
srs2_des <- dat %>%
as_survey_design()
```
If some post\-survey adjustments were implemented and the weights are not all equal, we specify the design as:
```
srs3_des <- dat %>%
as_survey_design(weights = wtvar,
fpc = fpcvar)
```
where `wtvar` is a variable in the data indicating the weight for each case. Again, the FPC can be omitted if it is unnecessary because the frame is large compared to the sample size.
#### Example
The {survey} package in R provides some example datasets that we use throughout this chapter. One of the example datasets we use is from the Academic Performance Index Program (APIP). The APIP is administered by the California Department of Education, and the {survey} package includes a population file (sample frame) of all schools with at least 100 students and several different samples pulled from that data using different sampling methods. For this first example, we use the `apisrs` dataset, which contains an SRS of 200 schools. For printing purposes, we create a new dataset called `apisrs_slim`, which sorts the data by the school district and school ID and subsets the data to only a few columns. The SRS sample data are illustrated below:
```
apisrs_slim <-
apisrs %>%
as_tibble() %>%
arrange(dnum, snum) %>%
select(cds, dnum, snum, dname, sname, fpc, pw)
apisrs_slim
```
```
## # A tibble: 200 × 7
## cds dnum snum dname sname fpc pw
## <chr> <int> <dbl> <chr> <chr> <dbl> <dbl>
## 1 19642126061220 1 1121 ABC Unified Haske… 6194 31.0
## 2 19642126066716 1 1124 ABC Unified Stowe… 6194 31.0
## 3 36675876035174 5 3895 Adelanto Elementary Adela… 6194 31.0
## 4 33669776031512 19 3347 Alvord Unified Arlan… 6194 31.0
## 5 33669776031595 19 3352 Alvord Unified Wells… 6194 31.0
## 6 31667876031033 39 3271 Auburn Union Elementary Cain … 6194 31.0
## 7 19642876011407 42 1169 Baldwin Park Unified Deanz… 6194 31.0
## 8 19642876011464 42 1175 Baldwin Park Unified Heath… 6194 31.0
## 9 19642956011589 48 1187 Bassett Unified Erwin… 6194 31.0
## 10 41688586043392 49 4948 Bayshore Elementary Baysh… 6194 31.0
## # ℹ 190 more rows
```
Table [10\.1](c10-sample-designs-replicate-weights.html#tab:apidata) provides details on all the variables in this dataset.
TABLE 10\.1: Overview of Variables in APIP Data
| Variable Name | Description |
| --- | --- |
| `cds` | Unique identifier for each school |
| `dnum` | School district identifier within county |
| `snum` | School identifier within district |
| `dname` | District Name |
| `sname` | School Name |
| `fpc` | Finite population correction factor |
| `pw` | Weight |
To create the `tbl_svy` object for the SRS data, we specify the design as:
```
apisrs_des <- apisrs_slim %>%
as_survey_design(
weights = pw,
fpc = fpc
)
apisrs_des
```
```
## Independent Sampling design
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - fpc: fpc
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), fpc
## (dbl), pw (dbl)
```
In the printed design object, the design is described as an “Independent Sampling design,” which is another term for SRS. The ids are specified as `1`, which means there is no clustering (a topic described in Section [10\.2\.4](c10-sample-designs-replicate-weights.html#samp-cluster)), the FPC variable is indicated, and the weights are indicated. We can also look at the summary of the design object (`summary()`) and see the distribution of the probabilities (inverse of the weights) along with the population size and a list of the variables in the dataset.
```
summary(apisrs_des)
```
```
## Independent Sampling design
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0323 0.0323 0.0323 0.0323 0.0323 0.0323
## Population size (PSUs): 6194
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "fpc" "pw"
```
### 10\.2\.2 Simple random sample with replacement
Similar to the SRS design, the simple random sample with replacement (SRSWR) design randomly selects the sample from the entire sampling frame. However, while SRS removes sampled units before selecting again, the SRSWR instead replaces each sampled unit before drawing again, so units can be selected more than once.
* Requirements: The sampling frame must include the entire population.
* Advantages: SRSWR requires no information about the units apart from contact information.
* Disadvantages:
+ The sampling frame may not be available for the entire population.
+ Units can be selected more than once, resulting in a smaller realized sample size because receiving duplicate information from a single respondent does not provide additional information.
+ For small populations, SRSWR has larger standard errors than SRS designs.
* Example: A professor puts all students’ names on paper slips and selects them randomly to ask students questions, but the professor replaces the paper after calling on the student so they can be selected again at any time.
In general for surveys, using an SRS design (without replacement) is preferred as we do not want respondents to answer a survey more than once.
#### The math
The estimate for the population mean of variable \\(y\\) is:
\\\[\\bar{y}\=\\frac{1}{n}\\sum\_{i\=1}^n y\_i\\]
and the estimate of the standard error of mean is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{s^2}{n}}\\] where
\\\[s^2\=\\frac{1}{n\-1}\\sum\_{i\=1}^n\\left(y\_i\-\\bar{y}\\right)^2\.\\]
To calculate the estimated proportion, we define \\(x\_i\\) as the indicator that the outcome is observed (as we did with SRS):
\\\[\\hat{p}\=\\frac{1}{n}\\sum\_{i\=1}^n x\_i \\]
and the estimated standard error of the proportion is:
\\\[se(\\hat{p})\=\\sqrt{\\frac{\\hat{p}(1\-\\hat{p})}{n}} \\]
#### The syntax
If we had a sample that was drawn through SRSWR and had no nonresponse or other weighting adjustments, in R, we specify this design as:
```
srswr1_des <- dat %>%
as_survey_design()
```
where `dat` is a tibble or data.frame containing our survey data. This syntax is the same as an SRS design, except an FPC is not included. This is because, when sampling with replacement, units are returned to the pool after each draw, so a finite population correction is not needed. Therefore, with large populations where the FPC is negligible, the underlying formulas for SRS and SRSWR designs are the same.
If some post\-survey adjustments were implemented and the weights are not all equal, we specify the design as:
```
srswr2_des <- dat %>%
as_survey_design(weights = wtvar)
```
where `wtvar` is the variable for the weight of the data.
#### Example
The {survey} package does not include an example of SRSWR. To illustrate this design, we need to create an example. We use the APIP population data provided by the {survey} package (`apipop`) and select a sample of 200 cases using the `slice_sample()` function from the tidyverse. One of the arguments in the `slice_sample()` function is `replace`. If `replace=TRUE`, then we are conducting an SRSWR. We then calculate selection weights as the inverse of the probability of selection and call this new dataset `apisrswr`.
```
set.seed(409963)
apisrswr <- apipop %>%
as_tibble() %>%
slice_sample(n = 200, replace = TRUE) %>%
select(cds, dnum, snum, dname, sname) %>%
mutate(weight = nrow(apipop) / 200)
head(apisrswr)
```
```
## # A tibble: 6 × 6
## cds dnum snum dname sname weight
## <chr> <int> <dbl> <chr> <chr> <dbl>
## 1 43696416060065 533 5348 Palo Alto Unified Jordan (Da… 31.0
## 2 07618046005060 650 509 San Ramon Valley Unified Alamo Elem… 31.0
## 3 19648086085674 457 2134 Montebello Unified La Merced … 31.0
## 4 07617056003719 346 377 Knightsen Elementary Knightsen … 31.0
## 5 19650606023022 744 2351 Torrance Unified Carr (Evel… 31.0
## 6 01611196090120 6 13 Alameda City Unified Paden (Wil… 31.0
```
Because this is an SRSWR design, there may be duplicates in the data. It is important to keep the duplicates in the data for proper estimation. For reference, we can view the duplicates in the example data we just created.
```
apisrswr %>%
group_by(cds) %>%
filter(n() > 1) %>%
arrange(cds)
```
```
## # A tibble: 4 × 6
## # Groups: cds [2]
## cds dnum snum dname sname weight
## <chr> <int> <dbl> <chr> <chr> <dbl>
## 1 15633216008841 41 869 Bakersfield City Elem Chipman Junio… 31.0
## 2 15633216008841 41 869 Bakersfield City Elem Chipman Junio… 31.0
## 3 39686766042782 716 4880 Stockton City Unified Tyler Skills … 31.0
## 4 39686766042782 716 4880 Stockton City Unified Tyler Skills … 31.0
```
We created a weight variable in this example data, which is the inverse of the probability of selection. We specify the sampling design for `apisrswr` as:
```
apisrswr_des <- apisrswr %>%
as_survey_design(weights = weight)
apisrswr_des
```
```
## Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - weights: weight
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), weight
## (dbl)
```
```
summary(apisrswr_des)
```
```
## Independent Sampling design (with replacement)
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0323 0.0323 0.0323 0.0323 0.0323 0.0323
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "weight"
```
In the output above, the design object and the object summary are shown. Both note that the sampling is done “with replacement” because no FPC was specified. The probabilities, which are derived from the weights, are summarized in the summary function output.
### 10\.2\.3 Stratified sampling
Stratified sampling occurs when a population is divided into mutually exclusive subpopulations (strata), and then samples are selected independently within each stratum.
* Requirements: The sampling frame must include the information to divide the population into strata for every unit.
* Advantages:
+ This design ensures sample representation in all subpopulations.
+ If the strata are correlated with survey outcomes, a stratified sample has smaller standard errors compared to a SRS sample of the same size.
+ This results in a more efficient design.
* Disadvantages: Auxiliary data may not exist to divide the sampling frame into strata, or the data may be outdated.
* Examples:
+ Example 1: A population of North Carolina residents could be stratified into urban and rural areas, and then an SRS of residents from both rural and urban areas is selected independently. This ensures there are residents from both areas in the sample.
+ Example 2: Law enforcement agencies could be stratified into the three primary general\-purpose categories in the U.S.: local police, sheriff’s departments, and state police. An SRS of agencies from each of the three types is then selected independently to ensure all three types of agencies are represented.
#### The math
Let \\(\\bar{y}\_h\\) be the sample mean for stratum \\(h\\), \\(N\_h\\) be the population size of stratum \\(h\\), \\(n\_h\\) be the sample size of stratum \\(h\\), and \\(H\\) be the total number of strata. Then, the estimate for the population mean under stratified SRS sampling is:
\\\[\\bar{y}\=\\frac{1}{N}\\sum\_{h\=1}^H N\_h\\bar{y}\_h\\]
and the estimate of the standard error of \\(\\bar{y}\\) is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{1}{N^2} \\sum\_{h\=1}^H N\_h^2 \\frac{s\_h^2}{n\_h}\\left(1\-\\frac{n\_h}{N\_h}\\right)} \\]
where
\\\[s\_h^2\=\\frac{1}{n\_h\-1}\\sum\_{i\=1}^{n\_h}\\left(y\_{i,h}\-\\bar{y}\_h\\right)^2\\]
For estimates of proportions, let \\(\\hat{p}\_h\\) be the estimated proportion in stratum \\(h\\). Then, the population proportion estimate is:
\\\[\\hat{p}\= \\frac{1}{N}\\sum\_{h\=1}^H N\_h \\hat{p}\_h\\]
The standard error of the proportion is:
\\\[se(\\hat{p}) \= \\frac{1}{N} \\sqrt{ \\sum\_{h\=1}^H N\_h^2 \\frac{\\hat{p}\_h(1\-\\hat{p}\_h)}{n\_h\-1} \\left(1\-\\frac{n\_h}{N\_h}\\right)}\\]
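A minimal sketch applying these formulas by hand with made\-up numbers for two strata:

```
# Hypothetical values for illustration only (H = 2 strata)
N_h <- c(400, 600)                      # stratum population sizes
y_h <- list(c(10, 12, 11, 13, 9),       # sampled outcomes, stratum 1
            c(20, 22, 19, 21))          # sampled outcomes, stratum 2
n_h <- lengths(y_h)                     # stratum sample sizes
N   <- sum(N_h)

ybar_h <- vapply(y_h, mean, numeric(1))
s2_h   <- vapply(y_h, var, numeric(1))

ybar    <- sum(N_h * ybar_h) / N
se_ybar <- sqrt(sum(N_h^2 * s2_h / n_h * (1 - n_h / N_h)) / N^2)

c(ybar, se_ybar)
```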
#### The syntax
In addition to the `fpc` and `weights` arguments discussed in the types above, stratified designs require the addition of the `strata` argument. For example, to specify a stratified SRS design in {srvyr} when using the FPC, that is, where the population sizes of the strata are not too large and are known, we specify the design as:
```
stsrs1_des <- dat %>%
as_survey_design(fpc = fpcvar,
strata = stratavar)
```
where `fpcvar` is a variable on our data that indicates \\(N\_h\\) for each row, and `stratavar` is a variable indicating the stratum for each row. We can omit the FPC if it is not applicable. Additionally, we can indicate the weight variable if it is present where `wtvar` is a variable on our data with a numeric weight.
```
stsrs2_des <- dat %>%
as_survey_design(weights = wtvar,
strata = stratavar)
```
#### Example
In the example APIP data, `apistrat` is a stratified random sample, stratified by school type (`stype`) with three levels: `E` for elementary school, `M` for middle school, and `H` for high school. As with the SRS example above, we sort and select specific variables for use in printing. The data are illustrated below, including a count of the number of cases per stratum:
```
apistrat_slim <-
apistrat %>%
as_tibble() %>%
arrange(dnum, snum) %>%
select(cds, dnum, snum, dname, sname, stype, fpc, pw)
apistrat_slim %>%
count(stype, fpc)
```
```
## # A tibble: 3 × 3
## stype fpc n
## <fct> <dbl> <int>
## 1 E 4421 100
## 2 H 755 50
## 3 M 1018 50
```
The FPC is the same for each case within each stratum. This output also shows that 100 elementary schools, 50 middle schools, and 50 high schools were sampled. It is common for the number of units sampled from each stratum to differ based on the goals of the project or to mirror the size of each stratum in the population. We specify the design as:
```
apistrat_des <- apistrat_slim %>%
as_survey_design(
strata = stype,
weights = pw,
fpc = fpc
)
apistrat_des
```
```
## Stratified Independent Sampling design
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - fpc: fpc
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), stype
## (fct), fpc (dbl), pw (dbl)
```
```
summary(apistrat_des)
```
```
## Stratified Independent Sampling design
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0226 0.0226 0.0359 0.0401 0.0534 0.0662
## Stratum Sizes:
## E H M
## obs 100 50 50
## design.PSU 100 50 50
## actual.PSU 100 50 50
## Population stratum sizes (PSUs):
## E H M
## 4421 755 1018
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "stype" "fpc" "pw"
```
When printing the object, it is specified as a “Stratified Independent Sampling design,” also known as a stratified SRS, and the strata variable is included. Printing the summary, we see a distribution of probabilities, as we saw with SRS; but we also see the sample and population sizes by stratum.
### 10\.2\.4 Clustered sampling
Clustered sampling occurs when a population is divided into mutually exclusive subgroups called clusters or primary sampling units (PSUs). A random selection of PSUs is sampled, and then another level of sampling is done within these clusters. There can be multiple levels of this selection. Clustered sampling is often used when a list of the entire population is not available or data collection involves interviewers needing direct contact with respondents.
* Requirements: There must be a way to divide the population into clusters. Clusters are commonly structural, such as institutions (e.g., schools, prisons) or geography (e.g., states, counties).
* Advantages:
+ Clustered sampling is advantageous when data collection is done in person, so interviewers are sent to specific sampled areas rather than completely at random across a country.
+ With clustered sampling, a list of the entire population is not necessary. For example, if sampling students, we do not need a list of all students, but only a list of all schools. Once the schools are sampled, lists of students can be obtained within the sampled schools.
* Disadvantages: Compared to a simple random sample for the same sample size, clustered samples generally have larger standard errors of estimates.
* Examples:
+ Example 1: Consider a study needing a sample of 6th\-grade students in the United States. No list likely exists of all these students. However, it is more likely to obtain a list of schools that enroll 6th graders, so a study design could select a random sample of schools that enroll 6th graders. The selected schools can then provide a list of students to do a second stage of sampling where 6th\-grade students are randomly sampled within each of the sampled schools. This is a one\-stage sample design (the one representing the number of clusters) and is the type of design we discuss in the formulas below.
+ Example 2: Consider a study sending interviewers to households for a survey. This is a more complicated example that requires two levels of clustering (two\-stage sample design) to efficiently use interviewers in geographic clusters. First, in the U.S., counties could be selected as the PSU and then census block groups within counties could be selected as the secondary sampling unit (SSU). Households could then be randomly sampled within the block groups. This type of design is popular for in\-person surveys, as it reduces the travel necessary for interviewers.
#### The math
Consider a survey where \\(a\\) clusters are sampled from a population of \\(A\\) clusters via SRS. Within each sampled cluster, \\(i\\), there are \\(B\_i\\) units in the population, and \\(b\_i\\) units are sampled via SRS. Let \\(\\bar{y}\_{i}\\) be the sample mean of cluster \\(i\\). Then, a ratio estimator of the population mean is:
\\\[\\bar{y}\=\\frac{\\sum\_{i\=1}^a B\_i \\bar{y}\_{i}}{ \\sum\_{i\=1}^a B\_i}\\]
Note this is a consistent but biased estimator. Often the population size is not known, so this is a method to estimate a mean without knowing the population size. The estimated standard error of the mean is:
\\\[se(\\bar{y})\= \\frac{1}{\\hat{N}}\\sqrt{\\left(1\-\\frac{a}{A}\\right)\\frac{s\_a^2}{a} \+ \\frac{A}{a} \\sum\_{i\=1}^a \\left(1\-\\frac{b\_i}{B\_i}\\right) \\frac{s\_i^2}{b\_i} }\\]
where \\(\\hat{N}\\) is the estimated population size, \\(s\_a^2\\) is the between\-cluster variance, and \\(s\_i^2\\) is the within\-cluster variance.
The formula for the between\-cluster variance (\\(s\_a^2\\)) is:
\\\[s\_a^2\=\\frac{1}{a\-1}\\sum\_{i\=1}^a \\left( \\hat{y}\_i \- \\frac{\\sum\_{i\=1}^a \\hat{y}\_{i} }{a}\\right)^2\\]
where \\(\\hat{y}\_i \=B\_i\\bar{y\_i}\\).
The formula for the within\-cluster variance (\\(s\_i^2\\)) is:
\\\[s\_i^2\=\\frac{1}{a(b\_i\-1\)} \\sum\_{j\=1}^{b\_i} \\left(y\_{ij}\-\\bar{y}\_i\\right)^2\\]
where \\(y\_{ij}\\) is the outcome for sampled unit \\(j\\) within cluster \\(i\\).
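A minimal sketch applying these formulas by hand with made\-up numbers follows. The text does not define \\(\\hat{N}\\) explicitly; the sketch assumes the usual expansion estimate \\(\\hat{N}\=(A/a)\\sum\_{i\=1}^a B\_i\\), so treat it as an illustration of the mechanics rather than a definitive implementation:

```
# Hypothetical values for illustration only: a = 3 clusters sampled from A = 20
A   <- 20
y_i <- list(c(10, 12, 11),              # sampled outcomes, cluster 1
            c(20, 18, 22, 19),          # sampled outcomes, cluster 2
            c(15, 16))                  # sampled outcomes, cluster 3
B_i <- c(30, 40, 25)                    # population units in each sampled cluster
b_i <- lengths(y_i)                     # sampled units in each cluster
a   <- length(y_i)

ybar_i <- vapply(y_i, mean, numeric(1))
yhat_i <- B_i * ybar_i
N_hat  <- (A / a) * sum(B_i)            # assumed expansion estimate of N

ybar <- sum(yhat_i) / sum(B_i)          # ratio estimator of the mean

s_a2 <- var(yhat_i)                       # between-cluster variance
s_i2 <- vapply(y_i, var, numeric(1)) / a  # within-cluster variance

se_ybar <- (1 / N_hat) *
  sqrt((1 - a / A) * s_a2 / a + (A / a) * sum((1 - b_i / B_i) * s_i2 / b_i))

c(ybar, se_ybar)
```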
#### The syntax
Clustered sampling designs require the addition of the `ids` argument, which specifies the cluster level variable(s). To specify a two\-stage clustered design without replacement, we specify the design as:
```
clus2_des <- dat %>%
as_survey_design(weights = wtvar,
ids = c(PSU, SSU),
fpc = c(A, B))
```
where `PSU` and `SSU` are the variables indicating the PSU and SSU identifiers, and `A` and `B` are the variables indicating the population sizes for each level (i.e., `A` is the number of clusters, and `B` is the number of units within each cluster). Note that `A` is the same for all records, and `B` is the same for all records within the same cluster.
If clusters were sampled with replacement or from a very large population, the FPC is unnecessary. Additionally, only the first stage of selection is necessary regardless of whether the units were selected with replacement at any stage. The subsequent stages of selection are ignored in computation as their contribution to the variance is overpowered by the first stage (see Särndal, Swensson, and Wretman ([2003](#ref-sarndal2003model)) or Wolter ([2007](#ref-wolter2007introduction)) for a more in\-depth discussion). Therefore, the two design objects specified below yield the same estimates in the end:
```
clus2ex1_des <- dat %>%
as_survey_design(weights = wtvar,
ids = c(PSU, SSU))
clus2ex2_des <- dat %>%
as_survey_design(weights = wtvar,
ids = PSU)
```
Note that there is one additional argument that is sometimes necessary, which is `nest = TRUE`. This option relabels cluster IDs to enforce nesting within strata. Sometimes, as an example, there may be a cluster `1` within each stratum, but cluster `1` in stratum `1` is a different cluster than cluster `1` in stratum `2`. These are actually different clusters. This option indicates that repeated numbering does not mean it is the same cluster. If this option is not used and there are repeated cluster IDs across different strata, an error is generated.
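A minimal sketch of this option, assuming hypothetical variables `stratvar` and `clustvar` where the cluster IDs restart at 1 within each stratum:

```
clus_nest_des <- dat %>%
  as_survey_design(weights = wtvar,
                   strata = stratvar,
                   ids = clustvar,
                   nest = TRUE)
```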
#### Example
The {survey} package includes a two\-stage cluster sample dataset, `apiclus2`, in which school districts were sampled, and then a random sample of five schools was selected within each district. For districts with fewer than five schools, all schools were sampled. School districts are identified by `dnum`, and schools are identified by `snum`. The variable `fpc1` indicates how many districts there are in California (the total number of PSUs or `A`), and `fpc2` indicates how many schools were in a given district with at least 100 students (the total number of SSUs or `B`). The data include a row for each school. In the data printed below, there are 757 school districts, as indicated by `fpc1`, and there are nine schools in District 731, one school in District 742, two schools in District 768, and so on as indicated by `fpc2`. For illustration purposes, the object `apiclus2_slim` has been created from `apiclus2`, which subsets the data to only the necessary columns and sorts the data.
```
apiclus2_slim <-
apiclus2 %>%
as_tibble() %>%
arrange(desc(dnum), snum) %>%
select(cds, dnum, snum, fpc1, fpc2, pw)
apiclus2_slim
```
```
## # A tibble: 126 × 6
## cds dnum snum fpc1 fpc2 pw
## <chr> <int> <dbl> <dbl> <int[1d]> <dbl>
## 1 47704826050942 795 5552 757 1 18.9
## 2 07618126005169 781 530 757 6 22.7
## 3 07618126005177 781 531 757 6 22.7
## 4 07618126005185 781 532 757 6 22.7
## 5 07618126005193 781 533 757 6 22.7
## 6 07618126005243 781 535 757 6 22.7
## 7 19650786023337 768 2371 757 2 18.9
## 8 19650786023345 768 2372 757 2 18.9
## 9 54722076054423 742 5898 757 1 18.9
## 10 50712906053086 731 5781 757 9 34.1
## # ℹ 116 more rows
```
To specify this design in R, we use the following:
```
apiclus2_des <- apiclus2_slim %>%
as_survey_design(
ids = c(dnum, snum),
fpc = c(fpc1, fpc2),
weights = pw
)
apiclus2_des
```
```
## 2 - level Cluster Sampling design
## With (40, 126) clusters.
## Called via srvyr
## Sampling variables:
## - ids: `dnum + snum`
## - fpc: `fpc1 + fpc2`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), fpc1 (dbl), fpc2 (int[1d]), pw
## (dbl)
```
```
summary(apiclus2_des)
```
```
## 2 - level Cluster Sampling design
## With (40, 126) clusters.
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00367 0.03774 0.05284 0.04239 0.05284 0.05284
## Population size (PSUs): 757
## Data variables:
## [1] "cds" "dnum" "snum" "fpc1" "fpc2" "pw"
```
The design objects are described as “2 \- level Cluster Sampling design,” and include the ids (cluster), FPC, and weight variables. The summary notes that the sample includes 40 first\-level clusters (PSUs), which are school districts, and 126 second\-level clusters (SSUs), which are schools. Additionally, the summary includes a numeric summary of the probabilities of selection and the population size (number of PSUs) as 757\.
### 10\.2\.1 Simple random sample without replacement
The simple random sample (SRS) without replacement is a sampling design in which a fixed sample size is selected from a sampling frame, and every possible subsample has an equal probability of selection. Without replacement refers to the fact that once a sampling unit has been selected, it is removed from the sample frame and cannot be selected again.
* Requirements: The sampling frame must include the entire population.
* Advantages: SRS requires no information about the units apart from contact information.
* Disadvantages: The sampling frame may not be available for the entire population.
* Example: Randomly select students in a university from a roster provided by the registrar’s office.
#### The math
The estimate for the population mean of variable \\(y\\) is:
\\\[\\bar{y}\=\\frac{1}{n}\\sum\_{i\=1}^n y\_i\\]
where \\(\\bar{y}\\) represents the sample mean, \\(n\\) is the total number of respondents (or observations), and \\(y\_i\\) is each individual value of \\(y\\).
The estimate of the standard error of the mean is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{s^2}{n}\\left( 1\-\\frac{n}{N} \\right)}\\] where
\\\[s^2\=\\frac{1}{n\-1}\\sum\_{i\=1}^n\\left(y\_i\-\\bar{y}\\right)^2\.\\]
and \\(N\\) is the population size. This standard error estimate might look very similar to equations in other statistical applications except for the part on the right side of the equation: \\(1\-\\frac{n}{N}\\). This is called the finite population correction (FPC) factor. If the size of the frame, \\(N\\), is very large in comparison to the sample, the FPC is negligible, so it is often ignored. A common guideline is if the sample is less than 10% of the population, the FPC is negligible.
To estimate proportions, we define \\(x\_i\\) as the indicator if the outcome is observed. That is, \\(x\_i\=1\\) if the outcome is observed, and \\(x\_i\=0\\) if the outcome is not observed for respondent \\(i\\). Then the estimated proportion from an SRS design is:
\\\[\\hat{p}\=\\frac{1}{n}\\sum\_{i\=1}^n x\_i \\]
and the estimated standard error of the proportion is:
\\\[se(\\hat{p})\=\\sqrt{\\frac{\\hat{p}(1\-\\hat{p})}{n\-1}\\left(1\-\\frac{n}{N}\\right)} \\]
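As a hand\-computed companion to these formulas, the helper below is a minimal sketch (not code from this book) that returns the SRS mean and its FPC\-corrected standard error. The commented usage assumes the {survey} package's API data have been loaded with `data(api)` and that `apisrs` contains the score variable `api00`; both names are assumptions for illustration.
```
# Minimal sketch: SRS mean and FPC-corrected standard error computed by hand.
# `y` is a numeric sample and `N` is the (assumed known) sampling frame size.
srs_mean_se <- function(y, N) {
  n <- length(y)
  ybar <- mean(y)
  s2 <- var(y)                      # s^2 = 1/(n-1) * sum((y - ybar)^2)
  se <- sqrt(s2 / n * (1 - n / N))  # standard error with the FPC
  c(mean = ybar, se = se)
}

# Example (assumes data(api) from the {survey} package has been run):
# srs_mean_se(apisrs$api00, N = 6194)
```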
#### The syntax
If a sample was drawn through SRS and had no nonresponse or other weighting adjustments, we specify this design in R as:
```
srs1_des <- dat %>%
as_survey_design(fpc = fpcvar)
```
where `dat` is a tibble or data.frame with the survey data, and `fpcvar` is a variable in the data indicating the sampling frame’s size (this variable has the same value for all cases in an SRS design). If the frame is very large, sometimes the frame size is not provided. In that case, the FPC is not needed, and we specify the design as:
```
srs2_des <- dat %>%
as_survey_design()
```
If some post\-survey adjustments were implemented and the weights are not all equal, we specify the design as:
```
srs3_des <- dat %>%
as_survey_design(weights = wtvar,
fpc = fpcvar)
```
where `wtvar` is a variable in the data indicating the weight for each case. Again, the FPC can be omitted if it is unnecessary because the frame is large compared to the sample size.
#### Example
The {survey} package in R provides some example datasets that we use throughout this chapter. One of the example datasets we use is from the Academic Performance Index Program (APIP). The APIP is administered by the California Department of Education, and the {survey} package includes a population file (sample frame) of all schools with at least 100 students and several different samples pulled from that data using different sampling methods. For this first example, we use the `apisrs` dataset, which contains an SRS of 200 schools. For printing purposes, we create a new dataset called `apisrs_slim`, which sorts the data by the school district and school ID and subsets the data to only a few columns. The SRS sample data are illustrated below:
```
apisrs_slim <-
apisrs %>%
as_tibble() %>%
arrange(dnum, snum) %>%
select(cds, dnum, snum, dname, sname, fpc, pw)
apisrs_slim
```
```
## # A tibble: 200 × 7
## cds dnum snum dname sname fpc pw
## <chr> <int> <dbl> <chr> <chr> <dbl> <dbl>
## 1 19642126061220 1 1121 ABC Unified Haske… 6194 31.0
## 2 19642126066716 1 1124 ABC Unified Stowe… 6194 31.0
## 3 36675876035174 5 3895 Adelanto Elementary Adela… 6194 31.0
## 4 33669776031512 19 3347 Alvord Unified Arlan… 6194 31.0
## 5 33669776031595 19 3352 Alvord Unified Wells… 6194 31.0
## 6 31667876031033 39 3271 Auburn Union Elementary Cain … 6194 31.0
## 7 19642876011407 42 1169 Baldwin Park Unified Deanz… 6194 31.0
## 8 19642876011464 42 1175 Baldwin Park Unified Heath… 6194 31.0
## 9 19642956011589 48 1187 Bassett Unified Erwin… 6194 31.0
## 10 41688586043392 49 4948 Bayshore Elementary Baysh… 6194 31.0
## # ℹ 190 more rows
```
Table [10\.1](c10-sample-designs-replicate-weights.html#tab:apidata) provides details on all the variables in this dataset.
TABLE 10\.1: Overview of Variables in APIP Data
| Variable Name | Description |
| --- | --- |
| `cds` | Unique identifier for each school |
| `dnum` | School district identifier within county |
| `snum` | School identifier within district |
| `dname` | District Name |
| `sname` | School Name |
| `fpc` | Finite population correction factor |
| `pw` | Weight |
To create the survey design object (a `tbl_svy`) for the SRS data, we specify the design as:
```
apisrs_des <- apisrs_slim %>%
as_survey_design(
weights = pw,
fpc = fpc
)
apisrs_des
```
```
## Independent Sampling design
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - fpc: fpc
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), fpc
## (dbl), pw (dbl)
```
In the printed design object, the design is described as an “Independent Sampling design,” which is another term for SRS. The ids are specified as `1`, which means there is no clustering (a topic described in Section [10\.2\.4](c10-sample-designs-replicate-weights.html#samp-cluster)), the FPC variable is indicated, and the weights are indicated. We can also look at the summary of the design object (`summary()`) and see the distribution of the probabilities (inverse of the weights) along with the population size and a list of the variables in the dataset.
```
summary(apisrs_des)
```
```
## Independent Sampling design
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0323 0.0323 0.0323 0.0323 0.0323 0.0323
## Population size (PSUs): 6194
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "fpc" "pw"
```
### 10\.2\.2 Simple random sample with replacement
Similar to the SRS design, the simple random sample with replacement (SRSWR) design randomly selects the sample from the entire sampling frame. However, while SRS removes sampled units before selecting again, the SRSWR instead replaces each sampled unit before drawing again, so units can be selected more than once.
* Requirements: The sampling frame must include the entire population.
* Advantages: SRSWR requires no information about the units apart from contact information.
* Disadvantages:
+ The sampling frame may not be available for the entire population.
+ Units can be selected more than once, resulting in a smaller realized sample size because receiving duplicate information from a single respondent does not provide additional information.
+ For small populations, SRSWR has larger standard errors than SRS designs.
* Example: A professor puts all students’ names on paper slips and selects them randomly to ask students questions, but the professor replaces the paper after calling on the student so they can be selected again at any time.
In general, for surveys, an SRS design (without replacement) is preferred, as we do not want respondents to answer a survey more than once.
#### The math
The estimate for the population mean of variable \\(y\\) is:
\\\[\\bar{y}\=\\frac{1}{n}\\sum\_{i\=1}^n y\_i\\]
and the estimate of the standard error of mean is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{s^2}{n}}\\] where
\\\[s^2\=\\frac{1}{n\-1}\\sum\_{i\=1}^n\\left(y\_i\-\\bar{y}\\right)^2\.\\]
To calculate the estimated proportion, we define \\(x\_i\\) as the indicator that the outcome is observed (as we did with SRS):
\\\[\\hat{p}\=\\frac{1}{n}\\sum\_{i\=1}^n x\_i \\]
and the estimated standard error of the proportion is:
\\\[se(\\hat{p})\=\\sqrt{\\frac{\\hat{p}(1\-\\hat{p})}{n}} \\]
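The only difference from the SRS formulas above is the missing FPC term. A quick worked check (not from the book) using the API frame size of 6,194 schools and a sample of 200 shows why this matters little for large frames, which is the point made in the syntax discussion below.
```
# The FPC term for the API SRS example: with N = 6194 and n = 200, the
# variance is reduced by only about 3%, so SRS and SRSWR standard errors
# are nearly identical for a frame this large.
n <- 200
N <- 6194
1 - n / N        # FPC factor applied to the variance
#> [1] 0.9677107
sqrt(1 - n / N)  # corresponding reduction in the standard error
#> [1] 0.9837229
```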
#### The syntax
If we had a sample that was drawn through SRSWR and had no nonresponse or other weighting adjustments, in R, we specify this design as:
```
srswr1_des <- dat %>%
as_survey_design()
```
where `dat` is a tibble or data.frame containing our survey data. This syntax is the same as an SRS design, except an FPC is not included. This is because when calculating a sample with replacement, the population pool to select from is no longer finite, so a correction is not needed. Therefore, with large populations where the FPC is negligible, the underlying formulas for SRS and SRSWR designs are the same.
If some post\-survey adjustments were implemented and the weights are not all equal, we specify the design as:
```
srswr2_des <- dat %>%
as_survey_design(weights = wtvar)
```
where `wtvar` is the variable for the weight of the data.
#### Example
The {survey} package does not include an example of SRSWR. To illustrate this design, we need to create an example. We use the APIP population data provided by the {survey} package (`apipop`) and select a sample of 200 cases using the `slice_sample()` function from the tidyverse. One of the arguments in the `slice_sample()` function is `replace`. If `replace=TRUE`, then we are conducting an SRSWR. We then calculate selection weights as the inverse of the probability of selection and call this new dataset `apisrswr`.
```
set.seed(409963)
apisrswr <- apipop %>%
as_tibble() %>%
slice_sample(n = 200, replace = TRUE) %>%
select(cds, dnum, snum, dname, sname) %>%
mutate(weight = nrow(apipop) / 200)
head(apisrswr)
```
```
## # A tibble: 6 × 6
## cds dnum snum dname sname weight
## <chr> <int> <dbl> <chr> <chr> <dbl>
## 1 43696416060065 533 5348 Palo Alto Unified Jordan (Da… 31.0
## 2 07618046005060 650 509 San Ramon Valley Unified Alamo Elem… 31.0
## 3 19648086085674 457 2134 Montebello Unified La Merced … 31.0
## 4 07617056003719 346 377 Knightsen Elementary Knightsen … 31.0
## 5 19650606023022 744 2351 Torrance Unified Carr (Evel… 31.0
## 6 01611196090120 6 13 Alameda City Unified Paden (Wil… 31.0
```
Because this is an SRS design with replacement, there may be duplicates in the data. It is important to keep the duplicates in the data for proper estimation. For reference, we can view the duplicates in the example data we just created.
```
apisrswr %>%
group_by(cds) %>%
filter(n() > 1) %>%
arrange(cds)
```
```
## # A tibble: 4 × 6
## # Groups: cds [2]
## cds dnum snum dname sname weight
## <chr> <int> <dbl> <chr> <chr> <dbl>
## 1 15633216008841 41 869 Bakersfield City Elem Chipman Junio… 31.0
## 2 15633216008841 41 869 Bakersfield City Elem Chipman Junio… 31.0
## 3 39686766042782 716 4880 Stockton City Unified Tyler Skills … 31.0
## 4 39686766042782 716 4880 Stockton City Unified Tyler Skills … 31.0
```
We created a weight variable in this example data, which is the inverse of the probability of selection. We specify the sampling design for `apisrswr` as:
```
apisrswr_des <- apisrswr %>%
as_survey_design(weights = weight)
apisrswr_des
```
```
## Independent Sampling design (with replacement)
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - weights: weight
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), weight
## (dbl)
```
```
summary(apisrswr_des)
```
```
## Independent Sampling design (with replacement)
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0323 0.0323 0.0323 0.0323 0.0323 0.0323
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "weight"
```
In the output above, the design object and the object summary are shown. Both note that the sampling is done “with replacement” because no FPC was specified. The probabilities, which are derived from the weights, are summarized in the summary function output.
### 10\.2\.3 Stratified sampling
Stratified sampling occurs when a population is divided into mutually exclusive subpopulations (strata), and then samples are selected independently within each stratum.
* Requirements: The sampling frame must include the information to divide the population into strata for every unit.
* Advantages:
+ This design ensures sample representation in all subpopulations.
+ If the strata are correlated with survey outcomes, a stratified sample has smaller standard errors compared to an SRS sample of the same size.
+ This results in a more efficient design.
* Disadvantages: Auxiliary data may not exist to divide the sampling frame into strata, or the data may be outdated.
* Examples:
+ Example 1: A population of North Carolina residents could be stratified into urban and rural areas, and then an SRS of residents from both rural and urban areas is selected independently. This ensures there are residents from both areas in the sample.
+ Example 2: Law enforcement agencies could be stratified into the three primary general\-purpose categories in the U.S.: local police, sheriff’s departments, and state police. An SRS of agencies from each of the three types is then selected independently to ensure all three types of agencies are represented.
#### The math
Let \\(\\bar{y}\_h\\) be the sample mean for stratum \\(h\\), \\(N\_h\\) be the population size of stratum \\(h\\), \\(n\_h\\) be the sample size of stratum \\(h\\), and \\(H\\) be the total number of strata. Then, the estimate for the population mean under stratified SRS sampling is:
\\\[\\bar{y}\=\\frac{1}{N}\\sum\_{h\=1}^H N\_h\\bar{y}\_h\\]
and the estimate of the standard error of \\(\\bar{y}\\) is:
\\\[se(\\bar{y})\=\\sqrt{\\frac{1}{N^2} \\sum\_{h\=1}^H N\_h^2 \\frac{s\_h^2}{n\_h}\\left(1\-\\frac{n\_h}{N\_h}\\right)} \\]
where
\\\[s\_h^2\=\\frac{1}{n\_h\-1}\\sum\_{i\=1}^{n\_h}\\left(y\_{i,h}\-\\bar{y}\_h\\right)^2\\]
For estimates of proportions, let \\(\\hat{p}\_h\\) be the estimated proportion in stratum \\(h\\). Then, the population proportion estimate is:
\\\[\\hat{p}\= \\frac{1}{N}\\sum\_{h\=1}^H N\_h \\hat{p}\_h\\]
The standard error of the proportion is:
\\\[se(\\hat{p}) \= \\frac{1}{N} \\sqrt{ \\sum\_{h\=1}^H N\_h^2 \\frac{\\hat{p}\_h(1\-\\hat{p}\_h)}{n\_h\-1} \\left(1\-\\frac{n\_h}{N\_h}\\right)}\\]
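As a hand\-computed companion to these formulas, the helper below is a minimal sketch (not code from this book). It takes the sample values, their stratum labels, and the known population stratum sizes, and returns the stratified mean and its standard error. The commented usage assumes `data(api)` has been run and that `apistrat` contains `api00` and `stype`; these names are assumptions for illustration.
```
# Minimal sketch: stratified mean and standard error computed by hand.
# `y` is the outcome, `stratum` labels each observation, and `N_h` is a named
# vector of known population sizes per stratum (at least two sampled units
# per stratum are assumed so the within-stratum variance exists).
strat_mean_se <- function(y, stratum, N_h) {
  by_h <- split(y, stratum)
  N_h <- N_h[names(by_h)]              # align population sizes with strata
  n_h <- sapply(by_h, length)
  ybar_h <- sapply(by_h, mean)
  s2_h <- sapply(by_h, var)
  N <- sum(N_h)
  ybar <- sum(N_h * ybar_h) / N
  se <- sqrt(sum(N_h^2 * s2_h / n_h * (1 - n_h / N_h)) / N^2)
  c(mean = ybar, se = se)
}

# Example (assumes data(api) from the {survey} package has been run):
# strat_mean_se(apistrat$api00, apistrat$stype,
#               N_h = c(E = 4421, H = 755, M = 1018))
```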
#### The syntax
In addition to the `fpc` and `weights` arguments discussed in the types above, stratified designs require the addition of the `strata` argument. For example, to specify a stratified SRS design in {srvyr} when using the FPC, that is, where the population sizes of the strata are not too large and are known, we specify the design as:
```
stsrs1_des <- dat %>%
as_survey_design(fpc = fpcvar,
strata = stratavar)
```
where `fpcvar` is a variable on our data that indicates \\(N\_h\\) for each row, and `stratavar` is a variable indicating the stratum for each row. We can omit the FPC if it is not applicable. Additionally, we can indicate the weight variable, if present, where `wtvar` is a variable on our data with a numeric weight.
```
stsrs2_des <- dat %>%
as_survey_design(weights = wtvar,
strata = stratavar)
```
#### Example
In the example APIP data, `apistrat` is a stratified random sample, stratified by school type (`stype`) with three levels: `E` for elementary school, `M` for middle school, and `H` for high school. As with the SRS example above, we sort and select specific variables for use in printing. The data are illustrated below, including a count of the number of cases per stratum:
```
apistrat_slim <-
apistrat %>%
as_tibble() %>%
arrange(dnum, snum) %>%
select(cds, dnum, snum, dname, sname, stype, fpc, pw)
apistrat_slim %>%
count(stype, fpc)
```
```
## # A tibble: 3 × 3
## stype fpc n
## <fct> <dbl> <int>
## 1 E 4421 100
## 2 H 755 50
## 3 M 1018 50
```
The FPC is the same for each case within each stratum. This output also shows that 100 elementary schools, 50 middle schools, and 50 high schools were sampled. It is common for the number of units sampled from each stratum to differ based on the goals of the project, or to mirror the relative size of each stratum in the population. We specify the design as:
```
apistrat_des <- apistrat_slim %>%
as_survey_design(
strata = stype,
weights = pw,
fpc = fpc
)
apistrat_des
```
```
## Stratified Independent Sampling design
## Called via srvyr
## Sampling variables:
## - ids: `1`
## - strata: stype
## - fpc: fpc
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), dname (chr), sname (chr), stype
## (fct), fpc (dbl), pw (dbl)
```
```
summary(apistrat_des)
```
```
## Stratified Independent Sampling design
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0226 0.0226 0.0359 0.0401 0.0534 0.0662
## Stratum Sizes:
## E H M
## obs 100 50 50
## design.PSU 100 50 50
## actual.PSU 100 50 50
## Population stratum sizes (PSUs):
## E H M
## 4421 755 1018
## Data variables:
## [1] "cds" "dnum" "snum" "dname" "sname" "stype" "fpc" "pw"
```
When printing the object, it is specified as a “Stratified Independent Sampling design,” also known as a stratified SRS, and the strata variable is included. Printing the summary, we see a distribution of probabilities, as we saw with SRS; but we also see the sample and population sizes by stratum.
### 10\.2\.4 Clustered sampling
Clustered sampling occurs when a population is divided into mutually exclusive subgroups called clusters or primary sampling units (PSUs). A random selection of PSUs is sampled, and then another level of sampling is done within these clusters. There can be multiple levels of this selection. Clustered sampling is often used when a list of the entire population is not available or data collection involves interviewers needing direct contact with respondents.
* Requirements: There must be a way to divide the population into clusters. Clusters are commonly structural, such as institutions (e.g., schools, prisons) or geography (e.g., states, counties).
* Advantages:
+ Clustered sampling is advantageous when data collection is done in person, so interviewers are sent to specific sampled areas rather than completely at random across a country.
+ With clustered sampling, a list of the entire population is not necessary. For example, if sampling students, we do not need a list of all students, but only a list of all schools. Once the schools are sampled, lists of students can be obtained within the sampled schools.
* Disadvantages: Compared to a simple random sample for the same sample size, clustered samples generally have larger standard errors of estimates.
* Examples:
+ Example 1: Consider a study needing a sample of 6th\-grade students in the United States. No list likely exists of all these students. However, it is more likely to obtain a list of schools that enroll 6th graders, so a study design could select a random sample of schools that enroll 6th graders. The selected schools can then provide a list of students to do a second stage of sampling where 6th\-grade students are randomly sampled within each of the sampled schools. This is a one\-stage sample design (the one representing the number of clusters) and is the type of design we discuss in the formulas below.
+ Example 2: Consider a study sending interviewers to households for a survey. This is a more complicated example that requires two levels of clustering (two\-stage sample design) to efficiently use interviewers in geographic clusters. First, in the U.S., counties could be selected as the PSU and then census block groups within counties could be selected as the secondary sampling unit (SSU). Households could then be randomly sampled within the block groups. This type of design is popular for in\-person surveys, as it reduces the travel necessary for interviewers.
#### The math
Consider a survey where \\(a\\) clusters are sampled from a population of \\(A\\) clusters via SRS. Within each sampled cluster, \\(i\\), there are \\(B\_i\\) units in the population, and \\(b\_i\\) units are sampled via SRS. Let \\(\\bar{y}\_{i}\\) be the sample mean of cluster \\(i\\). Then, a ratio estimator of the population mean is:
\\\[\\bar{y}\=\\frac{\\sum\_{i\=1}^a B\_i \\bar{y}\_{i}}{ \\sum\_{i\=1}^a B\_i}\\]
Note this is a consistent but biased estimator. Often the population size is not known, so this is a method to estimate a mean without knowing the population size. The estimated standard error of the mean is:
\\\[se(\\bar{y})\= \\frac{1}{\\hat{N}}\\sqrt{\\left(1\-\\frac{a}{A}\\right)\\frac{s\_a^2}{a} \+ \\frac{A}{a} \\sum\_{i\=1}^a \\left(1\-\\frac{b\_i}{B\_i}\\right) \\frac{s\_i^2}{b\_i} }\\]
where \\(\\hat{N}\\) is the estimated population size, \\(s\_a^2\\) is the between\-cluster variance, and \\(s\_i^2\\) is the within\-cluster variance.
The formula for the between\-cluster variance (\\(s\_a^2\\)) is:
\\\[s\_a^2\=\\frac{1}{a\-1}\\sum\_{i\=1}^a \\left( \\hat{y}\_i \- \\frac{\\sum\_{i\=1}^a \\hat{y}\_{i} }{a}\\right)^2\\]
where \\(\\hat{y}\_i \=B\_i\\bar{y\_i}\\).
The formula for the within\-cluster variance (\\(s\_i^2\\)) is:
\\\[s\_i^2\=\\frac{1}{a(b\_i\-1\)} \\sum\_{j\=1}^{b\_i} \\left(y\_{ij}\-\\bar{y}\_i\\right)^2\\]
where \\(y\_{ij}\\) is the outcome for sampled unit \\(j\\) within cluster \\(i\\).
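The point estimate above can be computed by hand with a few lines; the sketch below (not code from this book, with placeholder argument names) implements only the ratio estimator of the mean and leaves the variance terms to the design object.
```
# Minimal sketch: the ratio estimator of the mean for a cluster sample.
# `y` is the outcome, `psu` identifies each sampled cluster, and `B` is a
# named vector giving the known population size B_i of each sampled cluster.
clus_ratio_mean <- function(y, psu, B) {
  ybar_i <- tapply(y, psu, mean)   # sample mean within each sampled cluster
  B <- B[names(ybar_i)]            # align cluster sizes with the means
  sum(B * ybar_i) / sum(B)         # weighted (ratio) estimate of the mean
}
```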
#### The syntax
Clustered sampling designs require the addition of the `ids` argument, which specifies the cluster level variable(s). To specify a two\-stage clustered design without replacement, we specify the design as:
```
clus2_des <- dat %>%
as_survey_design(weights = wtvar,
ids = c(PSU, SSU),
fpc = c(A, B))
```
where `PSU` and `SSU` are the variables indicating the PSU and SSU identifiers, and `A` and `B` are the variables indicating the population sizes for each level (i.e., `A` is the number of clusters, and `B` is the number of units within each cluster). Note that `A` is the same for all records, and `B` is the same for all records within the same cluster.
If clusters were sampled with replacement or from a very large population, the FPC is unnecessary. Additionally, only the first stage of selection is necessary regardless of whether the units were selected with replacement at any stage. The subsequent stages of selection are ignored in computation as their contribution to the variance is overpowered by the first stage (see Särndal, Swensson, and Wretman ([2003](#ref-sarndal2003model)) or Wolter ([2007](#ref-wolter2007introduction)) for a more in\-depth discussion). Therefore, the two design objects specified below yield the same estimates in the end:
```
clus2ex1_des <- dat %>%
as_survey_design(weights = wtvar,
ids = c(PSU, SSU))
clus2ex2_des <- dat %>%
as_survey_design(weights = wtvar,
ids = PSU)
```
Note that there is one additional argument that is sometimes necessary: `nest = TRUE`. This option relabels cluster IDs to enforce nesting within strata. For example, there may be a cluster `1` within each stratum, but cluster `1` in stratum `1` is a different cluster than cluster `1` in stratum `2`. This option indicates that repeated numbering does not mean it is the same cluster. If it is not used and there are repeated cluster IDs across different strata, an error is generated.
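A brief sketch of this option is shown below; `dat`, `wtvar`, `stratavar`, and `PSU` are placeholder names, not variables from any particular dataset.
```
stclus_des <- dat %>%
  as_survey_design(weights = wtvar,
                   strata = stratavar,
                   ids = PSU,
                   # cluster numbering restarts within each stratum, so tell
                   # {srvyr} that the clusters are nested within the strata
                   nest = TRUE)
```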
#### Example
The {survey} package includes a two\-stage cluster sample dataset, `apiclus2`, in which school districts were sampled, and then a random sample of five schools was selected within each district. For districts with fewer than five schools, all schools were sampled. School districts are identified by `dnum`, and schools are identified by `snum`. The variable `fpc1` indicates how many districts there are in California (the total number of PSUs or `A`), and `fpc2` indicates how many schools were in a given district with at least 100 students (the total number of SSUs or `B`). The data include a row for each school. In the data printed below, there are 757 school districts, as indicated by `fpc1`, and there are nine schools in District 731, one school in District 742, two schools in District 768, and so on as indicated by `fpc2`. For illustration purposes, the object `apiclus2_slim` has been created from `apiclus2`; it subsets the data to only the necessary columns and sorts them.
```
apiclus2_slim <-
apiclus2 %>%
as_tibble() %>%
arrange(desc(dnum), snum) %>%
select(cds, dnum, snum, fpc1, fpc2, pw)
apiclus2_slim
```
```
## # A tibble: 126 × 6
## cds dnum snum fpc1 fpc2 pw
## <chr> <int> <dbl> <dbl> <int[1d]> <dbl>
## 1 47704826050942 795 5552 757 1 18.9
## 2 07618126005169 781 530 757 6 22.7
## 3 07618126005177 781 531 757 6 22.7
## 4 07618126005185 781 532 757 6 22.7
## 5 07618126005193 781 533 757 6 22.7
## 6 07618126005243 781 535 757 6 22.7
## 7 19650786023337 768 2371 757 2 18.9
## 8 19650786023345 768 2372 757 2 18.9
## 9 54722076054423 742 5898 757 1 18.9
## 10 50712906053086 731 5781 757 9 34.1
## # ℹ 116 more rows
```
To specify this design in R, we use the following:
```
apiclus2_des <- apiclus2_slim %>%
as_survey_design(
ids = c(dnum, snum),
fpc = c(fpc1, fpc2),
weights = pw
)
apiclus2_des
```
```
## 2 - level Cluster Sampling design
## With (40, 126) clusters.
## Called via srvyr
## Sampling variables:
## - ids: `dnum + snum`
## - fpc: `fpc1 + fpc2`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), snum (dbl), fpc1 (dbl), fpc2 (int[1d]), pw
## (dbl)
```
```
summary(apiclus2_des)
```
```
## 2 - level Cluster Sampling design
## With (40, 126) clusters.
## Called via srvyr
## Probabilities:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00367 0.03774 0.05284 0.04239 0.05284 0.05284
## Population size (PSUs): 757
## Data variables:
## [1] "cds" "dnum" "snum" "fpc1" "fpc2" "pw"
```
The design objects are described as “2 \- level Cluster Sampling design,” and include the ids (cluster), FPC, and weight variables. The summary notes that the sample includes 40 first\-level clusters (PSUs), which are school districts, and 126 second\-level clusters (SSUs), which are schools. Additionally, the summary includes a numeric summary of the probabilities of selection and the population size (number of PSUs) as 757\.
10\.3 Combining sampling methods
--------------------------------
SRS, stratified, and clustered designs are the backbone of sampling designs, and the features are often combined in one design. Additionally, rather than using SRS for selection, other sampling mechanisms are commonly used, such as probability proportional to size (PPS), systematic sampling, or selection with unequal probabilities, which are briefly described here. In PPS sampling, a size measure is constructed for each unit (e.g., the population of the PSU or the number of occupied housing units), and units with larger size measures are more likely to be sampled. Systematic sampling is commonly used to ensure representation across a population. Units are sorted by a feature, and then every \\(k^{\\text{th}}\\) unit is selected from a random start point so the sample is spread across the population. In addition to PPS, other unequal probabilities of selection may be used. For example, in a study of establishments (e.g., businesses or public institutions) that conducts a survey every year, an establishment that recently participated (e.g., participated last year) may have a reduced chance of selection in a subsequent round to reduce the burden on the establishment. To learn more about sampling designs, refer to Valliant, Dever, and Kreuter ([2013](#ref-valliant2013practical)), Cox et al. ([2011](#ref-cox2011business)), Cochran ([1977](#ref-cochran1977sampling)), and Deming ([1991](#ref-deming1991sample)).
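As a small illustration of the systematic selection just described, the sketch below draws a 1\-in\-31 systematic sample from the APIP population frame after sorting by enrollment. This is for intuition only, not how any of the surveys in this chapter were drawn; it assumes `apipop` is available via `data(api)` from the {survey} package and that it contains an `enroll` variable.
```
# Minimal sketch: 1-in-k systematic sample from a sorted frame.
set.seed(52)
k <- 31
start <- sample(k, 1)                         # random start between 1 and k
apisys <- apipop %>%
  as_tibble() %>%
  arrange(enroll) %>%                         # sort the frame by a feature
  slice(seq(from = start, to = n(), by = k))  # take every k-th unit
```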
A common method of sampling is to stratify PSUs, select PSUs within the stratum using PPS selection, and then select units within the PSUs either with SRS or PPS. Reading survey documentation is an important first step in survey analysis to understand the design of the survey we are using and variables necessary to specify the design. Good documentation highlights the variables necessary to specify the design. This is often found in the user guide, methodology report, analysis guide, or technical documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more details).
### Example
For example, the [2017\-2019 National Survey of Family Growth](https://www.cdc.gov/nchs/data/nsfg/NSFG-2017-2019-Sample-Design-Documentation-508.pdf) had a stratified multi\-stage area probability sample:
1. In the first stage, PSUs are counties or collections of counties and are stratified by Census region/division, size (population), and MSA status. Within each stratum, PSUs were selected via PPS.
2. In the second stage, neighborhoods were selected within the sampled PSUs using PPS selection.
3. In the third stage, housing units were selected within the sampled neighborhoods.
4. In the fourth stage, a person was randomly chosen among eligible persons within the selected housing units using unequal probabilities based on the person’s age and sex.
The public use file does not include all these levels of selection and instead has pseudo\-strata and pseudo\-clusters, which are the variables used in R to specify the design. As specified on page 4 of the documentation, the stratum variable is `SEST`, the cluster variable is `SECU`, and the weight variable is `WGT2017_2019`. Thus, to specify this design in R, we use the following syntax:
```
nsfg_des <- nsfgdata %>%
as_survey_design(ids = SECU,
strata = SEST,
weights = WGT2017_2019)
```
10\.4 Replicate weights
-----------------------
Replicate weights are often included on analysis files instead of, or in addition to, the design variables (strata and PSUs). Replicate weights are used as another method to estimate variability. Often, researchers choose to use replicate weights to avoid publishing design variables (strata or clustering variables) as a measure to reduce the risk of disclosure. There are several types of replicate weights, including balanced repeated replication (BRR), Fay’s BRR, jackknife, and bootstrap methods. An overview of the process for using replicate weights is as follows:
1. Divide the sample into subsample replicates that mirror the design of the sample
2. Calculate weights for each replicate using the same procedures for the full\-sample weight (i.e., nonresponse and post\-stratification)
3. Calculate estimates for each replicate using the same method as the full\-sample estimate
4. Calculate the estimated variance, which is proportional to the variance of the replicate estimates
The different types of replicate weights differ mainly in step 1 (how the sample is divided into subsamples) and step 4 (which multiplication factors, or scales, are used in the variance calculation). The general format for the standard error is:
\\\[ \\sqrt{\\alpha \\sum\_{r\=1}^R \\alpha\_r (\\hat{\\theta}\_r \- \\hat{\\theta})^2 }\\]
where \\(R\\) is the number of replicates, \\(\\alpha\\) is a constant that depends on the replication method, \\(\\alpha\_r\\) is a factor associated with each replicate, \\(\\hat{\\theta}\\) is the weighted estimate based on the full sample, and \\(\\hat{\\theta}\_r\\) is the weighted estimate of \\(\\theta\\) based on the \\(r^{\\text{th}}\\) replicate.
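This generic formula can be written directly as a small helper; the sketch below is for illustration only (not code from this book), and the constants `alpha` and `alpha_r` depend on the replication method and come from the survey documentation.
```
# Minimal sketch: generic replicate-weight standard error.
# theta_hat is the full-sample estimate, theta_r is the vector of replicate
# estimates, alpha is the overall constant, and alpha_r the per-replicate
# factors (a scalar or a vector of length R).
rep_se <- function(theta_hat, theta_r, alpha, alpha_r = 1) {
  sqrt(alpha * sum(alpha_r * (theta_r - theta_hat)^2))
}
```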
To create the design object for surveys with replicate weights, we use `as_survey_rep()` instead of `as_survey_design()`, which we use for the common sampling designs in the sections above.
### 10\.4\.1 Balanced Repeated Replication method
The balanced repeated replication (BRR) method requires a stratified sample design with two PSUs in each stratum. Each replicate is constructed by deleting one PSU per stratum using a Hadamard matrix. For the PSU that is included, the weight is generally multiplied by two but may have other adjustments, such as post\-stratification. A Hadamard matrix is a special square matrix with entries of \+1 or –1 with mutually orthogonal rows. Hadamard matrices must have one row, two rows, or a multiple of four rows. The size of the Hadamard matrix is determined by the first multiple of 4 greater than or equal to the number of strata. For example, if a survey had seven strata, the Hadamard matrix would be an \\(8\\times8\\) matrix. Additionally, a survey with eight strata would also have an \\(8\\times8\\) Hadamard matrix. The columns in the matrix specify the strata, and the rows specify the replicate. In each replicate (row), a \+1 means to use the first PSU, and a –1 means to use the second PSU in the estimate. For example, here is a \\(4\\times4\\) Hadamard matrix:
\\\[ \\begin{array}{rrrr} \+1 \&\+1 \&\+1 \&\+1\\\\ \+1\&\-1\&\+1\&\-1\\\\ \+1\&\+1\&\-1\&\-1\\\\ \+1 \&\-1\&\-1\&\+1 \\end{array} \\]
In the first replicate (row), all the values are \+1; so in each stratum, the first PSU would be used in the estimate. In the second replicate, the first PSU would be used in strata 1 and 3, while the second PSU would be used in strata 2 and 4\. In the third replicate, the first PSU would be used in strata 1 and 2, while the second PSU would be used in strata 3 and 4\. Finally, in the fourth replicate, the first PSU would be used in strata 1 and 4, while the second PSU would be used in strata 2 and 3\. For more information about Hadamard matrices, see Wolter ([2007](#ref-wolter2007introduction)). Note that supplied BRR weights from a data provider already incorporate this adjustment, and the {survey} package generates the Hadamard matrix, if necessary, for calculating BRR weights; so an analyst does not need to create or provide the matrix.
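For intuition, the 4 x 4 matrix shown above can be reproduced in base R with a Sylvester\-style construction; this is purely illustrative, since, as noted, the {survey} package generates any needed Hadamard matrix itself.
```
# Minimal sketch: build the 4 x 4 Hadamard matrix shown above.
H2 <- matrix(c(1, 1, 1, -1), nrow = 2)  # 2 x 2 Hadamard matrix
H4 <- kronecker(H2, H2)                 # rows = replicates, columns = strata
H4                                      # +1 = use first PSU, -1 = use second
```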
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Using the generic notation above, \\(\\alpha\=\\frac{1}{R}\\) and \\(\\alpha\_r\=1\\) for each \\(r\\). The standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
Specifying replicate weights in R requires specifying the type of replicate weights, the main weight variable, the replicate weight variables, and other options. One of the key options is for the mean squared error (MSE). If `mse=TRUE`, variances are computed around the point estimate \\((\\hat{\\theta})\\); whereas if `mse=FALSE`, variances are computed around the mean of the replicates \\((\\bar{\\theta})\\) instead, which looks like this:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\bar{\\theta}\\right)^2}\\] where \\\[\\bar{\\theta}\=\\frac{1}{R}\\sum\_{r\=1}^R \\hat{\\theta}\_r\\]
The default option for `mse` is to use the global option of “survey.replicates.mse,” which is set to `FALSE` initially unless a user changes it. To determine if `mse` should be set to `TRUE` or `FALSE`, read the survey documentation. If there is no indication in the survey documentation for BRR, we recommend setting `mse` to `TRUE`, as this is the default in other software (e.g., SAS, SUDAAN).
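The difference between the two options amounts only to the value the squared deviations are centered on, as the small sketch below shows (placeholder inputs, not code from this book).
```
# Minimal sketch: BRR standard error with the two centering choices.
# theta_hat is the full-sample estimate and theta_r the replicate estimates.
brr_se <- function(theta_hat, theta_r, mse = TRUE) {
  center <- if (mse) theta_hat else mean(theta_r)  # mse = TRUE centers on theta_hat
  sqrt(mean((theta_r - center)^2))
}
```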
#### The syntax
Replicate weights generally come in groups and are sequentially numbered, such as PWGTP1, PWGTP2, …, PWGTP80 for the person weights in the American Community Survey (ACS) ([U.S. Census Bureau 2021](#ref-acs-pums-2021)) or BRRWT1, BRRWT2, …, BRRWT96 in the 2015 Residential Energy Consumption Survey (RECS) ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)). This makes it easy to use some of the [tidy selection](https://dplyr.tidyverse.org/reference/dplyr_tidy_select.html) functions in R.
To specify a BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as BRR (`type = BRR`), and whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated WT1, WT2, …, WT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = all_of(str_c("WT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "BRR",
mse = TRUE)
```
If a dataset had WT for the main weight and 20 BRR weights indicated as REPWT1, REPWT2, …, REPWT20, we can use the following syntax (both options are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = all_of(str_c("REPWT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
mse = TRUE)
```
If the replicate weight variables are in the file consecutively, we can also use the following syntax:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = REPWT1:REPWT20,
type = "BRR",
mse = TRUE)
```
Typically, each replicate weight sums to a value similar to the main weight, as both the replicate weights and the main weight are supposed to provide population estimates. Rarely, an alternative approach is used in which the BRR replicate weights are supplied only as adjustment factors of 0 or 2 and do not incorporate the main weight. This would be indicated in the documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more information on reading documentation). In this case, the replicate weights are not combined with the main weight, and the option `combined_weights = FALSE` should be specified, as the default value for this argument is `TRUE`. This specific syntax is shown below:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
combined_weights = FALSE,
mse = TRUE)
```
#### Example
The {survey} package includes a data example from section 12\.2 of Levy and Lemeshow ([2013](#ref-levy2013sampling)). In this fictional data, two out of five ambulance stations were sampled from each of three emergency service areas (ESAs); thus BRR weights are appropriate with two PSUs (stations) sampled in each stratum (ESA). In the code below, we create BRR weights as was done by Levy and Lemeshow ([2013](#ref-levy2013sampling)).
```
scdbrr <- scd %>%
as_tibble() %>%
mutate(
wt = 5 / 2,
rep1 = 2 * c(1, 0, 1, 0, 1, 0),
rep2 = 2 * c(1, 0, 0, 1, 0, 1),
rep3 = 2 * c(0, 1, 1, 0, 0, 1),
rep4 = 2 * c(0, 1, 0, 1, 1, 0)
)
scdbrr
```
```
## # A tibble: 6 × 9
## ESA ambulance arrests alive wt rep1 rep2 rep3 rep4
## <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 1 120 25 2.5 2 2 0 0
## 2 1 2 78 24 2.5 0 0 2 2
## 3 2 1 185 30 2.5 2 0 2 0
## 4 2 2 228 49 2.5 0 2 0 2
## 5 3 1 670 80 2.5 2 0 0 2
## 6 3 2 530 70 2.5 0 2 2 0
```
To specify the BRR weights, we use the following syntax:
```
scdbrr_des <- scdbrr %>%
as_survey_rep(
type = "BRR",
repweights = starts_with("rep"),
combined_weights = FALSE,
weight = wt
)
scdbrr_des
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
```
```
summary(scdbrr_des)
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
## Variables:
## [1] "ESA" "ambulance" "arrests" "alive" "wt"
## [6] "rep1" "rep2" "rep3" "rep4"
```
Note that `combined_weights` was specified as `FALSE` because these weights are simply specified as 0 and 2 and do not incorporate the overall weight. When printing the object, the type of replication is noted as Balanced Repeated Replicates, and the replicate weights and the weight variable are specified. Additionally, the summary lists the variables included in the data and design object.
### 10\.4\.2 Fay’s BRR method
Fay’s BRR method for replicate weights is similar to the standard BRR method in that it uses a Hadamard matrix to construct the replicates. However, rather than deleting PSUs in each replicate, Fay’s BRR multiplies the main weight by \\(\\rho\\) for half of the PSUs and by \\((2\-\\rho)\\) for the other half, where \\(0 \\le \\rho \< 1\\). Note that when \\(\\rho\=0\\), this is equivalent to the standard BRR weights, and as \\(\\rho\\) approaches 1, the method becomes more similar to the jackknife discussed in Section [10\.4\.3](c10-sample-designs-replicate-weights.html#samp-jackknife). To obtain the value of \\(\\rho\\), it is necessary to read the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)).
#### The math
The standard error estimate for \\(\\hat{\\theta}\\) is slightly different than the BRR, due to the addition of the multiplier of \\(\\rho\\). Using the generic notation above, \\(\\alpha\=\\frac{1}{R \\left(1\-\\rho\\right)^2}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). The standard error is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R (1\-\\rho)^2} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
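As a minimal sketch with hypothetical values (this is not how {survey} implements it internally), the Fay factors applied to a pair of PSUs and the \\((1\-\\rho)^2\\) term in the standard error can be illustrated as follows:
```
# Hypothetical values illustrating Fay's BRR
rho <- 0.3
h <- c(+1, -1)         # Hadamard entries for the two PSUs in a stratum
1 + (1 - rho) * h      # replicate factors: 2 - rho and rho
1 + (1 - 0) * h        # with rho = 0 this reduces to the 0/2 factors of BRR

# Standard error with the Fay scaling, using hypothetical replicate estimates
theta_hat <- 52.3
theta_r <- c(51.8, 52.9, 53.1, 51.6)
R <- length(theta_r)
sqrt(sum((theta_r - theta_hat)^2) / (R * (1 - rho)^2))
```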
#### The syntax
The syntax is very similar for BRR and Fay’s BRR. To specify a Fay’s BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as Fay’s BRR (`type = "Fay"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and Fay’s multiplier (`rho`). For example, if a dataset had WT0 for the main weight, 20 BRR weights indicated as WT1, WT2, …, WT20, and a Fay’s multiplier of 0\.3, we use the following syntax:
```
fay_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "Fay",
mse = TRUE,
rho = 0.3)
```
#### Example
The 2015 RECS ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)) uses Fay’s BRR weights with the final weight as NWEIGHT and replicate weights as BRRWT1 \- BRRWT96, and the documentation specifies a Fay’s multiplier of 0\.5\. On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total energy cost, TOTSQFT\_EN is the total square footage of the residence, and REGIONC is the census region. We use the 2015 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter). To specify the design for the `recs_2015` data, we use the following syntax:
```
recs_2015_des <- recs_2015 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = BRRWT1:BRRWT96,
type = "Fay",
rho = 0.5,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_2015_des
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
```
```
summary(recs_2015_des)
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
In specifying the design, the `variables` option was also used to specify which variables might be used in analyses. This is optional but can make our object smaller and easier to work with. When printing the design object or looking at the summary, the replicate weight type is reiterated as `Fay's variance method (rho= 0.5) with 96 replicates and MSE variances`, and the variables are included. No weight or probability summary is included in this output, unlike some of the other design objects we have seen.
### 10\.4\.3 Jackknife method
There are three jackknife estimators implemented in {srvyr}: jackknife 1 (JK1\), jackknife n (JKn), and jackknife 2 (JK2\). The JK1 method can be used for unstratified designs, and replicates are created by removing one PSU at a time so the number of replicates is the same as the number of PSUs. If there is no clustering, then the PSU is the ultimate sampling unit (e.g., students).
The JKn method is used for stratified designs and requires two or more PSUs per stratum. In this case, each replicate is created by deleting one PSU from a single stratum, so the number of replicates is the number of total PSUs across all strata. The JK2 method is a special case of JKn when there are exactly 2 PSUs sampled per stratum. For variance estimation, we also need to specify the scaling constants.
#### The math
Using the generic notation above, \\(\\alpha\=\\frac{R\-1}{R}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). For the JK1 method, the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{R\-1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
The JKn method is a bit more complex, but the coefficients are generally provided with restricted and public\-use files. For each replicate, one stratum has a PSU removed, and the weights are adjusted by \\(n\_h/(n\_h\-1\)\\) where \\(n\_h\\) is the number of PSUs in stratum \\(h\\). The coefficients in other strata are set to 1\. Denote the coefficient that results from this process for replicate \\(r\\) as \\(\\alpha\_r\\), then the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\sum\_{r\=1}^R \\alpha\_r \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
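As a minimal sketch with hypothetical replicate estimates and coefficients (in practice, the \\(\\alpha\_r\\) values come from the survey documentation), the JKn formula can be computed directly:
```
# Hypothetical replicate-specific coefficients and estimates for a JKn design
alpha_r <- c(0.5, 0.5, 0.5, 0.5)   # alpha_r values taken from documentation
theta_hat <- 52.3
theta_r <- c(52.0, 52.6, 52.1, 52.5)
sqrt(sum(alpha_r * (theta_r - theta_hat)^2))
```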
#### The syntax
To specify the jackknife method, we use the survey documentation to understand the type of jackknife (1, n, or 2\) and the multiplier. In the syntax, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as jackknife 1 (`type = "JK1"`), n (`type = "JKN"`), or 2 (`type = "JK2"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if the survey uses the jackknife 1 method with a multiplier of \\(\\alpha\=(R\-1\)/R\=19/20\=0\.95\\), and the dataset has WT0 for the main weight and 20 replicate weights indicated as WT1, WT2, …, WT20, we use the following syntax:
```
jk1_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JK1",
mse = TRUE,
scale = 0.95
)
```
For a jackknife n method, we need to specify the multiplier for all replicates. In this case, we use the `rscales` argument to specify each one. The documentation provides details on what the multipliers (\\(\\alpha\_r\\)) are, and they may be the same for all replicates. For example, consider a case where \\(\\alpha\_r\=0\.1\\) for all replicates, and the dataset had WT0 for the main weight and had 20 replicate weights indicated as WT1, WT2, …, WT20\. We specify the type as `type = "JKN"`, and the multiplier as `rscales=rep(0.1,20)`:
```
jkn_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JKN",
mse = TRUE,
rscales = rep(0.1, 20)
)
```
#### Example
The 2020 RECS ([U.S. Energy Information Administration 2023c](#ref-recs-2020-micro)) uses jackknife weights with the final weight as NWEIGHT and replicate weights as NWEIGHT1 \- NWEIGHT60 with a scale of \\((R\-1\)/R\=59/60\\). On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total cost of energy, TOTSQFT\_EN is the total square footage of the residence, and REGIONC is the census region. We use the 2020 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter).
To specify this design, we use the following syntax:
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_des
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
```
```
summary(recs_des)
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
When printing the design object or looking at the summary, the replicate weight type is reiterated as `Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances`, and the variables are included. No weight or probability summary is included.
### 10\.4\.4 Bootstrap method
In bootstrap resampling, replicates are created by drawing simple random samples with replacement (SRSWR) of the PSUs. If there are \\(A\\) PSUs in the sample, each replicate is created by selecting a random sample of \\(A\\) PSUs with replacement. Each replicate is created independently, and the weights for each replicate are adjusted to reflect the population, generally using the same adjustments that were applied to the analysis weight.
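The resampling step itself can be sketched in a few lines of base R with hypothetical PSU identifiers; production bootstrap weights also rescale each replicate to the population, which is omitted here:
```
# Minimal sketch of forming one bootstrap replicate from hypothetical PSU ids
psu_ids <- c(61, 135, 178, 197, 255)   # A = 5 sampled PSUs
set.seed(662152)
draws <- sample(psu_ids, size = length(psu_ids), replace = TRUE)
# Number of times each PSU is drawn in this replicate; these counts feed into
# the replicate weight for every unit in that PSU
table(factor(draws, levels = psu_ids))
```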
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Then the standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\alpha \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
where \\(\\alpha\\) is the scaling constant. Note that the scaling constant (\\(\\alpha\\)) is provided in the survey documentation, as there are many types of bootstrap methods that generate custom scaling constants.
#### The syntax
To specify a bootstrap method, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as bootstrap (`type = "bootstrap"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if a dataset had WT0 for the main weight, 20 bootstrap weights indicated as WT1, WT2, …, WT20, and a multiplier of \\(\\alpha\=.02\\), we use the following syntax:
```
bs_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "bootstrap",
mse = TRUE,
scale = .02
)
```
#### Example
Returning to the APIP example, we are going to create a dataset with bootstrap weights to use as an illustration. In this example, we construct a single\-stage cluster design with 50 replicate weights[28](#fn28).
```
apiclus1_slim <-
apiclus1 %>%
as_tibble() %>%
arrange(dnum) %>%
select(cds, dnum, fpc, pw)
set.seed(662152)
apibw <-
bootweights(
psu = apiclus1_slim$dnum,
strata = rep(1, nrow(apiclus1_slim)),
fpc = apiclus1_slim$fpc,
replicates = 50
)
bwmata <-
apibw$repweights$weights[apibw$repweights$index, ] * apiclus1_slim$pw
apiclus1_slim <- bwmata %>%
as.data.frame() %>%
set_names(str_c("pw", 1:50)) %>%
cbind(apiclus1_slim) %>%
as_tibble() %>%
select(cds, dnum, fpc, pw, everything())
apiclus1_slim
```
```
## # A tibble: 183 × 54
## cds dnum fpc pw pw1 pw2 pw3 pw4 pw5 pw6 pw7
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 2 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 3 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 4 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 5 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 6 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 7 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 8 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 9 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 10 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## # ℹ 173 more rows
## # ℹ 43 more variables: pw8 <dbl>, pw9 <dbl>, pw10 <dbl>, pw11 <dbl>,
## # pw12 <dbl>, pw13 <dbl>, pw14 <dbl>, pw15 <dbl>, pw16 <dbl>,
## # pw17 <dbl>, pw18 <dbl>, pw19 <dbl>, pw20 <dbl>, pw21 <dbl>,
## # pw22 <dbl>, pw23 <dbl>, pw24 <dbl>, pw25 <dbl>, pw26 <dbl>,
## # pw27 <dbl>, pw28 <dbl>, pw29 <dbl>, pw30 <dbl>, pw31 <dbl>,
## # pw32 <dbl>, pw33 <dbl>, pw34 <dbl>, pw35 <dbl>, pw36 <dbl>, …
```
The output of `apiclus1_slim` includes the same variables we have seen in other APIP examples (see Table [10\.1](c10-sample-designs-replicate-weights.html#tab:apidata)), but now it additionally includes bootstrap weights `pw1`, …, `pw50`. When creating the survey design object, we use the bootstrap weights as the replicate weights. Additionally, with replicate weights we need to include the scale (\\(\\alpha\\)). For this example, we created:
\\\[\\alpha\=\\frac{A}{(A\-1\)(R\-1\)}\=\\frac{15}{(15\-1\)\*(50\-1\)}\=0\.02186589\\]
where \\(A\\) is the average number of PSUs per stratum, and \\(R\\) is the number of replicates. There is only 1 stratum and the number of clusters/PSUs is 15 so \\(A\=15\\). Using this information, we specify the design object as:
```
api1_bs_des <- apiclus1_slim %>%
as_survey_rep(
weights = pw,
repweights = pw1:pw50,
type = "bootstrap",
scale = 0.02186589,
mse = TRUE
)
api1_bs_des
```
```
## Call: Called via srvyr
## Survey bootstrap with 50 replicates and MSE variances.
## Sampling variables:
## - repweights: `pw1 + pw2 + pw3 + pw4 + pw5 + pw6 + pw7 + pw8 + pw9 +
## pw10 + pw11 + pw12 + pw13 + pw14 + pw15 + pw16 + pw17 + pw18 + pw19
## + pw20 + pw21 + pw22 + pw23 + pw24 + pw25 + pw26 + pw27 + pw28 +
## pw29 + pw30 + pw31 + pw32 + pw33 + pw34 + pw35 + pw36 + pw37 + pw38
## + pw39 + pw40 + pw41 + pw42 + pw43 + pw44 + pw45 + pw46 + pw47 +
## pw48 + pw49 + pw50`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), fpc (dbl), pw (dbl), pw1 (dbl), pw2 (dbl),
## pw3 (dbl), pw4 (dbl), pw5 (dbl), pw6 (dbl), pw7 (dbl), pw8 (dbl),
## pw9 (dbl), pw10 (dbl), pw11 (dbl), pw12 (dbl), pw13 (dbl), pw14
## (dbl), pw15 (dbl), pw16 (dbl), pw17 (dbl), pw18 (dbl), pw19 (dbl),
## pw20 (dbl), pw21 (dbl), pw22 (dbl), pw23 (dbl), pw24 (dbl), pw25
## (dbl), pw26 (dbl), pw27 (dbl), pw28 (dbl), pw29 (dbl), pw30 (dbl),
## pw31 (dbl), pw32 (dbl), pw33 (dbl), pw34 (dbl), pw35 (dbl), pw36
## (dbl), pw37 (dbl), pw38 (dbl), pw39 (dbl), pw40 (dbl), pw41 (dbl),
## pw42 (dbl), pw43 (dbl), pw44 (dbl), pw45 (dbl), pw46 (dbl), pw47
## (dbl), pw48 (dbl), pw49 (dbl), pw50 (dbl)
```
```
summary(api1_bs_des)
```
```
## Call: Called via srvyr
## Survey bootstrap with 50 replicates and MSE variances.
## Sampling variables:
## - repweights: `pw1 + pw2 + pw3 + pw4 + pw5 + pw6 + pw7 + pw8 + pw9 +
## pw10 + pw11 + pw12 + pw13 + pw14 + pw15 + pw16 + pw17 + pw18 + pw19
## + pw20 + pw21 + pw22 + pw23 + pw24 + pw25 + pw26 + pw27 + pw28 +
## pw29 + pw30 + pw31 + pw32 + pw33 + pw34 + pw35 + pw36 + pw37 + pw38
## + pw39 + pw40 + pw41 + pw42 + pw43 + pw44 + pw45 + pw46 + pw47 +
## pw48 + pw49 + pw50`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), fpc (dbl), pw (dbl), pw1 (dbl), pw2 (dbl),
## pw3 (dbl), pw4 (dbl), pw5 (dbl), pw6 (dbl), pw7 (dbl), pw8 (dbl),
## pw9 (dbl), pw10 (dbl), pw11 (dbl), pw12 (dbl), pw13 (dbl), pw14
## (dbl), pw15 (dbl), pw16 (dbl), pw17 (dbl), pw18 (dbl), pw19 (dbl),
## pw20 (dbl), pw21 (dbl), pw22 (dbl), pw23 (dbl), pw24 (dbl), pw25
## (dbl), pw26 (dbl), pw27 (dbl), pw28 (dbl), pw29 (dbl), pw30 (dbl),
## pw31 (dbl), pw32 (dbl), pw33 (dbl), pw34 (dbl), pw35 (dbl), pw36
## (dbl), pw37 (dbl), pw38 (dbl), pw39 (dbl), pw40 (dbl), pw41 (dbl),
## pw42 (dbl), pw43 (dbl), pw44 (dbl), pw45 (dbl), pw46 (dbl), pw47
## (dbl), pw48 (dbl), pw49 (dbl), pw50 (dbl)
## Variables:
## [1] "cds" "dnum" "fpc" "pw" "pw1" "pw2" "pw3" "pw4" "pw5"
## [10] "pw6" "pw7" "pw8" "pw9" "pw10" "pw11" "pw12" "pw13" "pw14"
## [19] "pw15" "pw16" "pw17" "pw18" "pw19" "pw20" "pw21" "pw22" "pw23"
## [28] "pw24" "pw25" "pw26" "pw27" "pw28" "pw29" "pw30" "pw31" "pw32"
## [37] "pw33" "pw34" "pw35" "pw36" "pw37" "pw38" "pw39" "pw40" "pw41"
## [46] "pw42" "pw43" "pw44" "pw45" "pw46" "pw47" "pw48" "pw49" "pw50"
```
As with other replicate design objects, when printing the object or looking at the summary, the replicate weights are provided along with the data variables.
### 10\.4\.1 Balanced Repeated Replication method
The balanced repeated replication (BRR) method requires a stratified sample design with two PSUs in each stratum. Each replicate is constructed by deleting one PSU per stratum using a Hadamard matrix. For the PSU that is included, the weight is generally multiplied by two but may have other adjustments, such as post\-stratification. A Hadamard matrix is a special square matrix with entries of \+1 or –1 with mutually orthogonal rows. Hadamard matrices must have one row, two rows, or a multiple of four rows. The size of the Hadamard matrix is determined by the first multiple of 4 greater than or equal to the number of strata. For example, if a survey had seven strata, the Hadamard matrix would be an \\(8\\times8\\) matrix. Additionally, a survey with eight strata would also have an \\(8\\times8\\) Hadamard matrix. The columns in the matrix specify the strata, and the rows specify the replicate. In each replicate (row), a \+1 means to use the first PSU, and a –1 means to use the second PSU in the estimate. For example, here is a \\(4\\times4\\) Hadamard matrix:
\\\[ \\begin{array}{rrrr} \+1 \&\+1 \&\+1 \&\+1\\\\ \+1\&\-1\&\+1\&\-1\\\\ \+1\&\+1\&\-1\&\-1\\\\ \+1 \&\-1\&\-1\&\+1 \\end{array} \\]
In the first replicate (row), all the values are \+1; so in each stratum, the first PSU would be used in the estimate. In the second replicate, the first PSU would be used in strata 1 and 3, while the second PSU would be used in strata 2 and 4\. In the third replicate, the first PSU would be used in strata 1 and 2, while the second PSU would be used in strata 3 and 4\. Finally, in the fourth replicate, the first PSU would be used in strata 1 and 4, while the second PSU would be used in strata 2 and 3\. For more information about Hadamard matrices, see Wolter ([2007](#ref-wolter2007introduction)). Note that supplied BRR weights from a data provider already incorporate this adjustment, and the {survey} package generates the Hadamard matrix, if necessary, for calculating BRR weights; so an analyst does not need to create or provide the matrix.
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Using the generic notation above, \\(\\alpha\=\\frac{1}{R}\\) and \\(\\alpha\_r\=1\\) for each \\(r\\). The standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
Specifying replicate weights in R requires specifying the type of replicate weights, the main weight variable, the replicate weight variables, and other options. One of the key options is for the mean squared error (MSE). If `mse=TRUE`, variances are computed around the point estimate \\((\\hat{\\theta})\\); whereas if `mse=FALSE`, variances are computed around the mean of the replicates \\((\\bar{\\theta})\\) instead, which looks like this:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\bar{\\theta}\\right)^2}\\] where \\\[\\bar{\\theta}\=\\frac{1}{R}\\sum\_{r\=1}^R \\hat{\\theta}\_r\\]
The default option for `mse` is to use the global option of “survey.replicates.mse,” which is set to `FALSE` initially unless a user changes it. To determine if `mse` should be set to `TRUE` or `FALSE`, read the survey documentation. If there is no indication in the survey documentation for BRR, we recommend setting `mse` to `TRUE`, as this is the default in other software (e.g., SAS, SUDAAN).
#### The syntax
Replicate weights generally come in groups and are sequentially numbered, such as PWGTP1, PWGTP2, …, PWGTP80 for the person weights in the American Community Survey (ACS) ([U.S. Census Bureau 2021](#ref-acs-pums-2021)) or BRRWT1, BRRWT2, …, BRRWT96 in the 2015 Residential Energy Consumption Survey (RECS) ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)). This makes it easy to use some of the [tidy selection](https://dplyr.tidyverse.org/reference/dplyr_tidy_select.html) functions in R.
To specify a BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as BRR (`type = BRR`), and whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated WT1, WT2, …, WT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = all_of(str_c("WT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "BRR",
mse = TRUE)
```
If a dataset had WT for the main weight and had 20 BRR weights indicated REPWT1, REPWT2, …, REPWT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = all_of(str_c("REPWT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
mse = TRUE)
```
If the replicate weight variables are in the file consecutively, we can also use the following syntax:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = REPWT1:REPWT20,
type = "BRR",
mse = TRUE)
```
Typically, each replicate weight sums to a value similar to the main weight, as both the replicate weights and the main weight are supposed to provide population estimates. Rarely, an alternative method is used where the replicate weights have values of 0 or 2 in the case of BRR weights. This would be indicated in the documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more information on reading documentation). In this case, the replicate weights are not combined, and the option `combined_weights = FALSE` should be indicated, as the default value for this argument is `TRUE`. This specific syntax is shown below:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
combined_weights = FALSE,
mse = TRUE)
```
#### Example
The {survey} package includes a data example from section 12\.2 of Levy and Lemeshow ([2013](#ref-levy2013sampling)). In this fictional data, two out of five ambulance stations were sampled from each of three emergency service areas (ESAs); thus BRR weights are appropriate with two PSUs (stations) sampled in each stratum (ESA). In the code below, we create BRR weights as was done by Levy and Lemeshow ([2013](#ref-levy2013sampling)).
```
scdbrr <- scd %>%
as_tibble() %>%
mutate(
wt = 5 / 2,
rep1 = 2 * c(1, 0, 1, 0, 1, 0),
rep2 = 2 * c(1, 0, 0, 1, 0, 1),
rep3 = 2 * c(0, 1, 1, 0, 0, 1),
rep4 = 2 * c(0, 1, 0, 1, 1, 0)
)
scdbrr
```
```
## # A tibble: 6 × 9
## ESA ambulance arrests alive wt rep1 rep2 rep3 rep4
## <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 1 120 25 2.5 2 2 0 0
## 2 1 2 78 24 2.5 0 0 2 2
## 3 2 1 185 30 2.5 2 0 2 0
## 4 2 2 228 49 2.5 0 2 0 2
## 5 3 1 670 80 2.5 2 0 0 2
## 6 3 2 530 70 2.5 0 2 2 0
```
To specify the BRR weights, we use the following syntax:
```
scdbrr_des <- scdbrr %>%
as_survey_rep(
type = "BRR",
repweights = starts_with("rep"),
combined_weights = FALSE,
weight = wt
)
scdbrr_des
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
```
```
summary(scdbrr_des)
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
## Variables:
## [1] "ESA" "ambulance" "arrests" "alive" "wt"
## [6] "rep1" "rep2" "rep3" "rep4"
```
Note that `combined_weights` was specified as `FALSE` because these weights are simply specified as 0 and 2 and do not incorporate the overall weight. When printing the object, the type of replication is noted as Balanced Repeated Replicates, and the replicate weights and the weight variable are specified. Additionally, the summary lists the variables included in the data and design object.
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Using the generic notation above, \\(\\alpha\=\\frac{1}{R}\\) and \\(\\alpha\_r\=1\\) for each \\(r\\). The standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
Specifying replicate weights in R requires specifying the type of replicate weights, the main weight variable, the replicate weight variables, and other options. One of the key options is for the mean squared error (MSE). If `mse=TRUE`, variances are computed around the point estimate \\((\\hat{\\theta})\\); whereas if `mse=FALSE`, variances are computed around the mean of the replicates \\((\\bar{\\theta})\\) instead, which looks like this:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\bar{\\theta}\\right)^2}\\] where \\\[\\bar{\\theta}\=\\frac{1}{R}\\sum\_{r\=1}^R \\hat{\\theta}\_r\\]
The default option for `mse` is to use the global option of “survey.replicates.mse,” which is set to `FALSE` initially unless a user changes it. To determine if `mse` should be set to `TRUE` or `FALSE`, read the survey documentation. If there is no indication in the survey documentation for BRR, we recommend setting `mse` to `TRUE`, as this is the default in other software (e.g., SAS, SUDAAN).
#### The syntax
Replicate weights generally come in groups and are sequentially numbered, such as PWGTP1, PWGTP2, …, PWGTP80 for the person weights in the American Community Survey (ACS) ([U.S. Census Bureau 2021](#ref-acs-pums-2021)) or BRRWT1, BRRWT2, …, BRRWT96 in the 2015 Residential Energy Consumption Survey (RECS) ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)). This makes it easy to use some of the [tidy selection](https://dplyr.tidyverse.org/reference/dplyr_tidy_select.html) functions in R.
To specify a BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as BRR (`type = BRR`), and whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated WT1, WT2, …, WT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = all_of(str_c("WT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "BRR",
mse = TRUE)
```
If a dataset had WT for the main weight and had 20 BRR weights indicated REPWT1, REPWT2, …, REPWT20, we can use the following syntax (both are equivalent):
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = all_of(str_c("REPWT", 1:20)),
type = "BRR",
mse = TRUE)
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
mse = TRUE)
```
If the replicate weight variables are in the file consecutively, we can also use the following syntax:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = REPWT1:REPWT20,
type = "BRR",
mse = TRUE)
```
Typically, each replicate weight sums to a value similar to the main weight, as both the replicate weights and the main weight are supposed to provide population estimates. Rarely, an alternative method is used where the replicate weights have values of 0 or 2 in the case of BRR weights. This would be indicated in the documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more information on reading documentation). In this case, the replicate weights are not combined, and the option `combined_weights = FALSE` should be indicated, as the default value for this argument is `TRUE`. This specific syntax is shown below:
```
brr_des <- dat %>%
as_survey_rep(weights = WT,
repweights = starts_with("REPWT"),
type = "BRR",
combined_weights = FALSE,
mse = TRUE)
```
#### Example
The {survey} package includes a data example from section 12\.2 of Levy and Lemeshow ([2013](#ref-levy2013sampling)). In this fictional data, two out of five ambulance stations were sampled from each of three emergency service areas (ESAs); thus BRR weights are appropriate with two PSUs (stations) sampled in each stratum (ESA). In the code below, we create BRR weights as was done by Levy and Lemeshow ([2013](#ref-levy2013sampling)).
```
scdbrr <- scd %>%
as_tibble() %>%
mutate(
wt = 5 / 2,
rep1 = 2 * c(1, 0, 1, 0, 1, 0),
rep2 = 2 * c(1, 0, 0, 1, 0, 1),
rep3 = 2 * c(0, 1, 1, 0, 0, 1),
rep4 = 2 * c(0, 1, 0, 1, 1, 0)
)
scdbrr
```
```
## # A tibble: 6 × 9
## ESA ambulance arrests alive wt rep1 rep2 rep3 rep4
## <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 1 120 25 2.5 2 2 0 0
## 2 1 2 78 24 2.5 0 0 2 2
## 3 2 1 185 30 2.5 2 0 2 0
## 4 2 2 228 49 2.5 0 2 0 2
## 5 3 1 670 80 2.5 2 0 0 2
## 6 3 2 530 70 2.5 0 2 2 0
```
To specify the BRR weights, we use the following syntax:
```
scdbrr_des <- scdbrr %>%
as_survey_rep(
type = "BRR",
repweights = starts_with("rep"),
combined_weights = FALSE,
weight = wt
)
scdbrr_des
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
```
```
summary(scdbrr_des)
```
```
## Call: Called via srvyr
## Balanced Repeated Replicates with 4 replicates.
## Sampling variables:
## - repweights: `rep1 + rep2 + rep3 + rep4`
## - weights: wt
## Data variables:
## - ESA (int), ambulance (int), arrests (dbl), alive (dbl), wt (dbl),
## rep1 (dbl), rep2 (dbl), rep3 (dbl), rep4 (dbl)
## Variables:
## [1] "ESA" "ambulance" "arrests" "alive" "wt"
## [6] "rep1" "rep2" "rep3" "rep4"
```
Note that `combined_weights` was specified as `FALSE` because these weights are simply specified as 0 and 2 and do not incorporate the overall weight. When printing the object, the type of replication is noted as Balanced Repeated Replicates, and the replicate weights and the weight variable are specified. Additionally, the summary lists the variables included in the data and design object.
### 10\.4\.2 Fay’s BRR method
Fay’s BRR method for replicate weights is similar to the BRR method in that it uses a Hadamard matrix to construct replicate weights. However, rather than deleting PSUs for each replicate, with Fay’s BRR, half of the PSUs have a replicate weight, which is the main weight multiplied by \\(\\rho\\), and the other half have the main weight multiplied by \\((2\-\\rho)\\), where \\(0 \\le \\rho \< 1\\). Note that when \\(\\rho\=0\\), this is equivalent to the standard BRR weights, and as \\(\\rho\\) becomes closer to 1, this method is more similar to jackknife discussed in Section [10\.4\.3](c10-sample-designs-replicate-weights.html#samp-jackknife). To obtain the value of \\(\\rho\\), it is necessary to read the survey documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)).
#### The math
The standard error estimate for \\(\\hat{\\theta}\\) is slightly different than the BRR, due to the addition of the multiplier of \\(\\rho\\). Using the generic notation above, \\(\\alpha\=\\frac{1}{R \\left(1\-\\rho\\right)^2}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). The standard error is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R (1\-\\rho)^2} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
#### The syntax
The syntax is very similar for BRR and Fay’s BRR. To specify a Fay’s BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as Fay’s BRR (`type = Fay`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and Fay’s multiplier (`rho`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated as WT1, WT2, …, WT20, and Fay’s multiplier is 0\.3, we use the following syntax:
```
fay_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "Fay",
mse = TRUE,
rho = 0.3)
```
#### Example
The 2015 RECS ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)) uses Fay’s BRR weights with the final weight as NWEIGHT and replicate weights as BRRWT1 \- BRRWT96, and the documentation specifies a Fay’s multiplier of 0\.5\. On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total energy cost, TOTSQFT\_EN is the total square footage of the residence, and REGOINC is the census region. We use the 2015 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter). To specify the design for the `recs_2015` data, we use the following syntax:
```
recs_2015_des <- recs_2015 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = BRRWT1:BRRWT96,
type = "Fay",
rho = 0.5,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_2015_des
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
```
```
summary(recs_2015_des)
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
In specifying the design, the `variables` option was also used to include which variables might be used in analyses. This is optional but can make our object smaller and easier to work with. When printing the design object or looking at the summary, the replicate weight type is re\-iterated as `Fay's variance method (rho= 0.5) with 96 replicates and MSE variances`, and the variables are included. No weight or probability summary is included in this output, as we have seen in some other design objects.
#### The math
The standard error estimate for \\(\\hat{\\theta}\\) is slightly different than the BRR, due to the addition of the multiplier of \\(\\rho\\). Using the generic notation above, \\(\\alpha\=\\frac{1}{R \\left(1\-\\rho\\right)^2}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). The standard error is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{1}{R (1\-\\rho)^2} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
#### The syntax
The syntax is very similar for BRR and Fay’s BRR. To specify a Fay’s BRR design, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as Fay’s BRR (`type = Fay`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and Fay’s multiplier (`rho`). For example, if a dataset had WT0 for the main weight and had 20 BRR weights indicated as WT1, WT2, …, WT20, and Fay’s multiplier is 0\.3, we use the following syntax:
```
fay_des <- dat %>%
as_survey_rep(weights = WT0,
repweights = num_range("WT", 1:20),
type = "Fay",
mse = TRUE,
rho = 0.3)
```
#### Example
The 2015 RECS ([U.S. Energy Information Administration 2017](#ref-recs-2015-micro)) uses Fay’s BRR weights with the final weight as NWEIGHT and replicate weights as BRRWT1 \- BRRWT96, and the documentation specifies a Fay’s multiplier of 0\.5\. On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total energy cost, TOTSQFT\_EN is the total square footage of the residence, and REGOINC is the census region. We use the 2015 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter). To specify the design for the `recs_2015` data, we use the following syntax:
```
recs_2015_des <- recs_2015 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = BRRWT1:BRRWT96,
type = "Fay",
rho = 0.5,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_2015_des
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
```
```
summary(recs_2015_des)
```
```
## Call: Called via srvyr
## Fay's variance method (rho= 0.5 ) with 96 replicates and MSE variances.
## Sampling variables:
## - repweights: `BRRWT1 + BRRWT2 + BRRWT3 + BRRWT4 + BRRWT5 + BRRWT6 +
## BRRWT7 + BRRWT8 + BRRWT9 + BRRWT10 + BRRWT11 + BRRWT12 + BRRWT13 +
## BRRWT14 + BRRWT15 + BRRWT16 + BRRWT17 + BRRWT18 + BRRWT19 + BRRWT20
## + BRRWT21 + BRRWT22 + BRRWT23 + BRRWT24 + BRRWT25 + BRRWT26 +
## BRRWT27 + BRRWT28 + BRRWT29 + BRRWT30 + BRRWT31 + BRRWT32 + BRRWT33
## + BRRWT34 + BRRWT35 + BRRWT36 + BRRWT37 + BRRWT38 + BRRWT39 +
## BRRWT40 + BRRWT41 + BRRWT42 + BRRWT43 + BRRWT44 + BRRWT45 + BRRWT46
## + BRRWT47 + BRRWT48 + BRRWT49 + BRRWT50 + BRRWT51 + BRRWT52 +
## BRRWT53 + BRRWT54 + BRRWT55 + BRRWT56 + BRRWT57 + BRRWT58 + BRRWT59
## + BRRWT60 + BRRWT61 + BRRWT62 + BRRWT63 + BRRWT64 + BRRWT65 +
## BRRWT66 + BRRWT67 + BRRWT68 + BRRWT69 + BRRWT70 + BRRWT71 + BRRWT72
## + BRRWT73 + BRRWT74 + BRRWT75 + BRRWT76 + BRRWT77 + BRRWT78 +
## BRRWT79 + BRRWT80 + BRRWT81 + BRRWT82 + BRRWT83 + BRRWT84 + BRRWT85
## + BRRWT86 + BRRWT87 + BRRWT88 + BRRWT89 + BRRWT90 + BRRWT91 +
## BRRWT92 + BRRWT93 + BRRWT94 + BRRWT95 + BRRWT96`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (dbl)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
In specifying the design, the `variables` option was also used to include which variables might be used in analyses. This is optional but can make our object smaller and easier to work with. When printing the design object or looking at the summary, the replicate weight type is re\-iterated as `Fay's variance method (rho= 0.5) with 96 replicates and MSE variances`, and the variables are included. No weight or probability summary is included in this output, as we have seen in some other design objects.
### 10\.4\.3 Jackknife method
There are three jackknife estimators implemented in {srvyr}: jackknife 1 (JK1\), jackknife n (JKn), and jackknife 2 (JK2\). The JK1 method can be used for unstratified designs, and replicates are created by removing one PSU at a time so the number of replicates is the same as the number of PSUs. If there is no clustering, then the PSU is the ultimate sampling unit (e.g., students).
The JKn method is used for stratified designs and requires two or more PSUs per stratum. In this case, each replicate is created by deleting one PSU from a single stratum, so the number of replicates is the number of total PSUs across all strata. The JK2 method is a special case of JKn when there are exactly 2 PSUs sampled per stratum. For variance estimation, we also need to specify the scaling constants.
#### The math
Using the generic notation above, \\(\\alpha\=\\frac{R\-1}{R}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). For the JK1 method, the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{R\-1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
The JKn method is a bit more complex, but the coefficients are generally provided with restricted and public\-use files. For each replicate, one stratum has a PSU removed, and the weights are adjusted by \\(n\_h/(n\_h\-1\)\\) where \\(n\_h\\) is the number of PSUs in stratum \\(h\\). The coefficients in other strata are set to 1\. Denote the coefficient that results from this process for replicate \\(r\\) as \\(\\alpha\_r\\), then the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\sum\_{r\=1}^R \\alpha\_r \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
#### The syntax
To specify the jackknife method, we use the survey documentation to understand the type of jackknife (1, n, or 2\) and the multiplier. In the syntax, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as jackknife 1 (`type = "JK1"`), n (`type = "JKN"`), or 2 (`type = "JK2"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if the survey is a jackknife 1 method with a multiplier of \\(\\alpha\_r\=(R\-1\)/R\=19/20\=0\.95\\), the dataset has WT0 for the main weight and 20 replicate weights indicated as WT1, WT2, …, WT20, we use the following syntax:
```
jk1_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JK1",
mse = TRUE,
scale = 0.95
)
```
For a jackknife n method, we need to specify the multiplier for all replicates. In this case, we use the `rscales` argument to specify each one. The documentation provides details on what the multipliers (\\(\\alpha\_r\\)) are, and they may be the same for all replicates. For example, consider a case where \\(\\alpha\_r\=0\.1\\) for all replicates, and the dataset had WT0 for the main weight and had 20 replicate weights indicated as WT1, WT2, …, WT20\. We specify the type as `type = "JKN"`, and the multiplier as `rscales=rep(0.1,20)`:
```
jkn_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JKN",
mse = TRUE,
rscales = rep(0.1, 20)
)
```
#### Example
The 2020 RECS ([U.S. Energy Information Administration 2023c](#ref-recs-2020-micro)) uses jackknife weights with the final weight as NWEIGHT and replicate weights as NWEIGHT1 \- NWEIGHT60 with a scale of \\((R\-1\)/R\=59/60\\). On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total cost of energy, TOTSQFT\_EN is the total square footage of the residence, and REGOINC is the census region. We use the 2020 RECS data from the {srvyrexploR} package that provides data for this book (see the Prerequisites box at the beginning of this chapter).
To specify this design, we use the following syntax:
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_des
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
```
```
summary(recs_des)
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
When printing the design object or looking at the summary, the replicate weight type is reiterated as `Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances`, and the variables are included. No weight or probability summary is included.
#### The math
Using the generic notation above, \\(\\alpha\=\\frac{R\-1}{R}\\) and \\(\\alpha\_r\=1 \\text{ for all } r\\). For the JK1 method, the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\frac{R\-1}{R} \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
The JKn method is a bit more complex, but the coefficients are generally provided with restricted and public\-use files. For each replicate, one stratum has a PSU removed, and the weights are adjusted by \\(n\_h/(n\_h\-1\)\\) where \\(n\_h\\) is the number of PSUs in stratum \\(h\\). The coefficients in other strata are set to 1\. Denote the coefficient that results from this process for replicate \\(r\\) as \\(\\alpha\_r\\), then the standard error estimate for \\(\\hat{\\theta}\\) is calculated as:
\\\[se(\\hat{\\theta})\=\\sqrt{\\sum\_{r\=1}^R \\alpha\_r \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
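To make these formulas concrete, here is a minimal base R sketch with made\-up numbers (the estimates, the number of replicates, and the JKn coefficients are purely illustrative and do not come from any survey):
```
theta_hat <- 52.3                       # hypothetical full-sample estimate
theta_rep <- c(52.1, 52.8, 51.9, 52.5)  # hypothetical replicate estimates
R <- length(theta_rep)

# JK1: alpha = (R - 1) / R and alpha_r = 1 for all r
se_jk1 <- sqrt((R - 1) / R * sum((theta_rep - theta_hat)^2))

# JKn: each replicate has its own coefficient alpha_r from the documentation
alpha_r <- rep(0.5, R)                  # hypothetical coefficients
se_jkn <- sqrt(sum(alpha_r * (theta_rep - theta_hat)^2))
```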
#### The syntax
To specify the jackknife method, we use the survey documentation to understand the type of jackknife (1, n, or 2\) and the multiplier. In the syntax, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as jackknife 1 (`type = "JK1"`), n (`type = "JKN"`), or 2 (`type = "JK2"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if the survey uses a jackknife 1 method with a multiplier of \\(\\alpha\_r\=(R\-1\)/R\=19/20\=0\.95\\), and the dataset has WT0 for the main weight and 20 replicate weights indicated as WT1, WT2, …, WT20, we use the following syntax:
```
jk1_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JK1",
mse = TRUE,
scale = 0.95
)
```
For a jackknife n method, we need to specify the multiplier for all replicates. In this case, we use the `rscales` argument to specify each one. The documentation provides details on what the multipliers (\\(\\alpha\_r\\)) are, and they may be the same for all replicates. For example, consider a case where \\(\\alpha\_r\=0\.1\\) for all replicates, and the dataset has WT0 for the main weight and 20 replicate weights indicated as WT1, WT2, …, WT20\. We specify the type as `type = "JKN"` and the multipliers as `rscales = rep(0.1, 20)`:
```
jkn_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "JKN",
mse = TRUE,
rscales = rep(0.1, 20)
)
```
#### Example
The 2020 RECS ([U.S. Energy Information Administration 2023c](#ref-recs-2020-micro)) uses jackknife weights with the final weight as NWEIGHT and replicate weights as NWEIGHT1 \- NWEIGHT60 with a scale of \\((R\-1\)/R\=59/60\\). On the file, DOEID is a unique identifier for each respondent, TOTALDOL is the total cost of energy, TOTSQFT\_EN is the total square footage of the residence, and REGIONC is the census region. We use the 2020 RECS data from the {srvyrexploR} package, which provides data for this book (see the Prerequisites box at the beginning of this chapter).
To specify this design, we use the following syntax:
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE,
variables = c(DOEID, TOTALDOL, TOTSQFT_EN, REGIONC)
)
recs_des
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
```
```
summary(recs_des)
```
```
## Call: Called via srvyr
## Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances.
## Sampling variables:
## - repweights: `NWEIGHT1 + NWEIGHT2 + NWEIGHT3 + NWEIGHT4 + NWEIGHT5 +
## NWEIGHT6 + NWEIGHT7 + NWEIGHT8 + NWEIGHT9 + NWEIGHT10 + NWEIGHT11 +
## NWEIGHT12 + NWEIGHT13 + NWEIGHT14 + NWEIGHT15 + NWEIGHT16 +
## NWEIGHT17 + NWEIGHT18 + NWEIGHT19 + NWEIGHT20 + NWEIGHT21 +
## NWEIGHT22 + NWEIGHT23 + NWEIGHT24 + NWEIGHT25 + NWEIGHT26 +
## NWEIGHT27 + NWEIGHT28 + NWEIGHT29 + NWEIGHT30 + NWEIGHT31 +
## NWEIGHT32 + NWEIGHT33 + NWEIGHT34 + NWEIGHT35 + NWEIGHT36 +
## NWEIGHT37 + NWEIGHT38 + NWEIGHT39 + NWEIGHT40 + NWEIGHT41 +
## NWEIGHT42 + NWEIGHT43 + NWEIGHT44 + NWEIGHT45 + NWEIGHT46 +
## NWEIGHT47 + NWEIGHT48 + NWEIGHT49 + NWEIGHT50 + NWEIGHT51 +
## NWEIGHT52 + NWEIGHT53 + NWEIGHT54 + NWEIGHT55 + NWEIGHT56 +
## NWEIGHT57 + NWEIGHT58 + NWEIGHT59 + NWEIGHT60`
## - weights: NWEIGHT
## Data variables:
## - DOEID (dbl), TOTALDOL (dbl), TOTSQFT_EN (dbl), REGIONC (chr)
## Variables:
## [1] "DOEID" "TOTALDOL" "TOTSQFT_EN" "REGIONC"
```
When printing the design object or looking at the summary, the replicate weight type is reiterated as `Unstratified cluster jacknife (JK1) with 60 replicates and MSE variances`, and the variables are included. No weight or probability summary is included.
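As a quick, illustrative use of this design object (not part of the original RECS example), we could estimate the average annual energy cost with `survey_mean()`; estimation functions like this are covered in detail elsewhere in the book:
```
# Illustrative only: weighted mean of total energy cost using the JK1 design
recs_des %>%
  summarize(avg_totaldol = survey_mean(TOTALDOL))
```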
### 10\.4\.4 Bootstrap method
In bootstrap resampling, replicates are created by selecting simple random samples of the PSUs with replacement (SRSWR): if there are \\(A\\) PSUs in the sample, each replicate is created by selecting a random sample of \\(A\\) PSUs with replacement. Each replicate is created independently, and the weights for each replicate are adjusted to reflect the population, generally using the same method that was used to adjust the analysis weight.
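As a simplified, illustrative sketch of this idea (not any survey's actual procedure; real surveys typically also re\-run the weighting adjustments for each replicate), one replicate weight can be formed by counting how many times each PSU is drawn and scaling the analysis weights by those counts:
```
set.seed(1)
psu_ids <- c("p1", "p2", "p3", "p4", "p5")  # hypothetical PSUs, so A = 5
draw <- sample(psu_ids, size = length(psu_ids), replace = TRUE)
times_drawn <- table(factor(draw, levels = psu_ids))

# hypothetical analysis weights for one unit in each PSU
base_wt <- c(p1 = 10, p2 = 12, p3 = 9, p4 = 11, p5 = 8)
rep_wt <- base_wt * as.numeric(times_drawn)  # one bootstrap replicate weight
```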
#### The math
A weighted estimate for the full sample is calculated as \\(\\hat{\\theta}\\), and then a weighted estimate for each replicate is calculated as \\(\\hat{\\theta}\_r\\) for \\(R\\) replicates. Then the standard error of the estimate is calculated as follows:
\\\[se(\\hat{\\theta})\=\\sqrt{\\alpha \\sum\_{r\=1}^R \\left( \\hat{\\theta}\_r\-\\hat{\\theta}\\right)^2}\\]
where \\(\\alpha\\) is the scaling constant. Note that the scaling constant (\\(\\alpha\\)) is provided in the survey documentation, as there are many types of bootstrap methods that generate custom scaling constants.
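Continuing with made\-up numbers, a minimal base R sketch of this calculation (the scaling constant and estimates are purely illustrative):
```
alpha <- 0.02                           # hypothetical scaling constant
theta_hat <- 52.3                       # hypothetical full-sample estimate
theta_rep <- c(52.0, 52.9, 51.8, 52.6)  # hypothetical replicate estimates

se_boot <- sqrt(alpha * sum((theta_rep - theta_hat)^2))
```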
#### The syntax
To specify a bootstrap method, we need to specify the weight variable (`weights`), the replicate weight variables (`repweights`), the type of replicate weights as bootstrap (`type = "bootstrap"`), whether the mean squared error should be used (`mse = TRUE`) or not (`mse = FALSE`), and the multiplier (`scale`). For example, if a dataset has WT0 for the main weight, 20 bootstrap weights indicated as WT1, WT2, …, WT20, and a multiplier of \\(\\alpha\=0\.02\\), we use the following syntax:
```
bs_des <- dat %>%
as_survey_rep(
weights = WT0,
repweights = num_range("WT", 1:20),
type = "bootstrap",
mse = TRUE,
scale = .02
)
```
#### Example
Returning to the APIP example, we create a dataset with bootstrap weights to illustrate this design. We construct a one\-cluster design with 50 replicate weights[28](#fn28).
```
apiclus1_slim <-
apiclus1 %>%
as_tibble() %>%
arrange(dnum) %>%
select(cds, dnum, fpc, pw)
set.seed(662152)
apibw <-
bootweights(
psu = apiclus1_slim$dnum,
strata = rep(1, nrow(apiclus1_slim)),
fpc = apiclus1_slim$fpc,
replicates = 50
)
bwmata <-
apibw$repweights$weights[apibw$repweights$index, ] * apiclus1_slim$pw
apiclus1_slim <- bwmata %>%
as.data.frame() %>%
set_names(str_c("pw", 1:50)) %>%
cbind(apiclus1_slim) %>%
as_tibble() %>%
select(cds, dnum, fpc, pw, everything())
apiclus1_slim
```
```
## # A tibble: 183 × 54
## cds dnum fpc pw pw1 pw2 pw3 pw4 pw5 pw6 pw7
## <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 2 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 3 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 4 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 5 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 6 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 7 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 8 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 9 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## 10 43693776… 61 757 33.8 33.8 0 0 33.8 0 33.8 0
## # ℹ 173 more rows
## # ℹ 43 more variables: pw8 <dbl>, pw9 <dbl>, pw10 <dbl>, pw11 <dbl>,
## # pw12 <dbl>, pw13 <dbl>, pw14 <dbl>, pw15 <dbl>, pw16 <dbl>,
## # pw17 <dbl>, pw18 <dbl>, pw19 <dbl>, pw20 <dbl>, pw21 <dbl>,
## # pw22 <dbl>, pw23 <dbl>, pw24 <dbl>, pw25 <dbl>, pw26 <dbl>,
## # pw27 <dbl>, pw28 <dbl>, pw29 <dbl>, pw30 <dbl>, pw31 <dbl>,
## # pw32 <dbl>, pw33 <dbl>, pw34 <dbl>, pw35 <dbl>, pw36 <dbl>, …
```
The output of `apiclus1_slim` includes the same variables we have seen in other APIP examples (see Table [10\.1](c10-sample-designs-replicate-weights.html#tab:apidata)), but now it additionally includes bootstrap weights `pw1`, …, `pw50`. When creating the survey design object, we use the bootstrap weights as the replicate weights. Additionally, with replicate weights we need to include the scale (\\(\\alpha\\)). For this example, we created:
\\\[\\alpha\=\\frac{A}{(A\-1\)(R\-1\)}\=\\frac{15}{(15\-1\)\*(50\-1\)}\=0\.02186589\\]
where \\(A\\) is the average number of PSUs per stratum, and \\(R\\) is the number of replicates. There is only one stratum, and the number of clusters/PSUs is 15, so \\(A\=15\\). Using this information, we specify the design object as:
```
api1_bs_des <- apiclus1_slim %>%
as_survey_rep(
weights = pw,
repweights = pw1:pw50,
type = "bootstrap",
scale = 0.02186589,
mse = TRUE
)
api1_bs_des
```
```
## Call: Called via srvyr
## Survey bootstrap with 50 replicates and MSE variances.
## Sampling variables:
## - repweights: `pw1 + pw2 + pw3 + pw4 + pw5 + pw6 + pw7 + pw8 + pw9 +
## pw10 + pw11 + pw12 + pw13 + pw14 + pw15 + pw16 + pw17 + pw18 + pw19
## + pw20 + pw21 + pw22 + pw23 + pw24 + pw25 + pw26 + pw27 + pw28 +
## pw29 + pw30 + pw31 + pw32 + pw33 + pw34 + pw35 + pw36 + pw37 + pw38
## + pw39 + pw40 + pw41 + pw42 + pw43 + pw44 + pw45 + pw46 + pw47 +
## pw48 + pw49 + pw50`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), fpc (dbl), pw (dbl), pw1 (dbl), pw2 (dbl),
## pw3 (dbl), pw4 (dbl), pw5 (dbl), pw6 (dbl), pw7 (dbl), pw8 (dbl),
## pw9 (dbl), pw10 (dbl), pw11 (dbl), pw12 (dbl), pw13 (dbl), pw14
## (dbl), pw15 (dbl), pw16 (dbl), pw17 (dbl), pw18 (dbl), pw19 (dbl),
## pw20 (dbl), pw21 (dbl), pw22 (dbl), pw23 (dbl), pw24 (dbl), pw25
## (dbl), pw26 (dbl), pw27 (dbl), pw28 (dbl), pw29 (dbl), pw30 (dbl),
## pw31 (dbl), pw32 (dbl), pw33 (dbl), pw34 (dbl), pw35 (dbl), pw36
## (dbl), pw37 (dbl), pw38 (dbl), pw39 (dbl), pw40 (dbl), pw41 (dbl),
## pw42 (dbl), pw43 (dbl), pw44 (dbl), pw45 (dbl), pw46 (dbl), pw47
## (dbl), pw48 (dbl), pw49 (dbl), pw50 (dbl)
```
```
summary(api1_bs_des)
```
```
## Call: Called via srvyr
## Survey bootstrap with 50 replicates and MSE variances.
## Sampling variables:
## - repweights: `pw1 + pw2 + pw3 + pw4 + pw5 + pw6 + pw7 + pw8 + pw9 +
## pw10 + pw11 + pw12 + pw13 + pw14 + pw15 + pw16 + pw17 + pw18 + pw19
## + pw20 + pw21 + pw22 + pw23 + pw24 + pw25 + pw26 + pw27 + pw28 +
## pw29 + pw30 + pw31 + pw32 + pw33 + pw34 + pw35 + pw36 + pw37 + pw38
## + pw39 + pw40 + pw41 + pw42 + pw43 + pw44 + pw45 + pw46 + pw47 +
## pw48 + pw49 + pw50`
## - weights: pw
## Data variables:
## - cds (chr), dnum (int), fpc (dbl), pw (dbl), pw1 (dbl), pw2 (dbl),
## pw3 (dbl), pw4 (dbl), pw5 (dbl), pw6 (dbl), pw7 (dbl), pw8 (dbl),
## pw9 (dbl), pw10 (dbl), pw11 (dbl), pw12 (dbl), pw13 (dbl), pw14
## (dbl), pw15 (dbl), pw16 (dbl), pw17 (dbl), pw18 (dbl), pw19 (dbl),
## pw20 (dbl), pw21 (dbl), pw22 (dbl), pw23 (dbl), pw24 (dbl), pw25
## (dbl), pw26 (dbl), pw27 (dbl), pw28 (dbl), pw29 (dbl), pw30 (dbl),
## pw31 (dbl), pw32 (dbl), pw33 (dbl), pw34 (dbl), pw35 (dbl), pw36
## (dbl), pw37 (dbl), pw38 (dbl), pw39 (dbl), pw40 (dbl), pw41 (dbl),
## pw42 (dbl), pw43 (dbl), pw44 (dbl), pw45 (dbl), pw46 (dbl), pw47
## (dbl), pw48 (dbl), pw49 (dbl), pw50 (dbl)
## Variables:
## [1] "cds" "dnum" "fpc" "pw" "pw1" "pw2" "pw3" "pw4" "pw5"
## [10] "pw6" "pw7" "pw8" "pw9" "pw10" "pw11" "pw12" "pw13" "pw14"
## [19] "pw15" "pw16" "pw17" "pw18" "pw19" "pw20" "pw21" "pw22" "pw23"
## [28] "pw24" "pw25" "pw26" "pw27" "pw28" "pw29" "pw30" "pw31" "pw32"
## [37] "pw33" "pw34" "pw35" "pw36" "pw37" "pw38" "pw39" "pw40" "pw41"
## [46] "pw42" "pw43" "pw44" "pw45" "pw46" "pw47" "pw48" "pw49" "pw50"
```
As with other replicate design objects, when printing the object or looking at the summary, the replicate weights are provided along with the data variables.
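As a quick arithmetic check (illustrative only), the scale used above can be computed directly rather than hard\-coded:
```
A <- 15  # number of sampled PSUs (districts) in apiclus1
R <- 50  # number of bootstrap replicates
A / ((A - 1) * (R - 1))  # 15 / 686 = 0.02186589
```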
10\.5 Exercises
---------------
For this chapter, the exercises entail reading public documentation to determine how to specify the survey design. While reading the documentation, be on the lookout for descriptions of the weights and of the survey design variables or replicate weights.
1. The National Health Interview Survey (NHIS) is an annual household survey conducted by the National Center for Health Statistics (NCHS). The NHIS includes a wide variety of health topics for adults, including health status and conditions, functioning and disability, health care access and health service utilization, health\-related behaviors, health promotion, mental health, barriers to receiving care, and community engagement. Like many national in\-person surveys, the sampling design is a stratified clustered design, with details included in the Survey Description ([National Center for Health Statistics 2023](#ref-nhis-svy-des)). The Survey Description provides information on setting up syntax in SUDAAN, Stata, SPSS, SAS, and R ({survey} package implementation). We have imported the data into an object called `nhis_adult_data`. How would we specify the design using either `as_survey_design()` or `as_survey_rep()`?
2. The General Social Survey (GSS) is a survey that has been administered since 1972 on social, behavioral, and attitudinal topics. The 2016\-2020 GSS Panel codebook provides examples of setting up syntax in SAS and Stata, but not R ([Davern et al. 2021](#ref-gss-codebook)). We have imported the data into an object called `gss_data`. How would we specify the design in R using either `as_survey_design()` or `as_survey_rep()`?
Chapter 11 Missing data
=======================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
library(naniar)
library(haven)
library(gt)
```
We are using data from ANES and RECS described in Chapter [4](c04-getting-started.html#c04-getting-started). As a reminder, here is the code to create the design objects for each to use throughout this chapter. For ANES, we need to adjust the weight so it sums to the population instead of the sample (see the ANES documentation and Chapter [4](c04-getting-started.html#c04-getting-started) for more information).
```
targetpop <- 231592693
anes_adjwgt <- anes_2020 %>%
mutate(Weight = Weight / sum(Weight) * targetpop)
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = Stratum,
ids = VarUnit,
nest = TRUE
)
```
For RECS, details are included in the RECS documentation and Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights).
```
recs_des <- recs_2020 %>%
as_survey_rep(
weights = NWEIGHT,
repweights = NWEIGHT1:NWEIGHT60,
type = "JK1",
scale = 59 / 60,
mse = TRUE
)
```
11\.1 Introduction
------------------
Missing data in surveys refer to situations where participants do not provide complete responses to survey questions. Respondents may not have seen a question by design. Or, they may not respond to a question for various other reasons, such as not wanting to answer a particular question, not understanding the question, or simply forgetting to answer. Missing data are important to consider and account for, as they can introduce bias and reduce the representativeness of the data. This chapter provides an overview of the types of missing data, how to assess missing data in surveys, and how to conduct analysis when missing data are present. Understanding this complex topic can help ensure accurate reporting of survey results and provide insight into potential changes to the survey design for the future.
11\.2 Missing data mechanisms
-----------------------------
There are two main categories that missing data typically fall into: missing by design and unintentional missing data. Missing by design is part of the survey plan and can be more easily incorporated into weights and analyses. Unintentional missing data, on the other hand, can lead to bias in survey estimates if not correctly accounted for. Below we provide more information on the types of missing data.
1. Missing by design/questionnaire skip logic: This type of missingness occurs when certain respondents are intentionally directed to skip specific questions based on their previous responses or characteristics. For example, in a survey about employment, if a respondent indicates that they are not employed, they may be directed to skip questions related to their job responsibilities. Additionally, some surveys randomize questions or modules so that not all participants respond to all questions. In these instances, respondents would have missing data for the modules not randomly assigned to them.
2. Unintentional missing data: This type of missingness occurs when researchers do not intend for there to be missing data on a particular question, for example, if respondents did not finish the survey or refused to answer individual questions. There are three main types of unintentional missing data that each should be considered and handled differently ([Mack, Su, and Westreich 2018](#ref-mack); [Schafer and Graham 2002](#ref-Schafer2002)):
1. Missing completely at random (MCAR): The missing data are unrelated to both observed and unobserved data, and the probability of being missing is the same across all cases. For example, if a respondent missed a question because they had to leave the survey early due to an emergency.
2. Missing at random (MAR): The missing data are related to observed data but not unobserved data, and the probability of being missing is the same within groups. For example, we know the respondents’ ages and older respondents choose not to answer specific questions but younger respondents do answer them.
3. Missing not at random (MNAR): The missing data are related to unobserved data, and the probability of being missing varies for reasons we are not measuring. For example, if respondents with depression do not answer a question about depression severity.
11\.3 Assessing missing data
----------------------------
Before beginning an analysis, we should explore the data to determine whether there are missing data and what types of missing data are present. Conducting descriptive analysis can help with the analysis and reporting of survey data and can inform the survey design in future studies. For example, large amounts of unexpected missing data may indicate the questions were unclear or difficult to recall. There are several ways to explore missing data, which we walk through below. When assessing the missing data, we recommend using a data.frame object and not the survey object, as most of the analysis is about patterns of records, and weights are not necessary.
### 11\.3\.1 Summarize data
A very rudimentary first exploration is to use the `summary()` function to summarize the data, which illuminates `NA` values in the data. Let’s look at a few analytic variables on the ANES 2020 data using `summary()`:
```
anes_2020 %>%
select(V202051:EarlyVote2020) %>%
summary()
```
```
## V202051 Income7 Income
## Min. :-9.000 $125k or more:1468 Under $9,999 : 647
## 1st Qu.:-1.000 Under $20k :1076 $50,000-59,999 : 485
## Median :-1.000 $20k to < 40k:1051 $100,000-109,999: 451
## Mean :-0.726 $40k to < 60k: 984 $250,000 or more: 405
## 3rd Qu.:-1.000 $60k to < 80k: 920 $80,000-89,999 : 383
## Max. : 3.000 (Other) :1437 (Other) :4565
## NA's : 517 NA's : 517
## V201617x V201616 V201615 V201613 V201611
## Min. :-9.0 Min. :-3 Min. :-3 Min. :-3 Min. :-3
## 1st Qu.: 4.0 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3
## Median :11.0 Median :-3 Median :-3 Median :-3 Median :-3
## Mean :10.4 Mean :-3 Mean :-3 Mean :-3 Mean :-3
## 3rd Qu.:17.0 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3
## Max. :22.0 Max. :-3 Max. :-3 Max. :-3 Max. :-3
##
## V201610 V201607 Gender V201600
## Min. :-3 Min. :-3 Male :3375 Min. :-9.00
## 1st Qu.:-3 1st Qu.:-3 Female:4027 1st Qu.: 1.00
## Median :-3 Median :-3 NA's : 51 Median : 2.00
## Mean :-3 Mean :-3 Mean : 1.47
## 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.: 2.00
## Max. :-3 Max. :-3 Max. : 2.00
##
## RaceEth V201549x V201547z V201547e
## White :5420 Min. :-9.0 Min. :-3 Min. :-3
## Black : 650 1st Qu.: 1.0 1st Qu.:-3 1st Qu.:-3
## Hispanic : 662 Median : 1.0 Median :-3 Median :-3
## Asian, NH/PI : 248 Mean : 1.5 Mean :-3 Mean :-3
## AI/AN : 155 3rd Qu.: 2.0 3rd Qu.:-3 3rd Qu.:-3
## Other/multiple race: 237 Max. : 6.0 Max. :-3 Max. :-3
## NA's : 81
## V201547d V201547c V201547b V201547a V201546
## Min. :-3 Min. :-3 Min. :-3 Min. :-3 Min. :-9.00
## 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3 1st Qu.: 2.00
## Median :-3 Median :-3 Median :-3 Median :-3 Median : 2.00
## Mean :-3 Mean :-3 Mean :-3 Mean :-3 Mean : 1.84
## 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.: 2.00
## Max. :-3 Max. :-3 Max. :-3 Max. :-3 Max. : 2.00
##
## Education V201510 AgeGroup Age
## Less than HS: 312 Min. :-9.00 18-29 : 871 Min. :18.0
## High school :1160 1st Qu.: 3.00 30-39 :1241 1st Qu.:37.0
## Post HS :2514 Median : 5.00 40-49 :1081 Median :53.0
## Bachelor's :1877 Mean : 5.62 50-59 :1200 Mean :51.8
## Graduate :1474 3rd Qu.: 6.00 60-69 :1436 3rd Qu.:66.0
## NA's : 116 Max. :95.00 70 or older:1330 Max. :80.0
## NA's : 294 NA's :294
## V201507x TrustPeople V201237
## Min. :-9.0 Always : 48 Min. :-9.00
## 1st Qu.:35.0 Most of the time :3511 1st Qu.: 2.00
## Median :51.0 About half the time:2020 Median : 3.00
## Mean :49.4 Some of the time :1597 Mean : 2.78
## 3rd Qu.:66.0 Never : 264 3rd Qu.: 3.00
## Max. :80.0 NA's : 13 Max. : 5.00
##
## TrustGovernment V201233
## Always : 80 Min. :-9.00
## Most of the time :1016 1st Qu.: 3.00
## About half the time:2313 Median : 4.00
## Some of the time :3313 Mean : 3.43
## Never : 702 3rd Qu.: 4.00
## NA's : 29 Max. : 5.00
##
## PartyID V201231x V201230
## Strong democrat :1796 Min. :-9.00 Min. :-9.000
## Strong republican :1545 1st Qu.: 2.00 1st Qu.:-1.000
## Independent-democrat : 881 Median : 4.00 Median :-1.000
## Independent : 876 Mean : 3.83 Mean : 0.013
## Not very strong democrat: 790 3rd Qu.: 6.00 3rd Qu.: 1.000
## (Other) :1540 Max. : 7.00 Max. : 3.000
## NA's : 25
## V201229 V201228 VotedPres2016_selection
## Min. :-9.000 Min. :-9.00 Clinton:2911
## 1st Qu.:-1.000 1st Qu.: 1.00 Trump :2466
## Median : 1.000 Median : 2.00 Other : 390
## Mean : 0.515 Mean : 1.99 NA's :1686
## 3rd Qu.: 1.000 3rd Qu.: 3.00
## Max. : 2.000 Max. : 5.00
##
## V201103 VotedPres2016 V201102 V201101
## Min. :-9.00 Yes :5810 Min. :-9.000 Min. :-9.000
## 1st Qu.: 1.00 No :1622 1st Qu.:-1.000 1st Qu.:-1.000
## Median : 1.00 NA's: 21 Median : 1.000 Median :-1.000
## Mean : 1.04 Mean : 0.105 Mean : 0.085
## 3rd Qu.: 2.00 3rd Qu.: 1.000 3rd Qu.: 1.000
## Max. : 5.00 Max. : 2.000 Max. : 2.000
##
## V201029 V201028 V201025x V201024
## Min. :-9.000 Min. :-9.0 Min. :-4.00 Min. :-9.00
## 1st Qu.:-1.000 1st Qu.:-1.0 1st Qu.: 3.00 1st Qu.:-1.00
## Median :-1.000 Median :-1.0 Median : 3.00 Median :-1.00
## Mean :-0.897 Mean :-0.9 Mean : 2.92 Mean :-0.86
## 3rd Qu.:-1.000 3rd Qu.:-1.0 3rd Qu.: 3.00 3rd Qu.:-1.00
## Max. :12.000 Max. : 2.0 Max. : 4.00 Max. : 4.00
##
## EarlyVote2020
## Yes : 375
## No : 115
## NA's:6963
##
##
##
##
```
We see that there are `NA` values in several of the derived variables (those not beginning with “V”) and negative values in the original variables (those beginning with “V”). We can also use the `count()` function to get an understanding of the different types of missing data on the original variables. For example, let’s look at the count of data for `V202072`, which corresponds to our `VotedPres2020` variable.
```
anes_2020 %>%
count(VotedPres2020, V202072)
```
```
## # A tibble: 7 × 3
## VotedPres2020 V202072 n
## <fct> <dbl+lbl> <int>
## 1 Yes -1 [-1. Inapplicable] 361
## 2 Yes 1 [1. Yes, voted for President] 5952
## 3 No -1 [-1. Inapplicable] 10
## 4 No 2 [2. No, didn't vote for President] 77
## 5 <NA> -9 [-9. Refused] 2
## 6 <NA> -6 [-6. No post-election interview] 4
## 7 <NA> -1 [-1. Inapplicable] 1047
```
Here, we can see that there are three types of missing data, and the majority of them fall under the “Inapplicable” category. This is usually a term associated with data missing due to skip patterns and is considered to be missing data by design. Based on the documentation from ANES ([DeBell 2010](#ref-debell)), we can see that this question was only asked to respondents who voted in the election.
### 11\.3\.2 Visualization of missing data
It can be challenging to look at tables for every variable and instead may be more efficient to view missing data in a graphical format to help narrow in on patterns or unique variables. The {naniar} package is very useful in exploring missing data visually. We can use the `vis_miss()` function available in both {visdat} and {naniar} packages to view the amount of missing data by variable (see Figure [11\.1](c11-missing-data.html#fig:missing-anes-vismiss)) ([Tierney 2017](#ref-visdattierney); [Tierney and Cook 2023](#ref-naniar2023)).
```
anes_2020_derived <- anes_2020 %>%
select(
-starts_with("V2"), -CaseID, -InterviewMode,
-Weight, -Stratum, -VarUnit
)
anes_2020_derived %>%
vis_miss(cluster = TRUE, show_perc = FALSE) +
scale_fill_manual(
values = book_colors[c(3, 1)],
labels = c("Present", "Missing"),
name = ""
) +
theme(
plot.margin = margin(5.5, 30, 5.5, 5.5, "pt"),
axis.text.x = element_text(angle = 70)
)
```
FIGURE 11\.1: Visual depiction of missing data in the ANES 2020 data
From the visualization in Figure [11\.1](c11-missing-data.html#fig:missing-anes-vismiss), we can start to get a picture of what questions may be connected in terms of missing data. Even if we did not have the informative variable names, we could deduce that `VotedPres2020`, `VotedPres2020_selection`, and `EarlyVote2020` are likely connected since their missing data patterns are similar.
We can also look at `VotedPres2016_selection` and see that there are a lot of missing data in that variable. The missing data are likely due to a skip pattern, and we can look at other graphics to see how they relate to other variables. The {naniar} package has multiple visualization functions that can help dive deeper, such as `gg_miss_fct()`, which looks at missing data for all variables by levels of another variable (see Figure [11\.2](c11-missing-data.html#fig:missing-anes-ggmissfct)).
```
anes_2020_derived %>%
gg_miss_fct(VotedPres2016) +
scale_fill_gradientn(
guide = "colorbar",
name = "% Miss",
colors = book_colors[c(3, 2, 1)]
) +
ylab("Variable") +
xlab("Voted for President in 2016")
```
FIGURE 11\.2: Missingness in variables for each level of ‘VotedPres2016,’ in the ANES 2020 data
In Figure [11\.2](c11-missing-data.html#fig:missing-anes-ggmissfct), we can see that if respondents did not vote for president in 2016 or did not answer that question, then they were not asked who they voted for in 2016 (the percentage of missing data is 100%). We can also see in Figure [11\.2](c11-missing-data.html#fig:missing-anes-ggmissfct) that there are more missing data across all questions for respondents who did not provide an answer to `VotedPres2016`.
There are other visualizations that work well with numeric data. For example, in the RECS 2020 data, we can plot two continuous variables and the missing data associated with them to see if there are any patterns in the missingness. To do this, we can use the `bind_shadow()` function from the {naniar} package. This creates a nabular (combination of “na” with “tabular”), which features the original columns followed by the same number of columns with a specific `NA` format. These `NA` columns are indicators of whether the value in the original data is missing or not. The example printed below shows that most values of `HeatingBehavior` are flagged as not missing (`!NA`) in the shadow variable `HeatingBehavior_NA`, while the values that are missing in `HeatingBehavior` are flagged as missing (`NA`) in `HeatingBehavior_NA`.
```
recs_2020_shadow <- recs_2020 %>%
bind_shadow()
ncol(recs_2020)
```
```
## [1] 100
```
```
ncol(recs_2020_shadow)
```
```
## [1] 200
```
```
recs_2020_shadow %>%
count(HeatingBehavior, HeatingBehavior_NA)
```
```
## # A tibble: 7 × 3
## HeatingBehavior HeatingBehavior_NA n
## <fct> <fct> <int>
## 1 Set one temp and leave it !NA 7806
## 2 Manually adjust at night/no one home !NA 4654
## 3 Programmable or smart thermostat automatical… !NA 3310
## 4 Turn on or off as needed !NA 1491
## 5 No control !NA 438
## 6 Other !NA 46
## 7 <NA> NA 751
```
We can then use these new variables to plot the missing data alongside the actual data. For example, let’s plot a histogram of the total energy cost grouped by whether heating behavior is missing (see Figure [11\.3](c11-missing-data.html#fig:missing-recs-hist)).
```
recs_2020_shadow %>%
filter(TOTALDOL < 5000) %>%
ggplot(aes(x = TOTALDOL, fill = HeatingBehavior_NA)) +
geom_histogram() +
scale_fill_manual(
values = book_colors[c(3, 1)],
labels = c("Present", "Missing"),
name = "Heating Behavior"
) +
theme_minimal() +
xlab("Total Energy Cost (Truncated at $5000)") +
ylab("Number of Households")
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
FIGURE 11\.3: Histogram of energy cost by heating behavior missing data
Figure [11\.3](c11-missing-data.html#fig:missing-recs-hist) indicates that respondents who did not provide a response for the heating behavior question may have a different distribution of total energy cost compared to respondents who did provide a response. This view of the raw data and missingness could indicate some bias in the data. Researchers take these potential sources of bias into account when calculating weights, and we need to make sure that we incorporate the weights when analyzing the data.
There are many other visualizations that can be helpful in reviewing the data, and we recommend reviewing the {naniar} documentation for more information ([Tierney and Cook 2023](#ref-naniar2023)).
11\.4 Analysis with missing data
--------------------------------
Once we understand the types of missingness, we can begin the analysis of the data. Different missingness types may be handled in different ways. In most publicly available datasets, researchers have already calculated weights and imputed missing values if necessary. Often, there are imputation flags included in the data that indicate if each value in a given variable is imputed. For example, in the RECS data we may see a logical variable of `ZWinterTempNight`, where a value of `TRUE` means that the value of `WinterTempNight` for that respondent was imputed, and `FALSE` means that it was not imputed. We may use these imputation flags if we are interested in examining the nonresponse rates in the original data. For those interested in learning more about how to calculate weights and impute data for different missing data mechanisms, we recommend Kim and Shao ([2021](#ref-Kim2021)) and Valliant and Dever ([2018](#ref-Valliant2018weights)).
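For instance, if the imputation flag described above is present in the file, a quick unweighted tabulation shows how often the value was imputed (a sketch; the flag name follows the example above and may be named differently in other files):
```
recs_2020 %>%
  count(ZWinterTempNight) %>%
  mutate(prop = n / sum(n))
```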
Even with weights and imputation, missing data are most likely still present and need to be accounted for in analysis. This section provides an overview on how to recode missing data in R, and how to account for skip patterns in analysis.
### 11\.4\.1 Recoding missing data
Even within a variable, there can be different reasons for missing data. In publicly released data, negative values are often used to encode different reasons for missingness. For example, the ANES 2020 data use the following negative values to represent different types of missing data:
* –9: Refused
* –8: Don’t Know
* –7: No post\-election data, deleted due to incomplete interview
* –6: No post\-election interview
* –5: Interview breakoff (sufficient partial IW)
* –4: Technical error
* –3: Restricted
* –2: Other missing reason (question specific)
* –1: Inapplicable
When we created the derived variables for use in this book, we coded all negative values as `NA` and proceeded to analyze the data. For most cases, this is an appropriate approach as long as we filter the data appropriately to account for skip patterns (see Section [11\.4\.2](c11-missing-data.html#missing-skip-patt)). However, the {naniar} package does have the option to code special missing values. For example, if we want two kinds of `NA` values, one indicating that the question was missing by design (e.g., due to skip patterns) and one for the other missing categories, we can use the `nabular` format to incorporate these with the `recode_shadow()` function.
```
anes_2020_shadow <- anes_2020 %>%
select(starts_with("V2")) %>%
mutate(across(everything(), ~ case_when(
.x < -1 ~ NA,
TRUE ~ .x
))) %>%
bind_shadow() %>%
recode_shadow(V201103 = .where(V201103 == -1 ~ "skip"))
anes_2020_shadow %>%
count(V201103, V201103_NA)
```
```
## # A tibble: 5 × 3
## V201103 V201103_NA n
## <dbl+lbl> <fct> <int>
## 1 -1 [-1. Inapplicable] NA_skip 1643
## 2 1 [1. Hillary Clinton] !NA 2911
## 3 2 [2. Donald Trump] !NA 2466
## 4 5 [5. Other {SPECIFY}] !NA 390
## 5 NA NA 43
```
However, it is important to note that, at the time of publication, there is no easy way to apply `recode_shadow()` to multiple variables at once (e.g., we cannot use the tidyverse feature of `across()`). The example code above only recodes a single variable, so this would have to be done manually or in a loop for all variables of interest.
### 11\.4\.2 Accounting for skip patterns
When questions are skipped by design in a survey, it is meaningful that the data are later missing. For example, the RECS asks people how they control the heat in their home in the winter (`HeatingBehavior`). This question is only asked of those who have heat in their home (`SpaceHeatingUsed`). If no heating equipment is used, the value of `HeatingBehavior` is missing. There are several choices when analyzing these data, which include: (1\) only including those with a valid value of `HeatingBehavior` and specifying the universe as those with heat, or (2\) including those who do not have heat. It is important to specify what population an analysis generalizes to.
Here is an example where we only include those with a valid value of `HeatingBehavior` (choice 1\). Note that we use the design object (`recs_des`) and then filter to those that are not missing on `HeatingBehavior`.
```
heat_cntl_1 <- recs_des %>%
filter(!is.na(HeatingBehavior)) %>%
group_by(HeatingBehavior) %>%
summarize(
p = survey_prop()
)
heat_cntl_1
```
```
## # A tibble: 6 × 3
## HeatingBehavior p p_se
## <fct> <dbl> <dbl>
## 1 Set one temp and leave it 0.430 4.69e-3
## 2 Manually adjust at night/no one home 0.264 4.54e-3
## 3 Programmable or smart thermostat automatically adjust… 0.168 3.12e-3
## 4 Turn on or off as needed 0.102 2.89e-3
## 5 No control 0.0333 1.70e-3
## 6 Other 0.00208 3.59e-4
```
Here is an example where we include those who do not have heat (choice 2\). To help understand what we are looking at, we have included the output to show both variables, `SpaceHeatingUsed` and `HeatingBehavior`.
```
heat_cntl_2 <- recs_des %>%
group_by(interact(SpaceHeatingUsed, HeatingBehavior)) %>%
summarize(
p = survey_prop()
)
heat_cntl_2
```
```
## # A tibble: 7 × 4
## SpaceHeatingUsed HeatingBehavior p p_se
## <lgl> <fct> <dbl> <dbl>
## 1 FALSE <NA> 0.0469 2.07e-3
## 2 TRUE Set one temp and leave it 0.410 4.60e-3
## 3 TRUE Manually adjust at night/no one home 0.251 4.36e-3
## 4 TRUE Programmable or smart thermostat aut… 0.160 2.95e-3
## 5 TRUE Turn on or off as needed 0.0976 2.79e-3
## 6 TRUE No control 0.0317 1.62e-3
## 7 TRUE Other 0.00198 3.41e-4
```
If we ran the first analysis, we would say that 16\.8% of households with heat use a programmable or smart thermostat for heating their home. If we used the results from the second analysis, we would say that 16% of all households use a programmable or smart thermostat for heating their home. The distinction between the two statements is the universe they describe: households with heat versus all households. Skip patterns often change the universe we are talking about and need to be carefully examined.
Filtering to the correct universe is important when handling these types of missing data. The `nabular` we created above can also help with this. If we have `NA_skip` values in the shadow, we can make sure that we filter out all of these values and only include relevant missing values. To do this with survey data, we could first create the `nabular`, then create the design object on that data, and then use the shadow variables to assist with filtering the data. Let’s use the `nabular` we created above for ANES 2020 (`anes_2020_shadow`) to create the design object.
```
anes_adjwgt_shadow <- anes_2020_shadow %>%
mutate(V200010b = V200010b / sum(V200010b) * targetpop)
anes_des_shadow <- anes_adjwgt_shadow %>%
as_survey_design(
weights = V200010b,
strata = V200010d,
ids = V200010c,
nest = TRUE
)
```
Then, we can use this design object to look at the percentage of the population who voted for each candidate in 2016 (`V201103`). First, let’s look at the percentages without removing any cases:
```
pres16_select1 <- anes_des_shadow %>%
group_by(V201103) %>%
summarize(
All_Missing = survey_prop()
)
pres16_select1
```
```
## # A tibble: 5 × 3
## V201103 All_Missing All_Missing_se
## <dbl+lbl> <dbl> <dbl>
## 1 -1 [-1. Inapplicable] 0.324 0.00933
## 2 1 [1. Hillary Clinton] 0.330 0.00728
## 3 2 [2. Donald Trump] 0.299 0.00728
## 4 5 [5. Other {SPECIFY}] 0.0409 0.00230
## 5 NA 0.00627 0.00121
```
Next, we look at the percentages, removing only those missing due to skip patterns (i.e., they did not receive this question).
```
pres16_select2 <- anes_des_shadow %>%
filter(V201103_NA != "NA_skip") %>%
group_by(V201103) %>%
summarize(
No_Skip_Missing = survey_prop()
)
pres16_select2
```
```
## # A tibble: 4 × 3
## V201103 No_Skip_Missing No_Skip_Missing_se
## <dbl+lbl> <dbl> <dbl>
## 1 1 [1. Hillary Clinton] 0.488 0.00870
## 2 2 [2. Donald Trump] 0.443 0.00856
## 3 5 [5. Other {SPECIFY}] 0.0606 0.00330
## 4 NA 0.00928 0.00178
```
Finally, we look at the percentages, removing all missing values both due to skip patterns and due to those who refused to answer the question.
```
pres16_select3 <- anes_des_shadow %>%
filter(V201103_NA == "!NA") %>%
group_by(V201103) %>%
summarize(
No_Missing = survey_prop()
)
pres16_select3
```
```
## # A tibble: 3 × 3
## V201103 No_Missing No_Missing_se
## <dbl+lbl> <dbl> <dbl>
## 1 1 [1. Hillary Clinton] 0.492 0.00875
## 2 2 [2. Donald Trump] 0.447 0.00861
## 3 5 [5. Other {SPECIFY}] 0.0611 0.00332
```
TABLE 11\.1: Percentage of votes by candidate for different missing data inclusions
| Candidate | Including All Missing Data: % | Including All Missing Data: s.e. (%) | Removing Skip Patterns Only: % | Removing Skip Patterns Only: s.e. (%) | Removing All Missing Data: % | Removing All Missing Data: s.e. (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Did Not Vote for President in 2016 | 32\.4 | 0\.9 | NA | NA | NA | NA |
| Hillary Clinton | 33\.0 | 0\.7 | 48\.8 | 0\.9 | 49\.2 | 0\.9 |
| Donald Trump | 29\.9 | 0\.7 | 44\.3 | 0\.9 | 44\.7 | 0\.9 |
| Other Candidate | 4\.1 | 0\.2 | 6\.1 | 0\.3 | 6\.1 | 0\.3 |
| Missing | 0\.6 | 0\.1 | 0\.9 | 0\.2 | NA | NA |
As Table [11\.1](c11-missing-data.html#tab:missing-anes-shadow-tab) shows, the results can vary greatly depending on which type of missing data are removed. If we remove only the skip patterns, the margin between Clinton and Trump is 4\.5 percentage points; but if we include all data, even those who did not vote in 2016, the margin is 3\.1 percentage points. How we handle the different types of missing values is important for interpreting the data.
### Prerequisites
11\.1 Introduction
------------------
Missing data in surveys refer to situations where participants do not provide complete responses to survey questions. Respondents may not have seen a question by design. Or, they may not respond to a question for various other reasons, such as not wanting to answer a particular question, not understanding the question, or simply forgetting to answer. Missing data are important to consider and account for, as they can introduce bias and reduce the representativeness of the data. This chapter provides an overview of the types of missing data, how to assess missing data in surveys, and how to conduct analysis when missing data are present. Understanding this complex topic can help ensure accurate reporting of survey results and provide insight into potential changes to the survey design for the future.
11\.2 Missing data mechanisms
-----------------------------
There are two main categories that missing data typically fall into: missing by design and unintentional missing data. Missing by design is part of the survey plan and can be more easily incorporated into weights and analyses. Unintentional missing data, on the other hand, can lead to bias in survey estimates if not correctly accounted for. Below we provide more information on the types of missing data.
1. Missing by design/questionnaire skip logic: This type of missingness occurs when certain respondents are intentionally directed to skip specific questions based on their previous responses or characteristics. For example, in a survey about employment, if a respondent indicates that they are not employed, they may be directed to skip questions related to their job responsibilities. Additionally, some surveys randomize questions or modules so that not all participants respond to all questions. In these instances, respondents would have missing data for the modules not randomly assigned to them.
2. Unintentional missing data: This type of missingness occurs when researchers do not intend for there to be missing data on a particular question, for example, if respondents did not finish the survey or refused to answer individual questions. There are three main types of unintentional missing data that each should be considered and handled differently ([Mack, Su, and Westreich 2018](#ref-mack); [Schafer and Graham 2002](#ref-Schafer2002)):
1. Missing completely at random (MCAR): The missing data are unrelated to both observed and unobserved data, and the probability of being missing is the same across all cases. For example, if a respondent missed a question because they had to leave the survey early due to an emergency.
2. Missing at random (MAR): The missing data are related to observed data but not unobserved data, and the probability of being missing is the same within groups. For example, we know the respondents’ ages and older respondents choose not to answer specific questions but younger respondents do answer them.
3. Missing not at random (MNAR): The missing data are related to unobserved data, and the probability of being missing varies for reasons we are not measuring. For example, if respondents with depression do not answer a question about depression severity.
11\.3 Assessing missing data
----------------------------
Before beginning an analysis, we should explore the data to determine if there is missing data and what types of missing data are present. Conducting descriptive analysis can help with the analysis and reporting of survey data and can inform the survey design in future studies. For example, large amounts of unexpected missing data may indicate the questions were unclear or difficult to recall. There are several ways to explore missing data, which we walk through below. When assessing the missing data, we recommend using a data.frame object and not the survey object, as most of the analysis is about patterns of records, and weights are not necessary.
### 11\.3\.1 Summarize data
A very rudimentary first exploration is to use the `summary()` function to summarize the data, which illuminates `NA` values in the data. Let’s look at a few analytic variables on the ANES 2020 data using `summary()`:
```
anes_2020 %>%
select(V202051:EarlyVote2020) %>%
summary()
```
```
## V202051 Income7 Income
## Min. :-9.000 $125k or more:1468 Under $9,999 : 647
## 1st Qu.:-1.000 Under $20k :1076 $50,000-59,999 : 485
## Median :-1.000 $20k to < 40k:1051 $100,000-109,999: 451
## Mean :-0.726 $40k to < 60k: 984 $250,000 or more: 405
## 3rd Qu.:-1.000 $60k to < 80k: 920 $80,000-89,999 : 383
## Max. : 3.000 (Other) :1437 (Other) :4565
## NA's : 517 NA's : 517
## V201617x V201616 V201615 V201613 V201611
## Min. :-9.0 Min. :-3 Min. :-3 Min. :-3 Min. :-3
## 1st Qu.: 4.0 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3
## Median :11.0 Median :-3 Median :-3 Median :-3 Median :-3
## Mean :10.4 Mean :-3 Mean :-3 Mean :-3 Mean :-3
## 3rd Qu.:17.0 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3
## Max. :22.0 Max. :-3 Max. :-3 Max. :-3 Max. :-3
##
## V201610 V201607 Gender V201600
## Min. :-3 Min. :-3 Male :3375 Min. :-9.00
## 1st Qu.:-3 1st Qu.:-3 Female:4027 1st Qu.: 1.00
## Median :-3 Median :-3 NA's : 51 Median : 2.00
## Mean :-3 Mean :-3 Mean : 1.47
## 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.: 2.00
## Max. :-3 Max. :-3 Max. : 2.00
##
## RaceEth V201549x V201547z V201547e
## White :5420 Min. :-9.0 Min. :-3 Min. :-3
## Black : 650 1st Qu.: 1.0 1st Qu.:-3 1st Qu.:-3
## Hispanic : 662 Median : 1.0 Median :-3 Median :-3
## Asian, NH/PI : 248 Mean : 1.5 Mean :-3 Mean :-3
## AI/AN : 155 3rd Qu.: 2.0 3rd Qu.:-3 3rd Qu.:-3
## Other/multiple race: 237 Max. : 6.0 Max. :-3 Max. :-3
## NA's : 81
## V201547d V201547c V201547b V201547a V201546
## Min. :-3 Min. :-3 Min. :-3 Min. :-3 Min. :-9.00
## 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3 1st Qu.:-3 1st Qu.: 2.00
## Median :-3 Median :-3 Median :-3 Median :-3 Median : 2.00
## Mean :-3 Mean :-3 Mean :-3 Mean :-3 Mean : 1.84
## 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.:-3 3rd Qu.: 2.00
## Max. :-3 Max. :-3 Max. :-3 Max. :-3 Max. : 2.00
##
## Education V201510 AgeGroup Age
## Less than HS: 312 Min. :-9.00 18-29 : 871 Min. :18.0
## High school :1160 1st Qu.: 3.00 30-39 :1241 1st Qu.:37.0
## Post HS :2514 Median : 5.00 40-49 :1081 Median :53.0
## Bachelor's :1877 Mean : 5.62 50-59 :1200 Mean :51.8
## Graduate :1474 3rd Qu.: 6.00 60-69 :1436 3rd Qu.:66.0
## NA's : 116 Max. :95.00 70 or older:1330 Max. :80.0
## NA's : 294 NA's :294
## V201507x TrustPeople V201237
## Min. :-9.0 Always : 48 Min. :-9.00
## 1st Qu.:35.0 Most of the time :3511 1st Qu.: 2.00
## Median :51.0 About half the time:2020 Median : 3.00
## Mean :49.4 Some of the time :1597 Mean : 2.78
## 3rd Qu.:66.0 Never : 264 3rd Qu.: 3.00
## Max. :80.0 NA's : 13 Max. : 5.00
##
## TrustGovernment V201233
## Always : 80 Min. :-9.00
## Most of the time :1016 1st Qu.: 3.00
## About half the time:2313 Median : 4.00
## Some of the time :3313 Mean : 3.43
## Never : 702 3rd Qu.: 4.00
## NA's : 29 Max. : 5.00
##
## PartyID V201231x V201230
## Strong democrat :1796 Min. :-9.00 Min. :-9.000
## Strong republican :1545 1st Qu.: 2.00 1st Qu.:-1.000
## Independent-democrat : 881 Median : 4.00 Median :-1.000
## Independent : 876 Mean : 3.83 Mean : 0.013
## Not very strong democrat: 790 3rd Qu.: 6.00 3rd Qu.: 1.000
## (Other) :1540 Max. : 7.00 Max. : 3.000
## NA's : 25
## V201229 V201228 VotedPres2016_selection
## Min. :-9.000 Min. :-9.00 Clinton:2911
## 1st Qu.:-1.000 1st Qu.: 1.00 Trump :2466
## Median : 1.000 Median : 2.00 Other : 390
## Mean : 0.515 Mean : 1.99 NA's :1686
## 3rd Qu.: 1.000 3rd Qu.: 3.00
## Max. : 2.000 Max. : 5.00
##
## V201103 VotedPres2016 V201102 V201101
## Min. :-9.00 Yes :5810 Min. :-9.000 Min. :-9.000
## 1st Qu.: 1.00 No :1622 1st Qu.:-1.000 1st Qu.:-1.000
## Median : 1.00 NA's: 21 Median : 1.000 Median :-1.000
## Mean : 1.04 Mean : 0.105 Mean : 0.085
## 3rd Qu.: 2.00 3rd Qu.: 1.000 3rd Qu.: 1.000
## Max. : 5.00 Max. : 2.000 Max. : 2.000
##
## V201029 V201028 V201025x V201024
## Min. :-9.000 Min. :-9.0 Min. :-4.00 Min. :-9.00
## 1st Qu.:-1.000 1st Qu.:-1.0 1st Qu.: 3.00 1st Qu.:-1.00
## Median :-1.000 Median :-1.0 Median : 3.00 Median :-1.00
## Mean :-0.897 Mean :-0.9 Mean : 2.92 Mean :-0.86
## 3rd Qu.:-1.000 3rd Qu.:-1.0 3rd Qu.: 3.00 3rd Qu.:-1.00
## Max. :12.000 Max. : 2.0 Max. : 4.00 Max. : 4.00
##
## EarlyVote2020
## Yes : 375
## No : 115
## NA's:6963
##
##
##
##
```
We see that there are `NA` values in several of the derived variables (those not beginning with “V”) and negative values in the original variables (those beginning with “V”). We can also use the `count()` function to get an understanding of the different types of missing data on the original variables. For example, let’s look at the count of data for `V202072`, which corresponds to our `VotedPres2020` variable.
```
anes_2020 %>%
count(VotedPres2020, V202072)
```
```
## # A tibble: 7 × 3
## VotedPres2020 V202072 n
## <fct> <dbl+lbl> <int>
## 1 Yes -1 [-1. Inapplicable] 361
## 2 Yes 1 [1. Yes, voted for President] 5952
## 3 No -1 [-1. Inapplicable] 10
## 4 No 2 [2. No, didn't vote for President] 77
## 5 <NA> -9 [-9. Refused] 2
## 6 <NA> -6 [-6. No post-election interview] 4
## 7 <NA> -1 [-1. Inapplicable] 1047
```
Here, we can see that there are three types of missing data, and the majority of them fall under the “Inapplicable” category. This is usually a term associated with data missing due to skip patterns and is considered to be missing data by design. Based on the documentation from ANES ([DeBell 2010](#ref-debell)), we can see that this question was only asked to respondents who voted in the election.
### 11\.3\.2 Visualization of missing data
It can be challenging to look at tables for every variable and instead may be more efficient to view missing data in a graphical format to help narrow in on patterns or unique variables. The {naniar} package is very useful in exploring missing data visually. We can use the `vis_miss()` function available in both {visdat} and {naniar} packages to view the amount of missing data by variable (see Figure [11\.1](c11-missing-data.html#fig:missing-anes-vismiss)) ([Tierney 2017](#ref-visdattierney); [Tierney and Cook 2023](#ref-naniar2023)).
```
anes_2020_derived <- anes_2020 %>%
select(
-starts_with("V2"), -CaseID, -InterviewMode,
-Weight, -Stratum, -VarUnit
)
anes_2020_derived %>%
vis_miss(cluster = TRUE, show_perc = FALSE) +
scale_fill_manual(
values = book_colors[c(3, 1)],
labels = c("Present", "Missing"),
name = ""
) +
theme(
plot.margin = margin(5.5, 30, 5.5, 5.5, "pt"),
axis.text.x = element_text(angle = 70)
)
```
FIGURE 11\.1: Visual depiction of missing data in the ANES 2020 data
From the visualization in Figure [11\.1](c11-missing-data.html#fig:missing-anes-vismiss), we can start to get a picture of what questions may be connected in terms of missing data. Even if we did not have the informative variable names, we could deduce that `VotedPres2020`, `VotedPres2020_selection`, and `EarlyVote2020` are likely connected since their missing data patterns are similar.
We can also look at `VotedPres2016_selection` and see that there are a lot of missing data in that variable, likely due to a skip pattern; other graphics can show how this missingness relates to other variables. The {naniar} package has multiple visualization functions that can help dive deeper, such as `gg_miss_fct()`, which looks at missing data for all variables by levels of another variable (see Figure [11\.2](c11-missing-data.html#fig:missing-anes-ggmissfct)).
```
anes_2020_derived %>%
gg_miss_fct(VotedPres2016) +
scale_fill_gradientn(
guide = "colorbar",
name = "% Miss",
colors = book_colors[c(3, 2, 1)]
) +
ylab("Variable") +
xlab("Voted for President in 2016")
```
FIGURE 11\.2: Missingness in variables for each level of ‘VotedPres2016’ in the ANES 2020 data
In Figure [11\.2](c11-missing-data.html#fig:missing-anes-ggmissfct), we can see that if respondents did not vote for president in 2016 or did not answer that question, then they were not asked about who they voted for in 2016 (the percentage of missing data is 100%). Additionally, we can see with Figure [11\.2](c11-missing-data.html#fig:missing-anes-ggmissfct) that there are more missing data across all questions if they did not provide an answer to `VotedPres2016`.
There are other visualizations that work well with numeric data. For example, in the RECS 2020 data, we can plot two continuous variables and the missing data associated with them to see if there are any patterns in the missingness. To do this, we can use the `bind_shadow()` function from the {naniar} package. This creates a nabular (a combination of “na” and “tabular”), which contains the original columns followed by the same number of shadow columns that indicate whether the value in the original data is missing. The example printed below shows that rows with an observed value of `HeatingBehavior` are flagged as not missing (`!NA`) in the shadow variable `HeatingBehavior_NA`, while rows missing `HeatingBehavior` are flagged as missing (`NA`).
```
recs_2020_shadow <- recs_2020 %>%
bind_shadow()
ncol(recs_2020)
```
```
## [1] 100
```
```
ncol(recs_2020_shadow)
```
```
## [1] 200
```
```
recs_2020_shadow %>%
count(HeatingBehavior, HeatingBehavior_NA)
```
```
## # A tibble: 7 × 3
## HeatingBehavior HeatingBehavior_NA n
## <fct> <fct> <int>
## 1 Set one temp and leave it !NA 7806
## 2 Manually adjust at night/no one home !NA 4654
## 3 Programmable or smart thermostat automatical… !NA 3310
## 4 Turn on or off as needed !NA 1491
## 5 No control !NA 438
## 6 Other !NA 46
## 7 <NA> NA 751
```
We can then use these new variables to plot the missing data alongside the actual data. For example, let’s plot a histogram of the total energy cost, grouped by whether the heating behavior question is missing (see Figure [11\.3](c11-missing-data.html#fig:missing-recs-hist)).
```
recs_2020_shadow %>%
filter(TOTALDOL < 5000) %>%
ggplot(aes(x = TOTALDOL, fill = HeatingBehavior_NA)) +
geom_histogram() +
scale_fill_manual(
values = book_colors[c(3, 1)],
labels = c("Present", "Missing"),
name = "Heating Behavior"
) +
theme_minimal() +
xlab("Total Energy Cost (Truncated at $5000)") +
ylab("Number of Households")
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
FIGURE 11\.3: Histogram of energy cost by heating behavior missing data
Figure [11\.3](c11-missing-data.html#fig:missing-recs-hist) indicates that respondents who did not provide a response for the heating behavior question may have a different distribution of total energy cost compared to respondents who did provide a response. This view of the raw data and missingness could indicate some bias in the data. Researchers take these different bias aspects into account when calculating weights, and we need to make sure that we incorporate the weights when analyzing the data.
There are many other visualizations that can be helpful in reviewing the data, and we recommend reviewing the {naniar} documentation for more information ([Tierney and Cook 2023](#ref-naniar2023)).
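As a quick, hedged illustration (not part of the original analysis), two other {naniar} summaries that can be useful here are `gg_miss_var()`, which counts missing values per variable, and `gg_miss_upset()`, which shows which variables tend to be missing together:
```
# Hedged sketch of two additional {naniar} summaries on the derived variables.
# gg_miss_var() plots the number of missing values per variable;
# gg_miss_upset() plots common combinations of co-missing variables
# (it relies on the UpSetR package being installed).
anes_2020_derived %>%
  gg_miss_var()

anes_2020_derived %>%
  gg_miss_upset()
```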
11\.4 Analysis with missing data
--------------------------------
Once we understand the types of missingness, we can begin the analysis of the data. Different missingness types may be handled in different ways. In most publicly available datasets, researchers have already calculated weights and imputed missing values if necessary. Often, there are imputation flags included in the data that indicate if each value in a given variable is imputed. For example, in the RECS data we may see a logical variable of `ZWinterTempNight`, where a value of `TRUE` means that the value of `WinterTempNight` for that respondent was imputed, and `FALSE` means that it was not imputed. We may use these imputation flags if we are interested in examining the nonresponse rates in the original data. For those interested in learning more about how to calculate weights and impute data for different missing data mechanisms, we recommend Kim and Shao ([2021](#ref-Kim2021)) and Valliant and Dever ([2018](#ref-Valliant2018weights)).
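As a hedged sketch (assuming the imputation flag is included in the RECS extract we are working with), an imputation flag can be summarized directly to estimate the item nonresponse rate before imputation:
```
# Hedged sketch: summarizing an imputation flag, assuming recs_2020 contains a
# logical ZWinterTempNight flag (TRUE = WinterTempNight was imputed).
recs_2020 %>%
  summarize(
    n_imputed = sum(ZWinterTempNight, na.rm = TRUE),
    prop_imputed = mean(ZWinterTempNight, na.rm = TRUE)
  )
```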
Even with weights and imputation, missing data are most likely still present and need to be accounted for in analysis. This section provides an overview of how to recode missing data in R and how to account for skip patterns in analysis.
### 11\.4\.1 Recoding missing data
Even within a variable, there can be different reasons for missing data. In publicly released data, negative values are often used to convey different meanings. For example, the ANES 2020 data use the following negative values to represent different types of missing data:
* –9: Refused
* –8: Don’t Know
* –7: No post\-election data, deleted due to incomplete interview
* –6: No post\-election interview
* –5: Interview breakoff (sufficient partial IW)
* –4: Technical error
* –3: Restricted
* –2: Other missing reason (question specific)
* –1: Inapplicable
When we created the derived variables for use in this book, we coded all negative values as `NA` and proceeded to analyze the data. For most cases, this is an appropriate approach as long as we filter the data appropriately to account for skip patterns (see Section [11\.4\.2](c11-missing-data.html#missing-skip-patt)). However, the {naniar} package does have the option to code special missing values. For example, if we wanted to have two `NA` values, one that indicated the question was missing by design (e.g., due to skip patterns) and one for the other missing categories, we can use the `nabular` format to incorporate these with the `recode_shadow()` function.
```
anes_2020_shadow <- anes_2020 %>%
select(starts_with("V2")) %>%
mutate(across(everything(), ~ case_when(
.x < -1 ~ NA,
TRUE ~ .x
))) %>%
bind_shadow() %>%
recode_shadow(V201103 = .where(V201103 == -1 ~ "skip"))
anes_2020_shadow %>%
count(V201103, V201103_NA)
```
```
## # A tibble: 5 × 3
## V201103 V201103_NA n
## <dbl+lbl> <fct> <int>
## 1 -1 [-1. Inapplicable] NA_skip 1643
## 2 1 [1. Hillary Clinton] !NA 2911
## 3 2 [2. Donald Trump] !NA 2466
## 4 5 [5. Other {SPECIFY}] !NA 390
## 5 NA NA 43
```
However, it is important to note that, at the time of publication, there is no easy way to apply `recode_shadow()` to multiple variables at once (e.g., we cannot use the tidyverse feature of `across()`). The example code above implements this for only a single variable, so it would have to be done manually or in a loop for all variables of interest.
### 11\.4\.2 Accounting for skip patterns
When questions are skipped by design in a survey, it is meaningful that the data are later missing. For example, the RECS asks people how they control the heat in their home in the winter (`HeatingBehavior`), but only among those who have heat in their home (`SpaceHeatingUsed`). If no heating equipment is used, the value of `HeatingBehavior` is missing. There are two main choices when analyzing these data: (1\) include only those with a valid value of `HeatingBehavior` and specify the universe as households with heat, or (2\) also include those who do not have heat. It is important to specify what population an analysis generalizes to.
Here is an example where we only include those with a valid value of `HeatingBehavior` (choice 1\). Note that we use the design object (`recs_des`) and then filter to those that are not missing on `HeatingBehavior`.
```
heat_cntl_1 <- recs_des %>%
filter(!is.na(HeatingBehavior)) %>%
group_by(HeatingBehavior) %>%
summarize(
p = survey_prop()
)
heat_cntl_1
```
```
## # A tibble: 6 × 3
## HeatingBehavior p p_se
## <fct> <dbl> <dbl>
## 1 Set one temp and leave it 0.430 4.69e-3
## 2 Manually adjust at night/no one home 0.264 4.54e-3
## 3 Programmable or smart thermostat automatically adjust… 0.168 3.12e-3
## 4 Turn on or off as needed 0.102 2.89e-3
## 5 No control 0.0333 1.70e-3
## 6 Other 0.00208 3.59e-4
```
Here is an example where we include those who do not have heat (choice 2\). To help understand what we are looking at, we have included the output to show both variables, `SpaceHeatingUsed` and `HeatingBehavior`.
```
heat_cntl_2 <- recs_des %>%
group_by(interact(SpaceHeatingUsed, HeatingBehavior)) %>%
summarize(
p = survey_prop()
)
heat_cntl_2
```
```
## # A tibble: 7 × 4
## SpaceHeatingUsed HeatingBehavior p p_se
## <lgl> <fct> <dbl> <dbl>
## 1 FALSE <NA> 0.0469 2.07e-3
## 2 TRUE Set one temp and leave it 0.410 4.60e-3
## 3 TRUE Manually adjust at night/no one home 0.251 4.36e-3
## 4 TRUE Programmable or smart thermostat aut… 0.160 2.95e-3
## 5 TRUE Turn on or off as needed 0.0976 2.79e-3
## 6 TRUE No control 0.0317 1.62e-3
## 7 TRUE Other 0.00198 3.41e-4
```
If we ran the first analysis, we would say that 16\.8% of households with heat use a programmable or smart thermostat for heating their home. If we used the results from the second analysis, we would say that 16% of all households use a programmable or smart thermostat for heating their home. The distinction between the two statements is the universe they describe: households with heat versus all households. Skip patterns often change the universe we are talking about and need to be carefully examined.
Filtering to the correct universe is important when handling these types of missing data. The `nabular` we created above can also help with this. If we have `NA_skip` values in the shadow, we can make sure that we filter out all of these values and only include relevant missing values. To do this with survey data, we could first create the `nabular`, then create the design object on that data, and then use the shadow variables to assist with filtering the data. Let’s use the `nabular` we created above for ANES 2020 (`anes_2020_shadow`) to create the design object.
```
anes_adjwgt_shadow <- anes_2020_shadow %>%
mutate(V200010b = V200010b / sum(V200010b) * targetpop)
anes_des_shadow <- anes_adjwgt_shadow %>%
as_survey_design(
weights = V200010b,
strata = V200010d,
ids = V200010c,
nest = TRUE
)
```
Then, we can use this design object to look at the percentage of the population who voted for each candidate in 2016 (`V201103`). First, let’s look at the percentages without removing any cases:
```
pres16_select1 <- anes_des_shadow %>%
group_by(V201103) %>%
summarize(
All_Missing = survey_prop()
)
pres16_select1
```
```
## # A tibble: 5 × 3
## V201103 All_Missing All_Missing_se
## <dbl+lbl> <dbl> <dbl>
## 1 -1 [-1. Inapplicable] 0.324 0.00933
## 2 1 [1. Hillary Clinton] 0.330 0.00728
## 3 2 [2. Donald Trump] 0.299 0.00728
## 4 5 [5. Other {SPECIFY}] 0.0409 0.00230
## 5 NA 0.00627 0.00121
```
Next, we look at the percentages, removing only those missing due to skip patterns (i.e., they did not receive this question).
```
pres16_select2 <- anes_des_shadow %>%
filter(V201103_NA != "NA_skip") %>%
group_by(V201103) %>%
summarize(
No_Skip_Missing = survey_prop()
)
pres16_select2
```
```
## # A tibble: 4 × 3
## V201103 No_Skip_Missing No_Skip_Missing_se
## <dbl+lbl> <dbl> <dbl>
## 1 1 [1. Hillary Clinton] 0.488 0.00870
## 2 2 [2. Donald Trump] 0.443 0.00856
## 3 5 [5. Other {SPECIFY}] 0.0606 0.00330
## 4 NA 0.00928 0.00178
```
Finally, we look at the percentages, removing all missing values both due to skip patterns and due to those who refused to answer the question.
```
pres16_select3 <- anes_des_shadow %>%
filter(V201103_NA == "!NA") %>%
group_by(V201103) %>%
summarize(
No_Missing = survey_prop()
)
pres16_select3
```
```
## # A tibble: 3 × 3
## V201103 No_Missing No_Missing_se
## <dbl+lbl> <dbl> <dbl>
## 1 1 [1. Hillary Clinton] 0.492 0.00875
## 2 2 [2. Donald Trump] 0.447 0.00861
## 3 5 [5. Other {SPECIFY}] 0.0611 0.00332
```
TABLE 11\.1: Percentage of votes by candidate for different missing data inclusions
| Candidate | Including All Missing Data: % | s.e. (%) | Removing Skip Patterns Only: % | s.e. (%) | Removing All Missing Data: % | s.e. (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Did Not Vote for President in 2016 | 32\.4 | 0\.9 | NA | NA | NA | NA |
| Hillary Clinton | 33\.0 | 0\.7 | 48\.8 | 0\.9 | 49\.2 | 0\.9 |
| Donald Trump | 29\.9 | 0\.7 | 44\.3 | 0\.9 | 44\.7 | 0\.9 |
| Other Candidate | 4\.1 | 0\.2 | 6\.1 | 0\.3 | 6\.1 | 0\.3 |
| Missing | 0\.6 | 0\.1 | 0\.9 | 0\.2 | NA | NA |
As Table [11\.1](c11-missing-data.html#tab:missing-anes-shadow-tab) shows, the results can vary greatly depending on which type of missing data are removed. If we remove only the skip patterns, the margin between Clinton and Trump is 4\.5 percentage points; but if we include all data, even those who did not vote in 2016, the margin is 3\.1 percentage points. How we handle the different types of missing values is important for interpreting the data.
Chapter 12 Successful survey analysis recommendations
=====================================================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
```
To illustrate the importance of data visualization, we discuss Anscombe’s Quartet. The dataset can be replicated by running the code below:
```
anscombe_tidy <- anscombe %>%
mutate(obs = row_number()) %>%
pivot_longer(-obs, names_to = "key", values_to = "value") %>%
separate(key, c("variable", "set"), 1, convert = TRUE) %>%
mutate(set = c("I", "II", "III", "IV")[set]) %>%
pivot_wider(names_from = variable, values_from = value)
```
We create an example survey dataset to explain potential pitfalls and how to overcome them in survey analysis. To recreate the dataset, run the code below:
```
example_srvy <- tribble(
~id, ~region, ~q_d1, ~q_d2_1, ~gender, ~weight,
1L, 1L, 1L, "Somewhat interested", "female", 1740,
2L, 1L, 1L, "Not at all interested", "female", 1428,
3L, 2L, NA, "Somewhat interested", "female", 496,
4L, 2L, 1L, "Not at all interested", "female", 550,
5L, 3L, 1L, "Somewhat interested", "female", 1762,
6L, 4L, NA, "Very interested", "female", 1004,
7L, 4L, NA, "Somewhat interested", "female", 522,
8L, 3L, 2L, "Not at all interested", "female", 1099,
9L, 4L, 2L, "Somewhat interested", "female", 1295,
10L, 2L, 2L, "Somewhat interested", "male", 983
)
example_des <-
example_srvy %>%
as_survey_design(weights = weight)
```
12\.1 Introduction
------------------
The previous chapters in this book aimed to provide the technical skills and knowledge required for running survey analyses. This chapter builds upon the previously mentioned best practices to present a curated set of recommendations for running a successful survey analysis. We hope this list provides practical insights that assist in producing meaningful and reliable results.
12\.2 Follow the survey analysis process
----------------------------------------
As we first introduced in Chapter [4](c04-getting-started.html#c04-getting-started), there are four main steps to successfully analyze survey data:
1. Create a `tbl_svy` object (a survey object) using: `as_survey_design()` or `as_survey_rep()`
2. Subset data (if needed) using `filter()` (to create subpopulations)
3. Specify domains of analysis using `group_by()`
4. Within `summarize()`, specify variables to calculate, including means, totals, proportions, quantiles, and more
The order of these steps matters in survey analysis. For example, if we need to subset the data, we must use `filter()` on our data after creating the survey design. If we do this before the survey design is created, we may not be correctly accounting for the study design, resulting in inaccurate findings.
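As a minimal sketch of these four steps, using the example data from the Prerequisites box (the filter and grouping variable are chosen purely for illustration):
```
# Minimal sketch of the four-step process on the example data; the subset and
# domain choices here are illustrative only.
example_des <- example_srvy %>%
  as_survey_design(weights = weight) # 1. create the tbl_svy object

example_des %>%
  filter(!is.na(q_d1)) %>% # 2. subset to a subpopulation
  group_by(region) %>% # 3. specify domains of analysis
  summarize(q_d1_mean = survey_mean(q_d1)) # 4. calculate estimates
```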
Additionally, correctly identifying the survey design is one of the most important steps in survey analysis. Knowing the type of sample design (e.g., clustered, stratified) helps ensure the underlying error structure is correctly calculated and weights are correctly used. Learning about complex design factors such as clustering, stratification, and weighting is foundational to complex survey analysis, and we recommend that all analysts review Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) before creating their first design object. Reviewing the documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) helps us understand what variables to use from the data.
Making sure to use the survey analysis functions from the {srvyr} and {survey} packages is also important in survey analysis. For example, using `mean()` and `survey_mean()` on the same data results in different findings and outputs. Each of the survey functions from {srvyr} and {survey} impacts standard errors and variance, and we cannot treat complex surveys as unweighted simple random samples if we want to produce unbiased estimates ([Freedman Ellis and Schneider 2024](#ref-R-srvyr); [Lumley 2010](#ref-lumley2010complex)).
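As a small, hedged comparison using the example data, the unweighted `mean()` and the design-based `survey_mean()` of the same variable give different point estimates, and only the latter provides an appropriate standard error:
```
# Hedged sketch: unweighted mean on the raw data versus the design-based
# estimate on the survey object.
example_srvy %>%
  summarize(unweighted_mean = mean(q_d1, na.rm = TRUE))

example_des %>%
  summarize(weighted_mean = survey_mean(q_d1, na.rm = TRUE))
```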
12\.3 Begin with descriptive analysis
-------------------------------------
When receiving a fresh batch of data, it is tempting to jump right into running models to find significant results. However, a successful data analyst begins by exploring the dataset. Chapter [11](c11-missing-data.html#c11-missing-data) talks about the importance of reviewing data when examining missing data patterns. In this chapter, we illustrate the value of reviewing all types of data. This involves running descriptive analysis on the dataset as a whole, as well as individual variables and combinations of variables. As described in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis), descriptive analyses should always precede statistical analysis to prevent avoidable (and potentially embarrassing) mistakes.
### 12\.3\.1 Table review
Even before applying weights, consider running cross\-tabulations on the raw data. Cross\-tabs can help us see whether any patterns stand out that are alarming or worth further investigation.
For example, let’s explore the example survey dataset introduced in the Prerequisites box, `example_srvy`. We run the code below on the unweighted data to inspect the `gender` variable:
```
example_srvy %>%
group_by(gender) %>%
summarize(n = n())
```
```
## # A tibble: 2 × 2
## gender n
## <chr> <int>
## 1 female 9
## 2 male 1
```
The data show that females comprise 9 out of 10, or 90%, of the sample. Generally, we assume something close to a 50/50 split between male and female respondents in a population. The sizable female proportion could indicate either a unique sample or a potential error in the data. If we review the survey documentation and see this was a deliberate part of the design, we can continue our analysis using the appropriate methods. If this was not an intentional choice by the researchers, the results alert us that something may be incorrect in the data or our code, and we can verify if there’s an issue by comparing the results with the weighted means.
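For example, a hedged sketch of that check on the example data compares the unweighted split above with the weighted estimate:
```
# Hedged sketch: weighted gender distribution to compare against the
# unweighted counts above.
example_des %>%
  group_by(gender) %>%
  summarize(p = survey_prop())
```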
### 12\.3\.2 Graphical review
Tables provide a quick check of our assumptions, but there is no substitute for graphs and plots to visualize the distribution of data. We might miss outliers or nuances if we scan only summary statistics.
For example, Anscombe’s Quartet demonstrates the importance of visualization in analysis. Let’s say we have a dataset with x\- and y\-variables in an object called `anscombe_tidy`. Let’s take a look at how the dataset is structured:
```
head(anscombe_tidy)
```
```
## # A tibble: 6 × 4
## obs set x y
## <int> <chr> <dbl> <dbl>
## 1 1 I 10 8.04
## 2 1 II 10 9.14
## 3 1 III 10 7.46
## 4 1 IV 8 6.58
## 5 2 I 8 6.95
## 6 2 II 8 8.14
```
We can begin by checking one set of variables. For Set I, the x\-variable has an average of 9 with a standard deviation of 3\.3; for y, we have an average of 7\.5 with a standard deviation of 2\.03\. The two variables have a correlation of 0\.81\.
```
anscombe_tidy %>%
filter(set == "I") %>%
summarize(
x_mean = mean(x),
x_sd = sd(x),
y_mean = mean(y),
y_sd = sd(y),
correlation = cor(x, y)
)
```
```
## # A tibble: 1 × 5
## x_mean x_sd y_mean y_sd correlation
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 9 3.32 7.50 2.03 0.816
```
These are useful statistics. We can note that the data do not have high variability, and the two variables are strongly correlated. Now, let’s check all the sets (I\-IV) in the Anscombe data. Notice anything interesting?
```
anscombe_tidy %>%
group_by(set) %>%
summarize(
x_mean = mean(x),
x_sd = sd(x, na.rm = TRUE),
y_mean = mean(y),
y_sd = sd(y, na.rm = TRUE),
correlation = cor(x, y)
)
```
```
## # A tibble: 4 × 6
## set x_mean x_sd y_mean y_sd correlation
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 I 9 3.32 7.50 2.03 0.816
## 2 II 9 3.32 7.50 2.03 0.816
## 3 III 9 3.32 7.5 2.03 0.816
## 4 IV 9 3.32 7.50 2.03 0.817
```
The summary results for these four sets are nearly identical! Based on this, we might assume that each distribution is similar. Let’s look at a graphical visualization to see if our assumption is correct (see Figure [12\.1](c12-recommendations.html#fig:recommendations-anscombe-plot)).
```
ggplot(anscombe_tidy, aes(x, y)) +
geom_point() +
facet_wrap(~set) +
geom_smooth(method = "lm", se = FALSE, alpha = 0.5) +
theme_minimal()
```
FIGURE 12\.1: Plot of Anscombe’s Quartet data and the importance of reviewing data graphically
Although each of the four sets has the same summary statistics and regression line, when reviewing the plots (see Figure [12\.1](c12-recommendations.html#fig:recommendations-anscombe-plot)), it becomes apparent that the distributions of the data are not the same at all. Each set of points results in different shapes and distributions. Imagine sharing each set (I\-IV) and the corresponding plot with a different colleague. The interpretations and descriptions of the data would be very different even though the statistics are similar. Plotting data can also ensure that we are using the correct analysis method on the data, so understanding the underlying distributions is an important first step.
12\.4 Check variable types
--------------------------
When we pull the data from surveys into R, the data may be listed as character, factor, numeric, or logical/Boolean. The tidyverse functions that read in data (e.g., `read_csv()`, `read_excel()`) load all strings as character variables by default. This is important when dealing with survey data, as many strings may be better suited for factors than character variables. For example, let’s revisit the `example_srvy` data. Taking a `glimpse()` of the data gives us insight into what it contains:
```
example_srvy %>%
glimpse()
```
```
## Rows: 10
## Columns: 6
## $ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
## $ region <int> 1, 1, 2, 2, 3, 4, 4, 3, 4, 2
## $ q_d1 <int> 1, 1, NA, 1, 1, NA, NA, 2, 2, 2
## $ q_d2_1 <chr> "Somewhat interested", "Not at all interested", "Somewh…
## $ gender <chr> "female", "female", "female", "female", "female", "fema…
## $ weight <dbl> 1740, 1428, 496, 550, 1762, 1004, 522, 1099, 1295, 983
```
The output shows that `q_d2_1` is a character variable, but the values of that variable show three options (Very interested / Somewhat interested / Not at all interested). In this case, we most likely want to change `q_d2_1` to be a factor variable and order the factor levels to indicate that this is an ordinal variable. Here is some code on how we might approach this task using the {forcats} package ([Wickham 2023](#ref-R-forcats)):
```
example_srvy_fct <- example_srvy %>%
mutate(q_d2_1_fct = factor(
q_d2_1,
levels = c(
"Very interested",
"Somewhat interested",
"Not at all interested"
)
))
example_srvy_fct %>%
glimpse()
```
```
## Rows: 10
## Columns: 7
## $ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
## $ region <int> 1, 1, 2, 2, 3, 4, 4, 3, 4, 2
## $ q_d1 <int> 1, 1, NA, 1, 1, NA, NA, 2, 2, 2
## $ q_d2_1 <chr> "Somewhat interested", "Not at all interested", "So…
## $ gender <chr> "female", "female", "female", "female", "female", "…
## $ weight <dbl> 1740, 1428, 496, 550, 1762, 1004, 522, 1099, 1295, …
## $ q_d2_1_fct <fct> Somewhat interested, Not at all interested, Somewha…
```
```
example_srvy_fct %>%
count(q_d2_1_fct, q_d2_1)
```
```
## # A tibble: 3 × 3
## q_d2_1_fct q_d2_1 n
## <fct> <chr> <int>
## 1 Very interested Very interested 1
## 2 Somewhat interested Somewhat interested 6
## 3 Not at all interested Not at all interested 3
```
This example dataset also includes a column called `region`, which is imported as a number (`<int>`). This is a good reminder to use the questionnaire and codebook along with the data to find out whether the values actually reflect a number or are a coded categorical variable (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more details). R calculates the mean even when it is not appropriate, leading to the common mistake of applying an average to categorical values instead of calculating a proportion. For example, for ease of coding, we may use the `across()` function to calculate the mean across all numeric variables:
```
example_des %>%
select(-weight) %>%
summarize(across(where(is.numeric), ~ survey_mean(.x, na.rm = TRUE)))
```
```
## # A tibble: 1 × 6
## id id_se region region_se q_d1 q_d1_se
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.24 1.12 2.49 0.428 1.38 0.196
```
In this example, if we do not adjust `region` to be a factor variable type, we might accidentally report an average region of 2\.49 in our findings, which is meaningless. Checking that our variables are appropriate avoids this pitfall and ensures the measures and models are suitable for the variable type.
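A hedged sketch of the more appropriate approach converts `region` to a factor and estimates proportions instead (the codebook would supply meaningful labels):
```
# Hedged sketch: treat region as categorical and estimate proportions rather
# than an average of the codes. Labels are left as the original integer codes.
example_des %>%
  mutate(region = factor(region)) %>%
  group_by(region) %>%
  summarize(p = survey_prop())
```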
12\.5 Improve debugging skills
------------------------------
It is common for analysts working in R to come across warning or error messages, and learning how to debug these messages (i.e., find and fix issues) ensures we can proceed with our work and avoid potential mistakes.
We’ve discussed a few examples in this book. For example, if we calculate an average with `survey_mean()` and get `NA` instead of a number, it may be because our column has missing values.
```
example_des %>%
summarize(mean = survey_mean(q_d1))
```
```
## # A tibble: 1 × 2
## mean mean_se
## <dbl> <dbl>
## 1 NA NaN
```
Including the `na.rm = TRUE` argument resolves the issue:
```
example_des %>%
summarize(mean = survey_mean(q_d1, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## mean mean_se
## <dbl> <dbl>
## 1 1.38 0.196
```
Another common error message in survey analysis looks something like the following:
```
example_des %>%
svyttest(q_d1 ~ gender)
```
```
## Error in UseMethod("svymean", design): no applicable method for 'svymean' applied to an object of class "formula"
```
In this case, we need to remember that with functions from the {survey} package like `svyttest()`, the design object is not the first argument, and we have to use the dot (`.`) notation (see Chapter [6](c06-statistical-testing.html#c06-statistical-testing)). Adding the named argument `design = .` fixes this error.
```
example_des %>%
svyttest(q_d1 ~ gender,
design = .
)
```
```
##
## Design-based t-test
##
## data: q_d1 ~ gender
## t = 3.5, df = 5, p-value = 0.02
## alternative hypothesis: true difference in mean is not equal to 0
## 95 percent confidence interval:
## 0.1878 1.2041
## sample estimates:
## difference in mean
## 0.696
```
Often, debugging involves interpreting the message from R. For example, if our code results in this error:
```
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more levels
```
We can see that the error has to do with a function requiring a factor with two or more levels and that it has been applied to something else. This ties back to our section on using appropriate variable types. We can check the variable of interest to examine whether it is the correct type.
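A quick, hedged check on the offending variable might look like the following:
```
# Hedged sketch: confirm the type and the number of distinct levels of the
# variable named in the error message.
class(example_srvy$gender)
n_distinct(example_srvy$gender)
```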
The internet also offers many resources for debugging. Searching for a specific error message can often lead to a solution. In addition, we can post on community forums like [Posit Community](https://forum.posit.co/) for direct help from others.
12\.6 Think critically about conclusions
----------------------------------------
Once we have our findings, we need to learn to think critically about them. As mentioned in Chapter [2](c02-overview-surveys.html#c02-overview-surveys), many aspects of the study design can impact our interpretation of the results, for example, the number and types of response options provided to the respondent or who was asked the question (both thinking about the full sample and any skip patterns). Knowing the overall study design can help us accurately think through what the findings may mean and identify any issues with our analyses. Additionally, we should make sure that our survey design object is correctly defined (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights)), carefully consider how we are managing missing data (see Chapter [11](c11-missing-data.html#c11-missing-data)), and follow statistical analysis procedures such as avoiding model overfitting by using too many variables in our formulas.
These considerations allow us to conduct our analyses and review findings for statistically significant results. It is important to note that even significant results are not necessarily meaningful or important: a large enough sample can produce statistically significant results even when the underlying differences are substantively trivial. Therefore, we want to look at our results in context, such as comparing them with results from other studies or analyzing them in conjunction with confidence intervals and other measures.
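For instance, a hedged sketch of reporting an estimate with its confidence interval (using the example data) keeps the uncertainty visible alongside the point estimate:
```
# Hedged sketch: request a confidence interval in addition to the standard
# error when summarizing.
example_des %>%
  summarize(
    q_d1_mean = survey_mean(q_d1, na.rm = TRUE, vartype = c("se", "ci"))
  )
```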
Communicating the results (see Chapter [8](c08-communicating-results.html#c08-communicating-results)) in an unbiased manner is also a critical step in any analysis project. If we present results without error measures or only present results that support our initial hypotheses, we are not thinking critically and may incorrectly represent the data. As survey data analysts, we often interpret the survey data for the public. We must ensure that we are the best stewards of the data and work to bring light to meaningful and interesting findings that the public wants and needs to know about.
### Prerequisites
12\.1 Introduction
------------------
The previous chapters in this book aimed to provide the technical skills and knowledge required for running survey analyses. This chapter builds upon the previously mentioned best practices to present a curated set of recommendations for running a successful survey analysis. We hope this list provides practical insights that assist in producing meaningful and reliable results.
12\.2 Follow the survey analysis process
----------------------------------------
As we first introduced in Chapter [4](c04-getting-started.html#c04-getting-started), there are four main steps to successfully analyze survey data:
1. Create a `tbl_svy` object (a survey object) using: `as_survey_design()` or `as_survey_rep()`
2. Subset data (if needed) using `filter()` (to create subpopulations)
3. Specify domains of analysis using `group_by()`
4. Within `summarize()`, specify variables to calculate, including means, totals, proportions, quantiles, and more
The order of these steps matters in survey analysis. For example, if we need to subset the data, we must use `filter()` on our data after creating the survey design. If we do this before the survey design is created, we may not be correctly accounting for the study design, resulting in inaccurate findings.
Additionally, correctly identifying the survey design is one of the most important steps in survey analysis. Knowing the type of sample design (e.g., clustered, stratified) helps ensure the underlying error structure is correctly calculated and weights are correctly used. Learning about complex design factors such as clustering, stratification, and weighting is foundational to complex survey analysis, and we recommend that all analysts review Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights) before creating their first design object. Reviewing the documentation (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation)) helps us understand what variables to use from the data.
Making sure to use the survey analysis functions from the {srvyr} and {survey} packages is also important in survey analysis. For example, using `mean()` and `survey_mean()` on the same data results in different findings and outputs. Each of the survey functions from {srvyr} and {survey} impacts standard errors and variance, and we cannot treat complex surveys as unweighted simple random samples if we want to produce unbiased estimates ([Freedman Ellis and Schneider 2024](#ref-R-srvyr); [Lumley 2010](#ref-lumley2010complex)).
12\.3 Begin with descriptive analysis
-------------------------------------
When receiving a fresh batch of data, it is tempting to jump right into running models to find significant results. However, a successful data analyst begins by exploring the dataset. Chapter [11](c11-missing-data.html#c11-missing-data) talks about the importance of reviewing data when examining missing data patterns. In this chapter, we illustrate the value of reviewing all types of data. This involves running descriptive analysis on the dataset as a whole, as well as individual variables and combinations of variables. As described in Chapter [5](c05-descriptive-analysis.html#c05-descriptive-analysis), descriptive analyses should always precede statistical analysis to prevent avoidable (and potentially embarrassing) mistakes.
### 12\.3\.1 Table review
Even before applying weights, consider running cross\-tabulations on the raw data. Cross\-tabs can help us see if any patterns stand out that may be alarming or something worth further investigating.
For example, let’s explore the example survey dataset introduced in the Prerequisites box, `example_srvy`. We run the code below on the unweighted data to inspect the `gender` variable:
```
example_srvy %>%
group_by(gender) %>%
summarize(n = n())
```
```
## # A tibble: 2 × 2
## gender n
## <chr> <int>
## 1 female 9
## 2 male 1
```
The data show that females comprise 9 out of 10, or 90%, of the sample. Generally, we assume something close to a 50/50 split between male and female respondents in a population. The sizable female proportion could indicate either a unique sample or a potential error in the data. If we review the survey documentation and see this was a deliberate part of the design, we can continue our analysis using the appropriate methods. If this was not an intentional choice by the researchers, the results alert us that something may be incorrect in the data or our code, and we can verify if there’s an issue by comparing the results with the weighted means.
### 12\.3\.2 Graphical review
Tables provide a quick check of our assumptions, but there is no substitute for graphs and plots to visualize the distribution of data. We might miss outliers or nuances if we scan only summary statistics.
For example, Anscombe’s Quartet demonstrates the importance of visualization in analysis. Let’s say we have a dataset with x\- and y\-variables in an object called `anscombe_tidy`. Let’s take a look at how the dataset is structured:
```
head(anscombe_tidy)
```
```
## # A tibble: 6 × 4
## obs set x y
## <int> <chr> <dbl> <dbl>
## 1 1 I 10 8.04
## 2 1 II 10 9.14
## 3 1 III 10 7.46
## 4 1 IV 8 6.58
## 5 2 I 8 6.95
## 6 2 II 8 8.14
```
We can begin by checking one set of variables. For Set I, the x\-variables have an average of 9 with a standard deviation of 3\.3; for y, we have an average of 7\.5 with a standard deviation of 2\.03\. The two variables have a correlation of 0\.81\.
```
anscombe_tidy %>%
filter(set == "I") %>%
summarize(
x_mean = mean(x),
x_sd = sd(x),
y_mean = mean(y),
y_sd = sd(y),
correlation = cor(x, y)
)
```
```
## # A tibble: 1 × 5
## x_mean x_sd y_mean y_sd correlation
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 9 3.32 7.50 2.03 0.816
```
These are useful statistics. We can note that the data do not have high variability, and the two variables are strongly correlated. Now, let’s check all the sets (I\-IV) in the Anscombe data. Notice anything interesting?
```
anscombe_tidy %>%
group_by(set) %>%
summarize(
x_mean = mean(x),
x_sd = sd(x, na.rm = TRUE),
y_mean = mean(y),
y_sd = sd(y, na.rm = TRUE),
correlation = cor(x, y)
)
```
```
## # A tibble: 4 × 6
## set x_mean x_sd y_mean y_sd correlation
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 I 9 3.32 7.50 2.03 0.816
## 2 II 9 3.32 7.50 2.03 0.816
## 3 III 9 3.32 7.5 2.03 0.816
## 4 IV 9 3.32 7.50 2.03 0.817
```
The summary results for these four sets are nearly identical! Based on this, we might assume that each distribution is similar. Let’s look at a graphical visualization to see if our assumption is correct (see Figure [12\.1](c12-recommendations.html#fig:recommendations-anscombe-plot)).
```
ggplot(anscombe_tidy, aes(x, y)) +
geom_point() +
facet_wrap(~set) +
geom_smooth(method = "lm", se = FALSE, alpha = 0.5) +
theme_minimal()
```
FIGURE 12\.1: Plot of Anscombe’s Quartet data and the importance of reviewing data graphically
Although each of the four sets has the same summary statistics and regression line, when reviewing the plots (see Figure [12\.1](c12-recommendations.html#fig:recommendations-anscombe-plot)), it becomes apparent that the distributions of the data are not the same at all. Each set of points results in different shapes and distributions. Imagine sharing each set (I\-IV) and the corresponding plot with a different colleague. The interpretations and descriptions of the data would be very different even though the statistics are similar. Plotting the data also helps confirm that we are using an analysis method suited to the underlying distributions, which makes visual review an important first step.
12\.4 Check variable types
--------------------------
When we pull survey data into R, the variables may be stored as character, factor, numeric, or logical/Boolean types. The tidyverse functions that read in data (e.g., `read_csv()`, `read_excel()`) load all strings as character variables by default. This matters for survey data, as many strings are better suited to factors than character variables. For example, let’s revisit the `example_srvy` data. Taking a `glimpse()` of the data gives us insight into what it contains:
```
example_srvy %>%
glimpse()
```
```
## Rows: 10
## Columns: 6
## $ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
## $ region <int> 1, 1, 2, 2, 3, 4, 4, 3, 4, 2
## $ q_d1 <int> 1, 1, NA, 1, 1, NA, NA, 2, 2, 2
## $ q_d2_1 <chr> "Somewhat interested", "Not at all interested", "Somewh…
## $ gender <chr> "female", "female", "female", "female", "female", "fema…
## $ weight <dbl> 1740, 1428, 496, 550, 1762, 1004, 522, 1099, 1295, 983
```
The output shows that `q_d2_1` is a character variable, but the values of that variable show three options (Very interested / Somewhat interested / Not at all interested). In this case, we most likely want to change `q_d2_1` to be a factor variable and order the factor levels to indicate that this is an ordinal variable. Here is some code on how we might approach this task using the {forcats} package ([Wickham 2023](#ref-R-forcats)):
```
example_srvy_fct <- example_srvy %>%
mutate(q_d2_1_fct = factor(
q_d2_1,
levels = c(
"Very interested",
"Somewhat interested",
"Not at all interested"
)
))
example_srvy_fct %>%
glimpse()
```
```
## Rows: 10
## Columns: 7
## $ id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
## $ region <int> 1, 1, 2, 2, 3, 4, 4, 3, 4, 2
## $ q_d1 <int> 1, 1, NA, 1, 1, NA, NA, 2, 2, 2
## $ q_d2_1 <chr> "Somewhat interested", "Not at all interested", "So…
## $ gender <chr> "female", "female", "female", "female", "female", "…
## $ weight <dbl> 1740, 1428, 496, 550, 1762, 1004, 522, 1099, 1295, …
## $ q_d2_1_fct <fct> Somewhat interested, Not at all interested, Somewha…
```
```
example_srvy_fct %>%
count(q_d2_1_fct, q_d2_1)
```
```
## # A tibble: 3 × 3
## q_d2_1_fct q_d2_1 n
## <fct> <chr> <int>
## 1 Very interested Very interested 1
## 2 Somewhat interested Somewhat interested 6
## 3 Not at all interested Not at all interested 3
```
This example dataset also includes a column called `region`, which is imported as a number (`<int>`). This is a good reminder to use the questionnaire and codebook along with the data to find out whether the values actually represent numbers or are a coded categorical variable (see Chapter [3](c03-survey-data-documentation.html#c03-survey-data-documentation) for more details). R calculates a mean of the numeric codes even when it is not meaningful, which leads to the common mistake of averaging a coded categorical variable instead of estimating proportions for its categories. For example, for ease of coding, we may use the `across()` function to calculate the mean across all numeric variables:
```
example_des %>%
select(-weight) %>%
summarize(across(where(is.numeric), ~ survey_mean(.x, na.rm = TRUE)))
```
```
## # A tibble: 1 × 6
## id id_se region region_se q_d1 q_d1_se
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 5.24 1.12 2.49 0.428 1.38 0.196
```
In this example, if we do not adjust `region` to be a factor variable type, we might accidentally report an average region of 2\.49 in our findings, which is meaningless. Checking that our variables are appropriate avoids this pitfall and ensures the measures and models are suitable for the variable type.
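One way to guard against this pitfall, sketched below using the same `example_des` design object, is to convert the coded variable to a factor and estimate proportions rather than a mean. The factor levels are left unlabeled here because the meaningful labels would come from the codebook.

```
# Illustrative sketch; labels for the region codes would come from the codebook
example_des %>%
  mutate(region_fct = factor(region)) %>%
  group_by(region_fct) %>%
  summarize(prop = survey_prop())
```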
12\.5 Improve debugging skills
------------------------------
It is common for analysts working in R to come across warning or error messages, and learning how to debug these messages (i.e., find and fix issues) ensures we can proceed with our work and avoid potential mistakes.
We’ve discussed a few examples in this book. For example, if we calculate an average with `survey_mean()` and get `NA` instead of a number, it may be because our column has missing values.
```
example_des %>%
summarize(mean = survey_mean(q_d1))
```
```
## # A tibble: 1 × 2
## mean mean_se
## <dbl> <dbl>
## 1 NA NaN
```
Including the argument `na.rm = TRUE` resolves the issue:
```
example_des %>%
summarize(mean = survey_mean(q_d1, na.rm = TRUE))
```
```
## # A tibble: 1 × 2
## mean mean_se
## <dbl> <dbl>
## 1 1.38 0.196
```
Another common error message in survey analysis looks something like the following:
```
example_des %>%
svyttest(q_d1 ~ gender)
```
```
## Error in UseMethod("svymean", design): no applicable method for 'svymean' applied to an object of class "formula"
```
In this case, we need to remember that with functions from the {survey} package like `svyttest()`, the design object is not the first argument, so the pipe does not supply it automatically and we have to use the dot (`.`) notation (see Chapter [6](c06-statistical-testing.html#c06-statistical-testing)). Adding the named argument `design = .` fixes this error.
```
example_des %>%
svyttest(q_d1 ~ gender,
design = .
)
```
```
##
## Design-based t-test
##
## data: q_d1 ~ gender
## t = 3.5, df = 5, p-value = 0.02
## alternative hypothesis: true difference in mean is not equal to 0
## 95 percent confidence interval:
## 0.1878 1.2041
## sample estimates:
## difference in mean
## 0.696
```
Often, debugging involves interpreting the message from R. For example, if our code results in this error:
```
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more levels
```
We can see that the error has to do with a function requiring a factor with two or more levels and that it has been applied to something else. This ties back to our section on using appropriate variable types. We can check the variable of interest to examine whether it is the correct type.
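For instance, before refitting the model, we might quickly check the variable’s type and how many distinct levels it has. A minimal sketch, using `gender` from `example_srvy` as a stand\-in for whichever variable triggered the error:

```
# Quick type check; gender is a stand-in for the variable in the error
class(example_srvy$gender)
nlevels(factor(example_srvy$gender))
```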
The internet also offers many resources for debugging. Searching for a specific error message can often lead to a solution. In addition, we can post on community forums like [Posit Community](https://forum.posit.co/) for direct help from others.
12\.6 Think critically about conclusions
----------------------------------------
Once we have our findings, we need to learn to think critically about them. As mentioned in Chapter [2](c02-overview-surveys.html#c02-overview-surveys), many aspects of the study design can impact our interpretation of the results, for example, the number and types of response options provided to the respondent or who was asked the question (both thinking about the full sample and any skip patterns). Knowing the overall study design can help us accurately think through what the findings may mean and identify any issues with our analyses. Additionally, we should make sure that our survey design object is correctly defined (see Chapter [10](c10-sample-designs-replicate-weights.html#c10-sample-designs-replicate-weights)), carefully consider how we are managing missing data (see Chapter [11](c11-missing-data.html#c11-missing-data)), and follow statistical analysis procedures such as avoiding model overfitting by using too many variables in our formulas.
These considerations allow us to conduct our analyses and review findings for statistically significant results. It is important to note that even significant results do not mean that they are meaningful or important. A large enough sample can produce statistically significant results. Therefore, we want to look at our results in context, such as comparing them with results from other studies or analyzing them in conjunction with confidence intervals and other measures.
Communicating the results (see Chapter [8](c08-communicating-results.html#c08-communicating-results)) in an unbiased manner is also a critical step in any analysis project. If we present results without error measures or only present results that support our initial hypotheses, we are not thinking critically and may incorrectly represent the data. As survey data analysts, we often interpret the survey data for the public. We must ensure that we are the best stewards of the data and work to bring light to meaningful and interesting findings that the public wants and needs to know about.
Chapter 13 National Crime Victimization Survey vignette
=======================================================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
library(gt)
```
We use data from the United States National Crime Victimization Survey (NCVS). These data are available in the {srvyrexploR} package as `ncvs_2021_incident`, `ncvs_2021_household`, and `ncvs_2021_person`.
13\.1 Introduction
------------------
The National Crime Victimization Survey (NCVS) is a household survey sponsored by the Bureau of Justice Statistics (BJS), which collects data on criminal victimization, including characteristics of the crimes, offenders, and victims. Crime types include both household and personal crimes, as well as violent and non\-violent crimes. The population of interest of this survey is all people in the United States age 12 and older living in housing units and non\-institutional group quarters.
The NCVS has been ongoing since 1992\. An earlier survey, the National Crime Survey, was run from 1972 to 1991 ([U. S. Bureau of Justice Statistics 2017](#ref-ncvs_tech_2016)). The survey is administered using a rotating panel. When an address enters the sample, the residents of that address are interviewed every 6 months for a total of 7 interviews. If the initial residents move away from the address during the period and new residents move in, the new residents are included in the survey, as people are not followed when they move.
NCVS data are publicly available and distributed by Inter\-university Consortium for Political and Social Research (ICPSR), with data going back to 1992\. The vignette in this book includes data from 2021 ([U.S. Bureau of Justice Statistics 2022](#ref-ncvs_data_2021)). The NCVS data structure is complicated, and the User’s Guide contains examples for analysis in SAS, SUDAAN, SPSS, and Stata, but not R ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)). This vignette adapts those examples for R.
13\.2 Data structure
--------------------
The data from ICPSR are distributed as five files, each with its unique identifiers indicated below:
* Address Record \- `YEARQ`, `IDHH`
* Household Record \- `YEARQ`, `IDHH`
* Person Record \- `YEARQ`, `IDHH`, `IDPER`
* Incident Record \- `YEARQ`, `IDHH`, `IDPER`
* 2021 Collection Year Incident \- `YEARQ`, `IDHH`, `IDPER`
In this vignette, we focus on the household, person, and incident files and have selected a subset of columns for use in the examples. We have included data in the {srvyrexploR} package with this subset of columns, but the complete data files can be downloaded from [ICPSR](https://www.icpsr.umich.edu/web/NACJD/studies/38429).
13\.3 Survey notation
---------------------
The NCVS User Guide ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)) uses the following notation:
* \\(i\\) represents NCVS households, identified on the household\-level file with the household identification number `IDHH`.
* \\(j\\) represents NCVS individual respondents within household \\(i\\), identified on the person\-level file with the person identification number `IDPER`.
* \\(k\\) represents reporting periods (i.e., `YEARQ`) for household \\(i\\) and individual respondent \\(j\\).
* \\(l\\) represents victimization records for respondent \\(j\\) in household \\(i\\) and reporting period \\(k\\). Each record on the NCVS incident\-level file is associated with a victimization record \\(l\\).
* \\(D\\) represents one or more domain characteristics of interest in the calculation of NCVS estimates. For victimization totals and proportions, domains can be defined on the basis of crime types (e.g., violent crimes, property crimes), characteristics of victims (e.g., age, sex, household income), or characteristics of the victimizations (e.g., victimizations reported to police, victimizations committed with a weapon present). Domains could also be a combination of all of these types of characteristics. For example, in the calculation of victimization rates, domains are defined on the basis of the characteristics of the victims.
* \\(A\_a\\) represents the level \\(a\\) of covariate \\(A\\). Covariate \\(A\\) is defined in the calculation of victimization proportions and represents the characteristic we want to obtain the distribution of victimizations in domain \\(D\\).
* \\(C\\) represents the personal or property crime for which we want to obtain a victimization rate.
In this vignette, we discuss four estimates:
1. Victimization totals estimate the number of criminal victimizations with a given characteristic. As demonstrated below, these can be calculated from any of the data files. The estimated victimization total, \\(\\hat{t}\_D\\) for domain \\(D\\) is estimated as
\\\[ \\hat{t}\_D \= \\sum\_{ijkl \\in D} v\_{ijkl}\\]
where \\(v\_{ijkl}\\) is the series\-adjusted victimization weight for household \\(i\\), respondent \\(j\\), reporting period \\(k\\), and victimization \\(l\\), represented in the data as `WGTVICCY`.
2. Victimization proportions estimate characteristics among victimizations or victims. Victimization proportions are calculated using the incident data file. The estimated victimization proportion for domain \\(D\\) across level \\(a\\) of covariate \\(A\\), \\(\\hat{p}\_{A\_a,D}\\) is
\\\[ \\hat{p}\_{A\_a,D} \=\\frac{\\sum\_{ijkl \\in A\_a, D} v\_{ijkl}}{\\sum\_{ijkl \\in D} v\_{ijkl}}.\\]
The numerator is the number of incidents with a particular characteristic in a domain, and the denominator is the number of incidents in a domain.
3. Victimization rates are estimates of the number of victimizations per 1,000 persons or households in the population[29](#fn29). Victimization rates are calculated using the household or person\-level data files. The estimated victimization rate for crime \\(C\\) in domain \\(D\\) is
\\\[\\hat{VR}\_{C,D}\= \\frac{\\sum\_{ijkl \\in C,D} v\_{ijkl}}{\\sum\_{ijk \\in D} w\_{ijk}}\\times 1000\\]
where \\(w\_{ijk}\\) is the person weight (`WGTPERCY`) for personal crimes or household weight (`WGTHHCY`) for household crimes. The numerator is the number of incidents in a domain, and the denominator is the number of persons or households in a domain. Notice that the weights in the numerator and denominator are different; this is important, and in the syntax and examples below, we discuss how to make an estimate that involves two weights.
4. Prevalence rates are estimates of the percentage of the population (persons or households) who are victims of a crime. These are estimated using the household or person\-level data files. The estimated prevalence rate for crime \\(C\\) in domain \\(D\\) is
\\\[ \\hat{PR}\_{C, D}\= \\frac{\\sum\_{ijk \\in {C,D}} I\_{ij}w\_{ijk}}{\\sum\_{ijk \\in D} w\_{ijk}} \\times 100\\]
where \\(I\_{ij}\\) is an indicator that a person or household in domain \\(D\\) was a victim of crime \\(C\\) at any time in the year. The numerator is the number of victims in domain \\(D\\) for crime \\(C\\), and the denominator is the number of people or households in the population.
13\.4 Data file preparation
---------------------------
Some work is necessary to prepare the files before analysis. The design variables indicating pseudo\-stratum (`V2117`) and half\-sample code (`V2118`) are only included on the household file, so they must be added to the person and incident files for any analysis.
For victimization rates, we need to know the victimization status for both victims and non\-victims. Therefore, the incident file must be summarized and merged onto the household or person files for household\-level and person\-level crimes, respectively. We begin this vignette by discussing how to create these incident summary files, following Section 2\.2 of the NCVS User’s Guide ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)).
### 13\.4\.1 Preparing files for estimation of victimization rates
Each record on the incident file represents one victimization, which is not the same as one incident. Some victimizations consist of several similar instances that the victim cannot differentiate in detail; these are labeled “series crimes.” Appendix A of the User’s Guide indicates how to calculate the series weight in other statistical languages.
Here, we adapt that code for R. Essentially, if a victimization is a series crime, its series weight is top\-coded at 10 based on the number of actual victimizations, that is, even if the crime occurred more than 10 times, it is counted as 10 times to reduce the influence of extreme outliers. If an incident is a series crime, but the number of occurrences is unknown, the series weight is set to 6\. A description of the variables used to create indicators of series and the associated weights is included in Table [13\.1](c13-ncvs-vignette.html#tab:cb-incident).
TABLE 13\.1: Codebook for incident variables, related to series weight
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V4016 | How many times incident occur last 6 months | 1–996 | Number of times |
| | | 997 | Don’t know |
| V4017 | How many incidents | 1 | 1–5 incidents (not a “series”) |
| | | 2 | 6 or more incidents |
| | | 8 | Residue (invalid data) |
| V4018 | Incidents similar in detail | 1 | Similar |
| | | 2 | Different (not in a “series”) |
| | | 8 | Residue (invalid data) |
| V4019 | Enough detail to distinguish incidents | 1 | Yes (not a “series”) |
| | | 2 | No (is a “series”) |
| | | 8 | Residue (invalid data) |
| WGTVICCY | Adjusted victimization weight | | Numeric |
We want to create four variables to indicate if an incident is a series crime. First, we create a variable called `series` using `V4017`, `V4018`, and `V4019`, where an incident is considered a series crime if there are 6 or more incidents (`V4017`), the incidents are similar in detail (`V4018`), or there is not enough detail to distinguish the incidents (`V4019`). Second, we top\-code the number of incidents (`V4016`) by creating a variable `n10v4016`, which is set to 10 if `V4016 > 10`. Third, we create `serieswgt` using the two new variables `series` and `n10v4016`: series crimes receive a weight equal to the top\-coded number of incidents (or 6 if that number is unknown), and all other incidents receive a weight of 1\. Finally, we create the new weight (`NEWWGT`) by multiplying `serieswgt` by the existing weight (`WGTVICCY`).
```
inc_series <- ncvs_2021_incident %>%
mutate(
series = case_when(
V4017 %in% c(1, 8) ~ 1,
V4018 %in% c(2, 8) ~ 1,
V4019 %in% c(1, 8) ~ 1,
TRUE ~ 2
),
n10v4016 = case_when(
V4016 %in% c(997, 998) ~ NA_real_,
V4016 > 10 ~ 10,
TRUE ~ V4016
),
serieswgt = case_when(
series == 2 & is.na(n10v4016) ~ 6,
series == 2 ~ n10v4016,
TRUE ~ 1
),
NEWWGT = WGTVICCY * serieswgt
)
```
The next step in preparing the files for estimation is to create indicators on the victimization file for characteristics of interest. Almost all BJS publications limit the analysis to records where the victimization occurred in the United States (where `V4022` is not equal to 1\). We do this for all estimates as well. A brief codebook of variables for this task is located in Table [13\.2](c13-ncvs-vignette.html#tab:cb-crimetype).
TABLE 13\.2: Codebook for incident variables, crime type indicators and characteristics
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V4022 | In what city/town/village | 1 | Outside U.S. |
| | | 2 | Not inside a city/town/village |
| | | 3 | Same city/town/village as present residence |
| | | 4 | Different city/town/village as present residence |
| | | 5 | Don’t know |
| | | 6 | Don’t know if 2, 4, or 5 |
| V4049 | Did offender have a weapon | 1 | Yes |
| | | 2 | No |
| | | 3 | Don’t know |
| V4050 | What was the weapon that offender had | 1 | At least one good entry |
| | | 3 | Indicates “Yes\-Type Weapon\-NA” |
| | | 7 | Indicates “Gun Type Unknown” |
| | | 8 | No good entry |
| V4051 | Hand gun | 0 | No |
| | | 1 | Yes |
| V4052 | Other gun | 0 | No |
| | | 1 | Yes |
| V4053 | Knife | 0 | No |
| | | 1 | Yes |
| V4399 | Reported to police | 1 | Yes |
| | | 2 | No |
| | | 3 | Don’t know |
| V4529 | Type of crime code | 01 | Completed rape |
| | | 02 | Attempted rape |
| | | 03 | Sexual attack with serious assault |
| | | 04 | Sexual attack with minor assault |
| | | 05 | Completed robbery with injury from serious assault |
| | | 06 | Completed robbery with injury from minor assault |
| | | 07 | Completed robbery without injury from minor assault |
| | | 08 | Attempted robbery with injury from serious assault |
| | | 09 | Attempted robbery with injury from minor assault |
| | | 10 | Attempted robbery without injury |
| | | 11 | Completed aggravated assault with injury |
| | | 12 | Attempted aggravated assault with weapon |
| | | 13 | Threatened assault with weapon |
| | | 14 | Simple assault completed with injury |
| | | 15 | Sexual assault without injury |
| | | 16 | Unwanted sexual contact without force |
| | | 17 | Assault without weapon without injury |
| | | 18 | Verbal threat of rape |
| | | 19 | Verbal threat of sexual assault |
| | | 20 | Verbal threat of assault |
| | | 21 | Completed purse snatching |
| | | 22 | Attempted purse snatching |
| | | 23 | Pocket picking (completed only) |
| | | 31 | Completed burglary, forcible entry |
| | | 32 | Completed burglary, unlawful entry without force |
| | | 33 | Attempted forcible entry |
| | | 40 | Completed motor vehicle theft |
| | | 41 | Attempted motor vehicle theft |
| | | 54 | Completed theft less than $10 |
| | | 55 | Completed theft $10 to $49 |
| | | 56 | Completed theft $50 to $249 |
| | | 57 | Completed theft $250 or greater |
| | | 58 | Completed theft value NA |
| | | 59 | Attempted theft |
Using these variables, we create the following indicators:
1. Property crime
* `V4529` \\(\\ge\\) 31
* Variable: `Property`
2. Violent crime
* `V4529` \\(\\le\\) 20
* Variable: `Violent`
3. Property crime reported to the police
* `V4529` \\(\\ge\\) 31 and `V4399`\=1
* Variable: `Property_ReportPolice`
4. Violent crime reported to the police
* `V4529` \< 31 and `V4399`\=1
* Variable: `Violent_ReportPolice`
5. Aggravated assault without a weapon
* `V4529` in 11:12 and `V4049`\=2
* Variable: `AAST_NoWeap`
6. Aggravated assault with a firearm
* `V4529` in 11:12 and `V4049`\=1 and (`V4051`\=1 or `V4052`\=1 or `V4050`\=7\)
* Variable: `AAST_Firearm`
7. Aggravated assault with a knife or sharp object
* `V4529` in 11:12 and `V4049`\=1 and (`V4053`\=1 or `V4054`\=1\)
* Variable: `AAST_Knife`
8. Aggravated assault with another type of weapon
* `V4529` in 11:12 and `V4049`\=1 and `V4050`\=1 and not firearm or knife
* Variable: `AAST_Other`
```
inc_ind <- inc_series %>%
filter(V4022 != 1) %>%
mutate(
WeapCat = case_when(
is.na(V4049) ~ NA_character_,
V4049 == 2 ~ "NoWeap",
V4049 == 3 ~ "UnkWeapUse",
V4050 == 3 ~ "Other",
V4051 == 1 | V4052 == 1 | V4050 == 7 ~ "Firearm",
V4053 == 1 | V4054 == 1 ~ "Knife",
TRUE ~ "Other"
),
V4529_num = parse_number(as.character(V4529)),
ReportPolice = V4399 == 1,
Property = V4529_num >= 31,
Violent = V4529_num <= 20,
Property_ReportPolice = Property & ReportPolice,
Violent_ReportPolice = Violent & ReportPolice,
AAST = V4529_num %in% 11:13,
AAST_NoWeap = AAST & WeapCat == "NoWeap",
AAST_Firearm = AAST & WeapCat == "Firearm",
AAST_Knife = AAST & WeapCat == "Knife",
AAST_Other = AAST & WeapCat == "Other"
)
```
This is a good point to pause to look at the output of crosswalks between an original variable and a derived one to check that the logic was programmed correctly and that everything ends up in the expected category.
```
inc_series %>% count(V4022)
```
```
## # A tibble: 6 × 2
## V4022 n
## <fct> <int>
## 1 1 34
## 2 2 65
## 3 3 7697
## 4 4 1143
## 5 5 39
## 6 8 4
```
```
inc_ind %>% count(V4022)
```
```
## # A tibble: 5 × 2
## V4022 n
## <fct> <int>
## 1 2 65
## 2 3 7697
## 3 4 1143
## 4 5 39
## 5 8 4
```
```
inc_ind %>%
count(WeapCat, V4049, V4050, V4051, V4052, V4052, V4053, V4054)
```
```
## # A tibble: 13 × 8
## WeapCat V4049 V4050 V4051 V4052 V4053 V4054 n
## <chr> <fct> <fct> <fct> <fct> <fct> <fct> <int>
## 1 Firearm 1 1 0 1 0 0 15
## 2 Firearm 1 1 0 1 1 1 1
## 3 Firearm 1 1 1 0 0 0 125
## 4 Firearm 1 1 1 0 1 0 2
## 5 Firearm 1 1 1 1 0 0 3
## 6 Firearm 1 7 0 0 0 0 3
## 7 Knife 1 1 0 0 0 1 14
## 8 Knife 1 1 0 0 1 0 71
## 9 NoWeap 2 <NA> <NA> <NA> <NA> <NA> 1794
## 10 Other 1 1 0 0 0 0 147
## 11 Other 1 3 0 0 0 0 26
## 12 UnkWeapUse 3 <NA> <NA> <NA> <NA> <NA> 519
## 13 <NA> <NA> <NA> <NA> <NA> <NA> <NA> 6228
```
```
inc_ind %>%
count(V4529, Property, Violent, AAST) %>%
print(n = 40)
```
```
## # A tibble: 34 × 5
## V4529 Property Violent AAST n
## <fct> <lgl> <lgl> <lgl> <int>
## 1 1 FALSE TRUE FALSE 45
## 2 2 FALSE TRUE FALSE 20
## 3 3 FALSE TRUE FALSE 11
## 4 4 FALSE TRUE FALSE 3
## 5 5 FALSE TRUE FALSE 24
## 6 6 FALSE TRUE FALSE 26
## 7 7 FALSE TRUE FALSE 59
## 8 8 FALSE TRUE FALSE 5
## 9 9 FALSE TRUE FALSE 7
## 10 10 FALSE TRUE FALSE 57
## 11 11 FALSE TRUE TRUE 97
## 12 12 FALSE TRUE TRUE 91
## 13 13 FALSE TRUE TRUE 163
## 14 14 FALSE TRUE FALSE 165
## 15 15 FALSE TRUE FALSE 24
## 16 16 FALSE TRUE FALSE 12
## 17 17 FALSE TRUE FALSE 357
## 18 18 FALSE TRUE FALSE 14
## 19 19 FALSE TRUE FALSE 3
## 20 20 FALSE TRUE FALSE 607
## 21 21 FALSE FALSE FALSE 2
## 22 22 FALSE FALSE FALSE 2
## 23 23 FALSE FALSE FALSE 19
## 24 31 TRUE FALSE FALSE 248
## 25 32 TRUE FALSE FALSE 634
## 26 33 TRUE FALSE FALSE 188
## 27 40 TRUE FALSE FALSE 256
## 28 41 TRUE FALSE FALSE 97
## 29 54 TRUE FALSE FALSE 407
## 30 55 TRUE FALSE FALSE 1006
## 31 56 TRUE FALSE FALSE 1686
## 32 57 TRUE FALSE FALSE 1420
## 33 58 TRUE FALSE FALSE 798
## 34 59 TRUE FALSE FALSE 395
```
```
inc_ind %>% count(ReportPolice, V4399)
```
```
## # A tibble: 4 × 3
## ReportPolice V4399 n
## <lgl> <fct> <int>
## 1 FALSE 2 5670
## 2 FALSE 3 103
## 3 FALSE 8 12
## 4 TRUE 1 3163
```
```
inc_ind %>%
count(
AAST,
WeapCat,
AAST_NoWeap,
AAST_Firearm,
AAST_Knife,
AAST_Other
)
```
```
## # A tibble: 11 × 7
## AAST WeapCat AAST_NoWeap AAST_Firearm AAST_Knife AAST_Other n
## <lgl> <chr> <lgl> <lgl> <lgl> <lgl> <int>
## 1 FALSE Firearm FALSE FALSE FALSE FALSE 34
## 2 FALSE Knife FALSE FALSE FALSE FALSE 23
## 3 FALSE NoWeap FALSE FALSE FALSE FALSE 1769
## 4 FALSE Other FALSE FALSE FALSE FALSE 27
## 5 FALSE UnkWeapUse FALSE FALSE FALSE FALSE 516
## 6 FALSE <NA> FALSE FALSE FALSE FALSE 6228
## 7 TRUE Firearm FALSE TRUE FALSE FALSE 115
## 8 TRUE Knife FALSE FALSE TRUE FALSE 62
## 9 TRUE NoWeap TRUE FALSE FALSE FALSE 25
## 10 TRUE Other FALSE FALSE FALSE TRUE 146
## 11 TRUE UnkWeapUse FALSE FALSE FALSE FALSE 3
```
After creating indicators of victimization types and characteristics, the file is summarized, and crimes are summed across persons or households by `YEARQ`. Property crimes (i.e., crimes committed against households, such as household burglary or motor vehicle theft) are summed across households, and personal crimes (i.e., crimes committed against an individual, such as assault, robbery, and personal theft) are summed across persons. The indicators are summed using our created series weight variable (`serieswgt`). Additionally, the existing weight variable (`WGTVICCY`) needs to be retained for later analysis.
```
inc_hh_sums <-
inc_ind %>%
filter(V4529_num > 23) %>% # restrict to household crimes
group_by(YEARQ, IDHH) %>%
summarize(
WGTVICCY = WGTVICCY[1],
across(starts_with("Property"),
~ sum(. * serieswgt),
.names = "{.col}"
),
.groups = "drop"
)
inc_pers_sums <-
inc_ind %>%
filter(V4529_num <= 23) %>% # restrict to person crimes
group_by(YEARQ, IDHH, IDPER) %>%
summarize(
WGTVICCY = WGTVICCY[1],
across(c(starts_with("Violent"), starts_with("AAST")),
~ sum(. * serieswgt),
.names = "{.col}"
),
.groups = "drop"
)
```
Now, we merge the victimization summary files into the appropriate files. For any record on the household or person file that is not on the victimization file, the victimization counts are set to 0 after merging. In this step, we also create the victimization adjustment factor. See Section 2\.2\.4 in the User’s Guide for details of why this adjustment is created ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)). It is calculated as follows:
\\\[ A\_{ijk}\=\\frac{v\_{ijk}}{w\_{ijk}}\\]
where \\(w\_{ijk}\\) is the person weight (`WGTPERCY`) for personal crimes or the household weight (`WGTHHCY`) for household crimes, and \\(v\_{ijk}\\) is the victimization weight (`WGTVICCY`) for household \\(i\\), respondent \\(j\\), in reporting period \\(k\\). The adjustment factor is set to 0 if no incidents are reported.
```
hh_z_list <- rep(0, ncol(inc_hh_sums) - 3) %>%
as.list() %>%
setNames(names(inc_hh_sums)[-(1:3)])
pers_z_list <- rep(0, ncol(inc_pers_sums) - 4) %>%
as.list() %>%
setNames(names(inc_pers_sums)[-(1:4)])
hh_vsum <- ncvs_2021_household %>%
full_join(inc_hh_sums, by = c("YEARQ", "IDHH")) %>%
replace_na(hh_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTHHCY))
pers_vsum <- ncvs_2021_person %>%
full_join(inc_pers_sums, by = c("YEARQ", "IDHH", "IDPER")) %>%
replace_na(pers_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTPERCY))
```
### 13\.4\.2 Derived demographic variables
A final step in file preparation is creating any derived variables on the household and person files, such as income or age categories, for subgroup analysis. We can do this step before or after merging the victimization counts.
#### 13\.4\.2\.1 Household variables
For the household file, we create categories for tenure (rental status), urbanicity, income, place size, and region. A codebook of the household variables is listed in Table [13\.3](c13-ncvs-vignette.html#tab:cb-hh).
TABLE 13\.3: Codebook for household variables
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V2015 | Tenure | 1 | Owned or being bought |
| | | 2 | Rented for cash |
| | | 3 | No cash rent |
| SC214A | Household Income | 01 | Less than $5,000 |
| | | 02 | $5,000–7,499 |
| | | 03 | $7,500–9,999 |
| | | 04 | $10,000–12,499 |
| | | 05 | $12,500–14,999 |
| | | 06 | $15,000–17,499 |
| | | 07 | $17,500–19,999 |
| | | 08 | $20,000–24,999 |
| | | 09 | $25,000–29,999 |
| | | 10 | $30,000–34,999 |
| | | 11 | $35,000–39,999 |
| | | 12 | $40,000–49,999 |
| | | 13 | $50,000–74,999 |
| | | 15 | $75,000–99,999 |
| | | 16 | $100,000–149,999 |
| | | 17 | $150,000–199,999 |
| | | 18 | $200,000 or more |
| V2126B | Place Size (Population) Code | 00 | Not in a place |
| | | 13 | Population under 10,000 |
| | | 16 | 10,000–49,999 |
| | | 17 | 50,000–99,999 |
| | | 18 | 100,000–249,999 |
| | | 19 | 250,000–499,999 |
| | | 20 | 500,000–999,999 |
| | | 21 | 1,000,000–2,499,999 |
| | | 22 | 2,500,000–4,999,999 |
| | | 23 | 5,000,000 or more |
| V2127B | Region | 1 | Northeast |
| | | 2 | Midwest |
| | | 3 | South |
| | | 4 | West |
| V2143 | Urbanicity | 1 | Urban |
| | | 2 | Suburban |
| | | 3 | Rural |
```
hh_vsum_der <- hh_vsum %>%
mutate(
Tenure = factor(
case_when(
V2015 == 1 ~ "Owned",
!is.na(V2015) ~ "Rented"
),
levels = c("Owned", "Rented")
),
Urbanicity = factor(
case_when(
V2143 == 1 ~ "Urban",
V2143 == 2 ~ "Suburban",
V2143 == 3 ~ "Rural"
),
levels = c("Urban", "Suburban", "Rural")
),
SC214A_num = as.numeric(as.character(SC214A)),
Income = case_when(
SC214A_num <= 8 ~ "Less than $25,000",
SC214A_num <= 12 ~ "$25,000--49,999",
SC214A_num <= 15 ~ "$50,000--99,999",
SC214A_num <= 17 ~ "$100,000--199,999",
SC214A_num <= 18 ~ "$200,000 or more"
),
Income = fct_reorder(Income, SC214A_num, .na_rm = FALSE),
PlaceSize = case_match(
as.numeric(as.character(V2126B)),
0 ~ "Not in a place",
13 ~ "Population under 10,000",
16 ~ "10,000--49,999",
17 ~ "50,000--99,999",
18 ~ "100,000--249,999",
19 ~ "250,000--499,999",
20 ~ "500,000--999,999",
c(21, 22, 23) ~ "1,000,000 or more"
),
PlaceSize = fct_reorder(PlaceSize, as.numeric(V2126B)),
Region = case_match(
as.numeric(V2127B),
1 ~ "Northeast",
2 ~ "Midwest",
3 ~ "South",
4 ~ "West"
),
Region = fct_reorder(Region, as.numeric(V2127B))
)
```
As before, we want to check to make sure the recoded variables we create match the existing data as expected.
```
hh_vsum_der %>% count(Tenure, V2015)
```
```
## # A tibble: 4 × 3
## Tenure V2015 n
## <fct> <fct> <int>
## 1 Owned 1 101944
## 2 Rented 2 46269
## 3 Rented 3 1925
## 4 <NA> <NA> 106322
```
```
hh_vsum_der %>% count(Urbanicity, V2143)
```
```
## # A tibble: 3 × 3
## Urbanicity V2143 n
## <fct> <fct> <int>
## 1 Urban 1 26878
## 2 Suburban 2 173491
## 3 Rural 3 56091
```
```
hh_vsum_der %>% count(Income, SC214A)
```
```
## # A tibble: 18 × 3
## Income SC214A n
## <fct> <fct> <int>
## 1 Less than $25,000 1 7841
## 2 Less than $25,000 2 2626
## 3 Less than $25,000 3 3949
## 4 Less than $25,000 4 5546
## 5 Less than $25,000 5 5445
## 6 Less than $25,000 6 4821
## 7 Less than $25,000 7 5038
## 8 Less than $25,000 8 11887
## 9 $25,000--49,999 9 11550
## 10 $25,000--49,999 10 13689
## 11 $25,000--49,999 11 13655
## 12 $25,000--49,999 12 23282
## 13 $50,000--99,999 13 44601
## 14 $50,000--99,999 15 33353
## 15 $100,000--199,999 16 34287
## 16 $100,000--199,999 17 15317
## 17 $200,000 or more 18 16892
## 18 <NA> <NA> 2681
```
```
hh_vsum_der %>% count(PlaceSize, V2126B)
```
```
## # A tibble: 10 × 3
## PlaceSize V2126B n
## <fct> <fct> <int>
## 1 Not in a place 0 69484
## 2 Population under 10,000 13 39873
## 3 10,000--49,999 16 53002
## 4 50,000--99,999 17 27205
## 5 100,000--249,999 18 24461
## 6 250,000--499,999 19 13111
## 7 500,000--999,999 20 15194
## 8 1,000,000 or more 21 6167
## 9 1,000,000 or more 22 3857
## 10 1,000,000 or more 23 4106
```
```
hh_vsum_der %>% count(Region, V2127B)
```
```
## # A tibble: 4 × 3
## Region V2127B n
## <fct> <fct> <int>
## 1 Northeast 1 41585
## 2 Midwest 2 74666
## 3 South 3 87783
## 4 West 4 52426
```
#### 13\.4\.2\.2 Person variables
For the person file, we create categories for sex, race/Hispanic origin, age group, and marital status. A codebook of the person variables is located in Table [13\.4](c13-ncvs-vignette.html#tab:cb-pers). We also merge the household demographics and the design variables (`V2117` and `V2118`) onto the person file.
TABLE 13\.4: Codebook for person variables
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V3014 | Age | | 12–90 |
| V3015 | Current Marital Status | 1 | Married |
| | | 2 | Widowed |
| | | 3 | Divorced |
| | | 4 | Separated |
| | | 5 | Never married |
| V3018 | Sex | 1 | Male |
| | | 2 | Female |
| V3023A | Race | 01 | White only |
| | | 02 | Black only |
| | | 03 | American Indian, Alaska native only |
| | | 04 | Asian only |
| | | 05 | Hawaiian/Pacific Islander only |
| | | 06 | White\-Black |
| | | 07 | White\-American Indian |
| | | 08 | White\-Asian |
| | | 09 | White\-Hawaiian |
| | | 10 | Black\-American Indian |
| | | 11 | Black\-Asian |
| | | 12 | Black\-Hawaiian/Pacific Islander |
| | | 13 | American Indian\-Asian |
| | | 14 | Asian\-Hawaiian/Pacific Islander |
| | | 15 | White\-Black\-American Indian |
| | | 16 | White\-Black\-Asian |
| | | 17 | White\-American Indian\-Asian |
| | | 18 | White\-Asian\-Hawaiian |
| | | 19 | 2 or 3 races |
| | | 20 | 4 or 5 races |
| V3024 | Hispanic Origin | 1 | Yes |
| | | 2 | No |
```
NHOPI <- "Native Hawaiian or Other Pacific Islander"
pers_vsum_der <- pers_vsum %>%
mutate(
Sex = factor(case_when(
V3018 == 1 ~ "Male",
V3018 == 2 ~ "Female"
)),
RaceHispOrigin = factor(
case_when(
V3024 == 1 ~ "Hispanic",
V3023A == 1 ~ "White",
V3023A == 2 ~ "Black",
V3023A == 4 ~ "Asian",
V3023A == 5 ~ NHOPI,
TRUE ~ "Other"
),
levels = c(
"White", "Black", "Hispanic",
"Asian", NHOPI, "Other"
)
),
V3014_num = as.numeric(as.character(V3014)),
AgeGroup = case_when(
V3014_num <= 17 ~ "12--17",
V3014_num <= 24 ~ "18--24",
V3014_num <= 34 ~ "25--34",
V3014_num <= 49 ~ "35--49",
V3014_num <= 64 ~ "50--64",
V3014_num <= 90 ~ "65 or older"
),
AgeGroup = fct_reorder(AgeGroup, V3014_num),
MaritalStatus = factor(
case_when(
V3015 == 1 ~ "Married",
V3015 == 2 ~ "Widowed",
V3015 == 3 ~ "Divorced",
V3015 == 4 ~ "Separated",
V3015 == 5 ~ "Never married"
),
levels = c(
"Never married", "Married",
"Widowed", "Divorced",
"Separated"
)
)
) %>%
left_join(
hh_vsum_der %>% select(
YEARQ, IDHH,
V2117, V2118, Tenure:Region
),
by = c("YEARQ", "IDHH")
)
```
As before, we want to check to make sure the recoded variables we create match the existing data as expected.
```
pers_vsum_der %>% count(Sex, V3018)
```
```
## # A tibble: 2 × 3
## Sex V3018 n
## <fct> <fct> <int>
## 1 Female 2 150956
## 2 Male 1 140922
```
```
pers_vsum_der %>% count(RaceHispOrigin, V3024)
```
```
## # A tibble: 11 × 3
## RaceHispOrigin V3024 n
## <fct> <fct> <int>
## 1 White 2 197292
## 2 White 8 883
## 3 Black 2 29947
## 4 Black 8 120
## 5 Hispanic 1 41450
## 6 Asian 2 16015
## 7 Asian 8 61
## 8 Native Hawaiian or Other Pacific Islander 2 891
## 9 Native Hawaiian or Other Pacific Islander 8 9
## 10 Other 2 5161
## 11 Other 8 49
```
```
pers_vsum_der %>%
filter(RaceHispOrigin != "Hispanic" |
is.na(RaceHispOrigin)) %>%
count(RaceHispOrigin, V3023A)
```
```
## # A tibble: 20 × 3
## RaceHispOrigin V3023A n
## <fct> <fct> <int>
## 1 White 1 198175
## 2 Black 2 30067
## 3 Asian 4 16076
## 4 Native Hawaiian or Other Pacific Islander 5 900
## 5 Other 3 1319
## 6 Other 6 1217
## 7 Other 7 1025
## 8 Other 8 837
## 9 Other 9 184
## 10 Other 10 178
## 11 Other 11 87
## 12 Other 12 27
## 13 Other 13 13
## 14 Other 14 53
## 15 Other 15 136
## 16 Other 16 45
## 17 Other 17 11
## 18 Other 18 33
## 19 Other 19 22
## 20 Other 20 23
```
```
pers_vsum_der %>%
group_by(AgeGroup) %>%
summarize(
minAge = min(V3014),
maxAge = max(V3014),
.groups = "drop"
)
```
```
## # A tibble: 6 × 3
## AgeGroup minAge maxAge
## <fct> <dbl> <dbl>
## 1 12--17 12 17
## 2 18--24 18 24
## 3 25--34 25 34
## 4 35--49 35 49
## 5 50--64 50 64
## 6 65 or older 65 90
```
```
pers_vsum_der %>% count(MaritalStatus, V3015)
```
```
## # A tibble: 6 × 3
## MaritalStatus V3015 n
## <fct> <fct> <int>
## 1 Never married 5 90425
## 2 Married 1 148131
## 3 Widowed 2 17668
## 4 Divorced 3 28596
## 5 Separated 4 4524
## 6 <NA> 8 2534
```
We then create tibbles that contain only the variables we need, which makes it easier to use them for analyses.
```
hh_vsum_slim <- hh_vsum_der %>%
select(
YEARQ:V2118,
WGTVICCY:ADJINC_WT,
Tenure,
Urbanicity,
Income,
PlaceSize,
Region
)
pers_vsum_slim <- pers_vsum_der %>%
select(YEARQ:WGTPERCY, WGTVICCY:ADJINC_WT, Sex:Region)
```
To calculate estimates about types of crime, such as what percentage of violent crimes are reported to the police, we must use the incident file. The incident file is not guaranteed to contain every pseudo\-stratum and half\-sample code, so dummy records are created and appended before estimation to ensure the full design structure is represented. Finally, we merge demographic variables onto the incident tibble.
```
dummy_records <- hh_vsum_slim %>%
distinct(V2117, V2118) %>%
mutate(
Dummy = 1,
WGTVICCY = 1,
NEWWGT = 1
)
inc_analysis <- inc_ind %>%
mutate(Dummy = 0) %>%
left_join(select(pers_vsum_slim, YEARQ, IDHH, IDPER, Sex:Region),
by = c("YEARQ", "IDHH", "IDPER")
) %>%
bind_rows(dummy_records) %>%
select(
YEARQ:IDPER,
WGTVICCY,
NEWWGT,
V4529,
WeapCat,
ReportPolice,
Property:Region
)
```
The tibbles `hh_vsum_slim`, `pers_vsum_slim`, and `inc_analysis` can now be used to create design objects and calculate crime rate estimates.
13\.5 Survey design objects
---------------------------
All the data preparation above is necessary to create the design objects and finally begin analysis. We create three design objects for different types of analysis, depending on the estimate we are creating. For the incident data, the analysis weight is `NEWWGT`, which we constructed previously. The household and person\-level data use `WGTHHCY` and `WGTPERCY`, respectively. For all analyses, `V2117` is the strata variable, and `V2118` is the cluster/PSU variable. This information can be found in the User’s Guide ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)).
```
inc_des <- inc_analysis %>%
as_survey_design(
weight = NEWWGT,
strata = V2117,
ids = V2118,
nest = TRUE
)
hh_des <- hh_vsum_slim %>%
as_survey_design(
weight = WGTHHCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
pers_des <- pers_vsum_slim %>%
as_survey_design(
weight = WGTPERCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
```
13\.6 Calculating estimates
---------------------------
Now that we have prepared our data and created the design objects, we can calculate our estimates. As a reminder, those are:
1. Victimization totals estimate the number of criminal victimizations with a given characteristic.
2. Victimization proportions estimate characteristics among victimizations or victims.
3. Victimization rates are estimates of the number of victimizations per 1,000 persons or households in the population.
4. Prevalence rates are estimates of the percentage of the population (persons or households) who are victims of a crime.
### 13\.6\.1 Estimation 1: Victimization totals
There are two ways to calculate victimization totals. Using the incident design object (`inc_des`) is the most straightforward method, but the person (`pers_des`) and household (`hh_des`) design objects can be used as well if the adjustment factor (`ADJINC_WT`) is incorporated. In the example below, the total number of property and violent victimizations is first calculated using the incident file and then using the household and person design objects. The incident file is smaller, and thus, estimation is faster using that file, but the estimates are the same as illustrated in Table [13\.5](c13-ncvs-vignette.html#tab:ncvs-vign-vt1), Table [13\.6](c13-ncvs-vignette.html#tab:ncvs-vign-vt2a), and Table [13\.7](c13-ncvs-vignette.html#tab:ncvs-vign-vt2b).
```
vt1 <-
inc_des %>%
summarize(
Property_Vzn = survey_total(Property, na.rm = TRUE),
Violent_Vzn = survey_total(Violent, na.rm = TRUE)
) %>%
gt() %>%
tab_spanner(
label = "Property Crime",
columns = starts_with("Property")
) %>%
tab_spanner(
label = "Violent Crime",
columns = starts_with("Violent")
) %>%
cols_label(
ends_with("Vzn") ~ "Total",
ends_with("se") ~ "S.E."
) %>%
fmt_number(decimals = 0)
vt2a <- hh_des %>%
summarize(Property_Vzn = survey_total(Property * ADJINC_WT,
na.rm = TRUE
)) %>%
gt() %>%
tab_spanner(
label = "Property Crime",
columns = starts_with("Property")
) %>%
cols_label(
ends_with("Vzn") ~ "Total",
ends_with("se") ~ "S.E."
) %>%
fmt_number(decimals = 0)
vt2b <- pers_des %>%
summarize(Violent_Vzn = survey_total(Violent * ADJINC_WT,
na.rm = TRUE
)) %>%
gt() %>%
tab_spanner(
label = "Violent Crime",
columns = starts_with("Violent")
) %>%
cols_label(
ends_with("Vzn") ~ "Total",
ends_with("se") ~ "S.E."
) %>%
fmt_number(decimals = 0)
```
TABLE 13\.5: Estimates of total property and violent victimizations with standard errors calculated using the incident design object, 2021 (vt1\)
| Property Crime Total | Property Crime S.E. | Violent Crime Total | Violent Crime S.E. |
| --- | --- | --- | --- |
| 11,682,056 | 263,844 | 4,598,306 | 198,115 |
TABLE 13\.6: Estimates of total property victimizations with standard errors calculated using the household design object, 2021 (vt2a)
| Property Crime Total | Property Crime S.E. |
| --- | --- |
| 11,682,056 | 263,844 |
TABLE 13\.7: Estimates of total violent victimizations with standard errors calculated using the person design object, 2021 (vt2b)
| Violent Crime Total | Violent Crime S.E. |
| --- | --- |
| 4,598,306 | 198,115 |
The number of victimizations estimated using the incident file is equivalent to the person and household file method. There were an estimated 11,682,056 property victimizations and 4,598,306 violent victimizations in 2021\.
### 13\.6\.2 Estimation 2: Victimization proportions
Victimization proportions are proportions describing features of a victimization. The key here is that these are estimates among victimizations, not among the population. These types of estimates can only be calculated using the incident design object (`inc_des`).
For example, we could be interested in the percentage of property victimizations reported to the police as shown in the following code with an estimate, the standard error, and 95% confidence interval:
```
prop1 <- inc_des %>%
filter(Property) %>%
summarize(Pct = survey_mean(ReportPolice,
na.rm = TRUE,
proportion = TRUE,
vartype = c("se", "ci")
) * 100)
prop1
```
```
## # A tibble: 1 × 4
## Pct Pct_se Pct_low Pct_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 30.8 0.798 29.2 32.4
```
Or, the percentage of violent victimizations that are in urban areas:
```
prop2 <- inc_des %>%
filter(Violent) %>%
summarize(Pct = survey_mean(Urbanicity == "Urban",
na.rm = TRUE
) * 100)
prop2
```
```
## # A tibble: 1 × 2
## Pct Pct_se
## <dbl> <dbl>
## 1 18.1 1.49
```
In 2021, we estimate that 30\.8% of property crimes were reported to the police, and 18\.1% of violent crimes occurred in urban areas.
### 13\.6\.3 Estimation 3: Victimization rates
Victimization rates measure the number of victimizations per population. They are not an estimate of the proportion of households or persons who are victimized, which is the prevalence rate described in Section [13\.6\.4](c13-ncvs-vignette.html#prev-rate). Victimization rates are estimated using the household (`hh_des`) or person (`pers_des`) design objects depending on the type of crime, and the adjustment factor (`ADJINC_WT`) must be incorporated. We return to the example of property and violent victimizations used in the example for victimization totals (Section [13\.6\.1](c13-ncvs-vignette.html#vic-tot)). In the following example, the property victimization totals are calculated as above, as well as the property victimization rate (using `survey_mean()`) and the population size using `survey_total()`.
Victimization rates use the incident weight in the numerator and the person or household weight in the denominator. This is accomplished by calculating the rates with the weight adjustment (`ADJINC_WT`) multiplied by the estimate of interest. Let’s look at an example of property victimization.
```
vr_prop <- hh_des %>%
summarize(
Property_Vzn = survey_total(Property * ADJINC_WT,
na.rm = TRUE
),
Property_Rate = survey_mean(Property * ADJINC_WT * 1000,
na.rm = TRUE
),
PopSize = survey_total(1, vartype = NULL)
)
vr_prop
```
```
## # A tibble: 1 × 5
## Property_Vzn Property_Vzn_se Property_Rate Property_Rate_se PopSize
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 11682056. 263844. 90.3 1.95 129319232.
```
In the output above, we see the estimate for property victimization rate in 2021 was 90\.3 per 1,000 households. This is consistent with calculating the number of victimizations per 1,000 population, as demonstrated in the following code output.
```
vr_prop %>%
select(-ends_with("se")) %>%
mutate(Property_Rate_manual = Property_Vzn / PopSize * 1000)
```
```
## # A tibble: 1 × 4
## Property_Vzn Property_Rate PopSize Property_Rate_manual
## <dbl> <dbl> <dbl> <dbl>
## 1 11682056. 90.3 129319232. 90.3
```
Victimization rates can also be calculated based on particular characteristics of the victimization. In the following example, we calculate the rate of aggravated assault with no weapon, firearm, knife, and another weapon.
```
pers_des %>%
summarize(across(
starts_with("AAST_"),
~ survey_mean(. * ADJINC_WT * 1000, na.rm = TRUE)
))
```
```
## # A tibble: 1 × 8
## AAST_NoWeap AAST_NoWeap_se AAST_Firearm AAST_Firearm_se AAST_Knife
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.249 0.0595 0.860 0.101 0.455
## # ℹ 3 more variables: AAST_Knife_se <dbl>, AAST_Other <dbl>,
## # AAST_Other_se <dbl>
```
A common task is to calculate victimization rates by several characteristics. For example, we may want to calculate the violent victimization rate and aggravated assault rate by sex, race/Hispanic origin, age group, marital status, and household income. This requires a separate `group_by()` statement for each categorization, so we write a function and then use functions from the {purrr} package to loop through the variables ([Wickham and Henry 2023](#ref-R-purrr)). This function takes a demographic variable as its input (`byvar`) and calculates the violent and aggravated assault victimization rates for each of its levels. It then adds columns with the variable name, the level of the variable, and a numeric version of the variable (`LevelNum`) for sorting later. The function is run across the demographic variables using `map()`, and the results are stacked into a single output using `bind_rows()`.
```
pers_est_by <- function(byvar) {
pers_des %>%
rename(Level := {{ byvar }}) %>%
filter(!is.na(Level)) %>%
group_by(Level) %>%
summarize(
Violent = survey_mean(Violent * ADJINC_WT * 1000, na.rm = TRUE),
AAST = survey_mean(AAST * ADJINC_WT * 1000, na.rm = TRUE)
) %>%
mutate(
Variable = byvar,
LevelNum = as.numeric(Level),
Level = as.character(Level)
) %>%
select(Variable, Level, LevelNum, everything())
}
pers_est_df <-
c("Sex", "RaceHispOrigin", "AgeGroup", "MaritalStatus", "Income") %>%
map(pers_est_by) %>%
bind_rows()
```
The output from all the estimates is cleaned to create better labels, such as going from “RaceHispOrigin” to “Race/Hispanic Origin.” Finally, the {gt} package is used to make a publishable table (Table [13\.8](c13-ncvs-vignette.html#tab:ncvs-vign-rates-demo-tab)). Using the functions from the {gt} package, we add column labels and footnotes and present estimates rounded to the first decimal place ([Iannone et al. 2024](#ref-R-gt)).
```
vr_gt <- pers_est_df %>%
mutate(
Variable = case_when(
Variable == "RaceHispOrigin" ~ "Race/Hispanic Origin",
Variable == "MaritalStatus" ~ "Marital Status",
Variable == "AgeGroup" ~ "Age",
TRUE ~ Variable
)
) %>%
select(-LevelNum) %>%
group_by(Variable) %>%
gt(rowname_col = "Level") %>%
tab_spanner(
label = "Violent Crime",
id = "viol_span",
columns = c("Violent", "Violent_se")
) %>%
tab_spanner(
label = "Aggravated Assault",
columns = c("AAST", "AAST_se")
) %>%
cols_label(
Violent = "Rate",
Violent_se = "S.E.",
AAST = "Rate",
AAST_se = "S.E.",
) %>%
fmt_number(
columns = c("Violent", "Violent_se", "AAST", "AAST_se"),
decimals = 1
) %>%
tab_footnote(
footnote = "Includes rape or sexual assault, robbery,
aggravated assault, and simple assault.",
locations = cells_column_spanners(spanners = "viol_span")
) %>%
tab_footnote(
footnote = "Excludes persons of Hispanic origin.",
locations =
cells_stub(rows = Level %in%
c("White", "Black", "Asian", NHOPI, "Other"))
) %>%
tab_footnote(
footnote = "Includes persons who identified as
Native Hawaiian or Other Pacific Islander only.",
locations = cells_stub(rows = Level == NHOPI)
) %>%
tab_footnote(
footnote = "Includes persons who identified as American Indian or
Alaska Native only or as two or more races.",
locations = cells_stub(rows = Level == "Other")
) %>%
tab_source_note(
source_note = md("*Note*: Rates per 1,000 persons age 12 or older.")
) %>%
tab_source_note(
source_note = md("*Source*: Bureau of Justice Statistics,
National Crime Victimization Survey, 2021.")
) %>%
tab_stubhead(label = "Victim Demographic") %>%
tab_caption("Rate and standard error of violent victimization,
by type of crime and demographic characteristics, 2021")
```
```
vr_gt
```
TABLE 13\.8: Rate and standard error of violent victimization, by type of crime and demographic characteristics, 2021
| Victim Demographic | Violent Crime¹ | | Aggravated Assault | |
| --- | --- | --- | --- | --- |
| | Rate | S.E. | Rate | S.E. |
| Sex | | | | |
| Female | 15.5 | 0.9 | 2.3 | 0.2 |
| Male | 17.5 | 1.1 | 3.2 | 0.3 |
| Race/Hispanic Origin | | | | |
| White² | 16.1 | 0.9 | 2.7 | 0.3 |
| Black² | 18.5 | 2.2 | 3.7 | 0.7 |
| Hispanic | 15.9 | 1.7 | 2.3 | 0.4 |
| Asian² | 8.6 | 1.3 | 1.9 | 0.6 |
| Native Hawaiian or Other Pacific Islander²,³ | 36.1 | 34.4 | 0.0 | 0.0 |
| Other²,⁴ | 45.4 | 13.0 | 6.2 | 2.0 |
| Age | | | | |
| 12–17 | 13.2 | 2.2 | 2.5 | 0.8 |
| 18–24 | 23.1 | 2.1 | 3.9 | 0.9 |
| 25–34 | 22.0 | 2.1 | 4.0 | 0.6 |
| 35–49 | 19.4 | 1.6 | 3.6 | 0.5 |
| 50–64 | 16.9 | 1.9 | 2.0 | 0.3 |
| 65 or older | 6.4 | 1.1 | 1.1 | 0.3 |
| Marital Status | | | | |
| Never married | 22.2 | 1.4 | 4.0 | 0.4 |
| Married | 9.5 | 0.9 | 1.5 | 0.2 |
| Widowed | 10.7 | 3.5 | 0.9 | 0.2 |
| Divorced | 27.4 | 2.9 | 4.0 | 0.7 |
| Separated | 36.8 | 6.7 | 8.8 | 3.1 |
| Income | | | | |
| Less than $25,000 | 29.6 | 2.5 | 5.1 | 0.7 |
| $25,000–49,999 | 16.9 | 1.5 | 3.0 | 0.4 |
| $50,000–99,999 | 14.6 | 1.1 | 1.9 | 0.3 |
| $100,000–199,999 | 12.2 | 1.3 | 2.5 | 0.4 |
| $200,000 or more | 9.7 | 1.4 | 1.7 | 0.6 |
*Note*: Rates per 1,000 persons age 12 or older.
*Source*: Bureau of Justice Statistics, National Crime Victimization Survey, 2021.
¹ Includes rape or sexual assault, robbery, aggravated assault, and simple assault.
² Excludes persons of Hispanic origin.
³ Includes persons who identified as Native Hawaiian or Other Pacific Islander only.
⁴ Includes persons who identified as American Indian or Alaska Native only or as two or more races.
### 13\.6\.4 Estimation 4: Prevalence rates
Prevalence rates differ from victimization rates, as the numerator is the number of people or households victimized rather than the number of victimizations. To calculate the prevalence rates, we must run another summary of the data by calculating an indicator for whether a person or household is a victim of a particular crime at any point in the year. Below is an example of calculating the indicator and then the prevalence rate of violent crime and aggravated assault.
```
pers_prev_des <-
pers_vsum_slim %>%
mutate(Year = floor(YEARQ)) %>%
mutate(
Violent_Ind = sum(Violent) > 0,
AAST_Ind = sum(AAST) > 0,
.by = c("Year", "IDHH", "IDPER")
) %>%
as_survey(
weight = WGTPERCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
pers_prev_ests <- pers_prev_des %>%
summarize(
Violent_Prev = survey_mean(Violent_Ind * 100),
AAST_Prev = survey_mean(AAST_Ind * 100)
)
pers_prev_ests
```
```
## # A tibble: 1 × 4
## Violent_Prev Violent_Prev_se AAST_Prev AAST_Prev_se
## <dbl> <dbl> <dbl> <dbl>
## 1 0.980 0.0349 0.215 0.0143
```
In the example above, the indicator is multiplied by 100 to return a percentage rather than a proportion. In 2021, we estimate that 0\.98% of people aged 12 and older were victims of violent crime in the United States, and 0\.22% were victims of aggravated assault.
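If confidence intervals are preferred over standard errors when reporting prevalence, the same call can request them with `vartype = "ci"`, as we do for rates in the next section. A minimal sketch reusing the design object above:
```
pers_prev_des %>%
  summarize(
    Violent_Prev = survey_mean(Violent_Ind * 100, vartype = "ci"),
    AAST_Prev = survey_mean(AAST_Ind * 100, vartype = "ci")
  )
```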
13\.7 Statistical testing
-------------------------
For any of the types of estimates discussed, we can also perform statistical testing. For example, we could test whether property victimization rates are different between properties that are owned versus rented. First, we calculate the point estimates.
```
prop_tenure <- hh_des %>%
group_by(Tenure) %>%
summarize(
Property_Rate = survey_mean(Property * ADJINC_WT * 1000,
na.rm = TRUE, vartype = "ci"
),
)
prop_tenure
```
```
## # A tibble: 3 × 4
## Tenure Property_Rate Property_Rate_low Property_Rate_upp
## <fct> <dbl> <dbl> <dbl>
## 1 Owned 68.2 64.3 72.1
## 2 Rented 130. 123. 137.
## 3 <NA> NaN NaN NaN
```
The property victimization rate for rented households is 129.8 per 1,000 households, while the rate for owned households is 68.2. These rates seem very different, especially given the non-overlapping confidence intervals. However, survey data are inherently non-independent, so statistical testing cannot be done by simply comparing confidence intervals. To conduct the statistical test, we first create a variable that incorporates the adjusted incident weight (`ADJINC_WT`), and then run the test on this adjusted variable as discussed in Chapter [6](c06-statistical-testing.html#c06-statistical-testing).
```
prop_tenure_test <- hh_des %>%
mutate(
Prop_Adj = Property * ADJINC_WT * 1000
) %>%
svyttest(
formula = Prop_Adj ~ Tenure,
design = .,
na.rm = TRUE
) %>%
broom::tidy()
```
```
prop_tenure_test %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 13\.9: T\-test output for estimates of property victimization rates between properties that are owned versus rented, NCVS 2021
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 61\.62 | 16\.04 | \<0\.0001 | 169\.00 | 54\.03 | 69\.21 | Design\-based t\-test | two.sided |
The output of the statistical test shown in Table [13\.9](c13-ncvs-vignette.html#tab:ncvs-vign-prop-stat-test-gt-tab) indicates a difference of 61.6 between the property victimization rates of renters and owners, and the test is highly significant, with a p-value of \<0.0001.
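As a quick sanity check (a sketch using the `prop_tenure` estimates computed above), the difference reported by the t-test should line up, up to sign and rounding, with the gap between the two group estimates:
```
# Difference between the Rented and Owned point estimates from prop_tenure
prop_tenure %>%
  filter(!is.na(Tenure)) %>%
  summarize(rate_gap = diff(Property_Rate))
```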
13\.8 Exercises
---------------
1. What proportion of completed motor vehicle thefts are not reported to the police? Hint: Use the codebook to look at the definition of Type of Crime (V4529\).
2. How many violent crimes occur in each region?
3. What is the property victimization rate among each income level?
4. What is the difference in the violent victimization rate between males and females? Is this difference statistically significant?
13\.1 Introduction
------------------
The National Crime Victimization Survey (NCVS) is a household survey sponsored by the Bureau of Justice Statistics (BJS), which collects data on criminal victimization, including characteristics of the crimes, offenders, and victims. Crime types include both household and personal crimes, as well as violent and non\-violent crimes. The population of interest of this survey is all people in the United States age 12 and older living in housing units and non\-institutional group quarters.
The NCVS has been ongoing since 1992\. An earlier survey, the National Crime Survey, was run from 1972 to 1991 ([U. S. Bureau of Justice Statistics 2017](#ref-ncvs_tech_2016)). The survey is administered using a rotating panel. When an address enters the sample, the residents of that address are interviewed every 6 months for a total of 7 interviews. If the initial residents move away from the address during the period and new residents move in, the new residents are included in the survey, as people are not followed when they move.
NCVS data are publicly available and distributed by the Inter-university Consortium for Political and Social Research (ICPSR), with data going back to 1992. The vignette in this book includes data from 2021 ([U.S. Bureau of Justice Statistics 2022](#ref-ncvs_data_2021)). The NCVS data structure is complicated, and the User’s Guide contains examples for analysis in SAS, SUDAAN, SPSS, and Stata, but not R ([Shook-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)). This vignette adapts those examples for R.
13\.2 Data structure
--------------------
The data from ICPSR are distributed as five files; the unique identifiers for each file are indicated below:
* Address Record \- `YEARQ`, `IDHH`
* Household Record \- `YEARQ`, `IDHH`
* Person Record \- `YEARQ`, `IDHH`, `IDPER`
* Incident Record \- `YEARQ`, `IDHH`, `IDPER`
* 2021 Collection Year Incident \- `YEARQ`, `IDHH`, `IDPER`
In this vignette, we focus on the household, person, and incident files and have selected a subset of columns for use in the examples. We have included data in the {srvyexploR} package with this subset of columns, but the complete data files can be downloaded from [ICPSR](https://www.icpsr.umich.edu/web/NACJD/studies/38429).
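For orientation, here is a minimal sketch of loading the subset files from {srvyexploR} (assuming the package is installed and attached; the object names match those used in the preparation code later in this vignette):
```
# Assumes {srvyexploR} is installed; these dataset names are the ones
# used in the file-preparation code below
library(srvyexploR)

dim(ncvs_2021_household)
dim(ncvs_2021_person)
dim(ncvs_2021_incident)
```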
13\.3 Survey notation
---------------------
The NCVS User Guide ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)) uses the following notation:
* \\(i\\) represents NCVS households, identified on the household\-level file with the household identification number `IDHH`.
* \\(j\\) represents NCVS individual respondents within household \\(i\\), identified on the person\-level file with the person identification number `IDPER`.
* \\(k\\) represents reporting periods (i.e., `YEARQ`) for household \\(i\\) and individual respondent \\(j\\).
* \\(l\\) represents victimization records for respondent \\(j\\) in household \\(i\\) and reporting period \\(k\\). Each record on the NCVS incident\-level file is associated with a victimization record \\(l\\).
* \\(D\\) represents one or more domain characteristics of interest in the calculation of NCVS estimates. For victimization totals and proportions, domains can be defined on the basis of crime types (e.g., violent crimes, property crimes), characteristics of victims (e.g., age, sex, household income), or characteristics of the victimizations (e.g., victimizations reported to police, victimizations committed with a weapon present). Domains could also be a combination of all of these types of characteristics. For example, in the calculation of victimization rates, domains are defined on the basis of the characteristics of the victims.
* \\(A\_a\\) represents the level \\(a\\) of covariate \\(A\\). Covariate \\(A\\) is defined in the calculation of victimization proportions and represents the characteristic we want to obtain the distribution of victimizations in domain \\(D\\).
* \\(C\\) represents the personal or property crime for which we want to obtain a victimization rate.
In this vignette, we discuss four estimates:
1. Victimization totals estimate the number of criminal victimizations with a given characteristic. As demonstrated below, these can be calculated from any of the data files. The estimated victimization total, \\(\\hat{t}\_D\\) for domain \\(D\\) is estimated as
\\\[ \\hat{t}\_D \= \\sum\_{ijkl \\in D} v\_{ijkl}\\]
where \\(v\_{ijkl}\\) is the series\-adjusted victimization weight for household \\(i\\), respondent \\(j\\), reporting period \\(k\\), and victimization \\(l\\), represented in the data as `WGTVICCY`.
2. Victimization proportions estimate characteristics among victimizations or victims. Victimization proportions are calculated using the incident data file. The estimated victimization proportion for domain \\(D\\) across level \\(a\\) of covariate \\(A\\), \\(\\hat{p}\_{A\_a,D}\\) is
\\\[ \\hat{p}\_{A\_a,D} \=\\frac{\\sum\_{ijkl \\in A\_a, D} v\_{ijkl}}{\\sum\_{ijkl \\in D} v\_{ijkl}}.\\]
The numerator is the number of incidents with a particular characteristic in a domain, and the denominator is the number of incidents in a domain.
3. Victimization rates are estimates of the number of victimizations per 1,000 persons or households in the population[29](#fn29). Victimization rates are calculated using the household or person\-level data files. The estimated victimization rate for crime \\(C\\) in domain \\(D\\) is
\\\[\\hat{VR}\_{C,D}\= \\frac{\\sum\_{ijkl \\in C,D} v\_{ijkl}}{\\sum\_{ijk \\in D} w\_{ijk}}\\times 1000\\]
where \\(w\_{ijk}\\) is the person weight (`WGTPERCY`) for personal crimes or household weight (`WGTHHCY`) for household crimes. The numerator is the number of incidents in a domain, and the denominator is the number of persons or households in a domain. Notice that the weights in the numerator and denominator are different; this is important, and in the syntax and examples below, we discuss how to make an estimate that involves two weights.
4. Prevalence rates are estimates of the percentage of the population (persons or households) who are victims of a crime. These are estimated using the household or person\-level data files. The estimated prevalence rate for crime \\(C\\) in domain \\(D\\) is
\\\[ \\hat{PR}\_{C, D}\= \\frac{\\sum\_{ijk \\in {C,D}} I\_{ij}w\_{ijk}}{\\sum\_{ijk \\in D} w\_{ijk}} \\times 100\\]
where \\(I\_{ij}\\) is an indicator that a person or household in domain \\(D\\) was a victim of crime \\(C\\) at any time in the year. The numerator is the number of victims in domain \\(D\\) for crime \\(C\\), and the denominator is the number of people or households in the population.
13\.4 Data file preparation
---------------------------
Some work is necessary to prepare the files before analysis. The design variables indicating pseudo\-stratum (`V2117`) and half\-sample code (`V2118`) are only included on the household file, so they must be added to the person and incident files for any analysis.
For victimization rates, we need to know the victimization status for both victims and non-victims. Therefore, the incident file must be summarized and merged onto the household or person files for household-level and person-level crimes, respectively. We begin by discussing how to create these incident summary files, following Section 2.2 of the NCVS User’s Guide ([Shook-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)).
### 13\.4\.1 Preparing files for estimation of victimization rates
Each record on the incident file represents one victimization, which is not the same as one incident. Some victimizations comprise several similar instances whose details the victim cannot differentiate; these are labeled “series crimes.” Appendix A of the User’s Guide indicates how to calculate the series weight in other statistical languages.
Here, we adapt that code for R. Essentially, if a victimization is a series crime, its series weight is top\-coded at 10 based on the number of actual victimizations, that is, even if the crime occurred more than 10 times, it is counted as 10 times to reduce the influence of extreme outliers. If an incident is a series crime, but the number of occurrences is unknown, the series weight is set to 6\. A description of the variables used to create indicators of series and the associated weights is included in Table [13\.1](c13-ncvs-vignette.html#tab:cb-incident).
TABLE 13\.1: Codebook for incident variables, related to series weight
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V4016 | How many times incident occur last 6 months | 1–996 | Number of times |
| | | 997 | Don’t know |
| V4017 | How many incidents | 1 | 1–5 incidents (not a “series”) |
| | | 2 | 6 or more incidents |
| | | 8 | Residue (invalid data) |
| V4018 | Incidents similar in detail | 1 | Similar |
| | | 2 | Different (not in a “series”) |
| | | 8 | Residue (invalid data) |
| V4019 | Enough detail to distinguish incidents | 1 | Yes (not a “series”) |
| | | 2 | No (is a “series”) |
| | | 8 | Residue (invalid data) |
| WGTVICCY | Adjusted victimization weight | | Numeric |
We create four variables related to series crimes. First, we create a variable called `series` using `V4017`, `V4018`, and `V4019`, where an incident is considered a series crime if there are 6 or more incidents (`V4017`), the incidents are similar in detail (`V4018`), and there is not enough detail to distinguish the incidents (`V4019`). Second, we top-code the number of incidents (`V4016`) by creating a variable `n10v4016`, which is set to 10 if `V4016 > 10`. Third, we create `serieswgt` from the two new variables `series` and `n10v4016`, so that series crimes with an unknown number of occurrences get a weight of 6, other series crimes get their top-coded count, and non-series crimes get a weight of 1. Finally, we create the new weight (`NEWWGT`) by multiplying `serieswgt` by the existing weight (`WGTVICCY`).
```
inc_series <- ncvs_2021_incident %>%
mutate(
series = case_when(
V4017 %in% c(1, 8) ~ 1,
V4018 %in% c(2, 8) ~ 1,
V4019 %in% c(1, 8) ~ 1,
TRUE ~ 2
),
n10v4016 = case_when(
V4016 %in% c(997, 998) ~ NA_real_,
V4016 > 10 ~ 10,
TRUE ~ V4016
),
serieswgt = case_when(
series == 2 & is.na(n10v4016) ~ 6,
series == 2 ~ n10v4016,
TRUE ~ 1
),
NEWWGT = WGTVICCY * serieswgt
)
```
The next step in preparing the files for estimation is to create indicators on the victimization file for characteristics of interest. Almost all BJS publications limit the analysis to records where the victimization occurred in the United States (where `V4022` is not equal to 1\). We do this for all estimates as well. A brief codebook of variables for this task is located in Table [13\.2](c13-ncvs-vignette.html#tab:cb-crimetype).
TABLE 13\.2: Codebook for incident variables, crime type indicators and characteristics
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V4022 | In what city/town/village | 1 | Outside U.S. |
| | | 2 | Not inside a city/town/village |
| | | 3 | Same city/town/village as present residence |
| | | 4 | Different city/town/village as present residence |
| | | 5 | Don’t know |
| | | 6 | Don’t know if 2, 4, or 5 |
| V4049 | Did offender have a weapon | 1 | Yes |
| | | 2 | No |
| | | 3 | Don’t know |
| V4050 | What was the weapon that offender had | 1 | At least one good entry |
| | | 3 | Indicates “Yes\-Type Weapon\-NA” |
| | | 7 | Indicates “Gun Type Unknown” |
| | | 8 | No good entry |
| V4051 | Hand gun | 0 | No |
| | | 1 | Yes |
| V4052 | Other gun | 0 | No |
| | | 1 | Yes |
| V4053 | Knife | 0 | No |
| | | 1 | Yes |
| V4399 | Reported to police | 1 | Yes |
| | | 2 | No |
| | | 3 | Don’t know |
| V4529 | Type of crime code | 01 | Completed rape |
| | | 02 | Attempted rape |
| | | 03 | Sexual attack with serious assault |
| | | 04 | Sexual attack with minor assault |
| | | 05 | Completed robbery with injury from serious assault |
| | | 06 | Completed robbery with injury from minor assault |
| | | 07 | Completed robbery without injury from minor assault |
| | | 08 | Attempted robbery with injury from serious assault |
| | | 09 | Attempted robbery with injury from minor assault |
| | | 10 | Attempted robbery without injury |
| | | 11 | Completed aggravated assault with injury |
| | | 12 | Attempted aggravated assault with weapon |
| | | 13 | Threatened assault with weapon |
| | | 14 | Simple assault completed with injury |
| | | 15 | Sexual assault without injury |
| | | 16 | Unwanted sexual contact without force |
| | | 17 | Assault without weapon without injury |
| | | 18 | Verbal threat of rape |
| | | 19 | Verbal threat of sexual assault |
| | | 20 | Verbal threat of assault |
| | | 21 | Completed purse snatching |
| | | 22 | Attempted purse snatching |
| | | 23 | Pocket picking (completed only) |
| | | 31 | Completed burglary, forcible entry |
| | | 32 | Completed burglary, unlawful entry without force |
| | | 33 | Attempted forcible entry |
| | | 40 | Completed motor vehicle theft |
| | | 41 | Attempted motor vehicle theft |
| | | 54 | Completed theft less than $10 |
| | | 55 | Completed theft $10 to $49 |
| | | 56 | Completed theft $50 to $249 |
| | | 57 | Completed theft $250 or greater |
| | | 58 | Completed theft value NA |
| | | 59 | Attempted theft |
Using these variables, we create the following indicators:
1. Property crime
* `V4529` \\(\\ge\\) 31
* Variable: `Property`
2. Violent crime
* `V4529` \\(\\le\\) 20
* Variable: `Violent`
3. Property crime reported to the police
* `V4529` \\(\\ge\\) 31 and `V4399`\=1
* Variable: `Property_ReportPolice`
4. Violent crime reported to the police
* `V4529` \\(\\le\\) 20 and `V4399`\=1
* Variable: `Violent_ReportPolice`
5. Aggravated assault without a weapon
* `V4529` in 11:13 and `V4049`\=2
* Variable: `AAST_NoWeap`
6. Aggravated assault with a firearm
* `V4529` in 11:13 and `V4049`\=1 and (`V4051`\=1 or `V4052`\=1 or `V4050`\=7\)
* Variable: `AAST_Firearm`
7. Aggravated assault with a knife or sharp object
* `V4529` in 11:13 and `V4049`\=1 and (`V4053`\=1 or `V4054`\=1\)
* Variable: `AAST_Knife`
8. Aggravated assault with another type of weapon
* `V4529` in 11:13 and `V4049`\=1 and `V4050`\=1 and not firearm or knife
* Variable: `AAST_Other`
```
inc_ind <- inc_series %>%
filter(V4022 != 1) %>%
mutate(
WeapCat = case_when(
is.na(V4049) ~ NA_character_,
V4049 == 2 ~ "NoWeap",
V4049 == 3 ~ "UnkWeapUse",
V4050 == 3 ~ "Other",
V4051 == 1 | V4052 == 1 | V4050 == 7 ~ "Firearm",
V4053 == 1 | V4054 == 1 ~ "Knife",
TRUE ~ "Other"
),
V4529_num = parse_number(as.character(V4529)),
ReportPolice = V4399 == 1,
Property = V4529_num >= 31,
Violent = V4529_num <= 20,
Property_ReportPolice = Property & ReportPolice,
Violent_ReportPolice = Violent & ReportPolice,
AAST = V4529_num %in% 11:13,
AAST_NoWeap = AAST & WeapCat == "NoWeap",
AAST_Firearm = AAST & WeapCat == "Firearm",
AAST_Knife = AAST & WeapCat == "Knife",
AAST_Other = AAST & WeapCat == "Other"
)
```
This is a good point to pause to look at the output of crosswalks between an original variable and a derived one to check that the logic was programmed correctly and that everything ends up in the expected category.
```
inc_series %>% count(V4022)
```
```
## # A tibble: 6 × 2
## V4022 n
## <fct> <int>
## 1 1 34
## 2 2 65
## 3 3 7697
## 4 4 1143
## 5 5 39
## 6 8 4
```
```
inc_ind %>% count(V4022)
```
```
## # A tibble: 5 × 2
## V4022 n
## <fct> <int>
## 1 2 65
## 2 3 7697
## 3 4 1143
## 4 5 39
## 5 8 4
```
```
inc_ind %>%
count(WeapCat, V4049, V4050, V4051, V4052, V4052, V4053, V4054)
```
```
## # A tibble: 13 × 8
## WeapCat V4049 V4050 V4051 V4052 V4053 V4054 n
## <chr> <fct> <fct> <fct> <fct> <fct> <fct> <int>
## 1 Firearm 1 1 0 1 0 0 15
## 2 Firearm 1 1 0 1 1 1 1
## 3 Firearm 1 1 1 0 0 0 125
## 4 Firearm 1 1 1 0 1 0 2
## 5 Firearm 1 1 1 1 0 0 3
## 6 Firearm 1 7 0 0 0 0 3
## 7 Knife 1 1 0 0 0 1 14
## 8 Knife 1 1 0 0 1 0 71
## 9 NoWeap 2 <NA> <NA> <NA> <NA> <NA> 1794
## 10 Other 1 1 0 0 0 0 147
## 11 Other 1 3 0 0 0 0 26
## 12 UnkWeapUse 3 <NA> <NA> <NA> <NA> <NA> 519
## 13 <NA> <NA> <NA> <NA> <NA> <NA> <NA> 6228
```
```
inc_ind %>%
count(V4529, Property, Violent, AAST) %>%
print(n = 40)
```
```
## # A tibble: 34 × 5
## V4529 Property Violent AAST n
## <fct> <lgl> <lgl> <lgl> <int>
## 1 1 FALSE TRUE FALSE 45
## 2 2 FALSE TRUE FALSE 20
## 3 3 FALSE TRUE FALSE 11
## 4 4 FALSE TRUE FALSE 3
## 5 5 FALSE TRUE FALSE 24
## 6 6 FALSE TRUE FALSE 26
## 7 7 FALSE TRUE FALSE 59
## 8 8 FALSE TRUE FALSE 5
## 9 9 FALSE TRUE FALSE 7
## 10 10 FALSE TRUE FALSE 57
## 11 11 FALSE TRUE TRUE 97
## 12 12 FALSE TRUE TRUE 91
## 13 13 FALSE TRUE TRUE 163
## 14 14 FALSE TRUE FALSE 165
## 15 15 FALSE TRUE FALSE 24
## 16 16 FALSE TRUE FALSE 12
## 17 17 FALSE TRUE FALSE 357
## 18 18 FALSE TRUE FALSE 14
## 19 19 FALSE TRUE FALSE 3
## 20 20 FALSE TRUE FALSE 607
## 21 21 FALSE FALSE FALSE 2
## 22 22 FALSE FALSE FALSE 2
## 23 23 FALSE FALSE FALSE 19
## 24 31 TRUE FALSE FALSE 248
## 25 32 TRUE FALSE FALSE 634
## 26 33 TRUE FALSE FALSE 188
## 27 40 TRUE FALSE FALSE 256
## 28 41 TRUE FALSE FALSE 97
## 29 54 TRUE FALSE FALSE 407
## 30 55 TRUE FALSE FALSE 1006
## 31 56 TRUE FALSE FALSE 1686
## 32 57 TRUE FALSE FALSE 1420
## 33 58 TRUE FALSE FALSE 798
## 34 59 TRUE FALSE FALSE 395
```
```
inc_ind %>% count(ReportPolice, V4399)
```
```
## # A tibble: 4 × 3
## ReportPolice V4399 n
## <lgl> <fct> <int>
## 1 FALSE 2 5670
## 2 FALSE 3 103
## 3 FALSE 8 12
## 4 TRUE 1 3163
```
```
inc_ind %>%
count(
AAST,
WeapCat,
AAST_NoWeap,
AAST_Firearm,
AAST_Knife,
AAST_Other
)
```
```
## # A tibble: 11 × 7
## AAST WeapCat AAST_NoWeap AAST_Firearm AAST_Knife AAST_Other n
## <lgl> <chr> <lgl> <lgl> <lgl> <lgl> <int>
## 1 FALSE Firearm FALSE FALSE FALSE FALSE 34
## 2 FALSE Knife FALSE FALSE FALSE FALSE 23
## 3 FALSE NoWeap FALSE FALSE FALSE FALSE 1769
## 4 FALSE Other FALSE FALSE FALSE FALSE 27
## 5 FALSE UnkWeapUse FALSE FALSE FALSE FALSE 516
## 6 FALSE <NA> FALSE FALSE FALSE FALSE 6228
## 7 TRUE Firearm FALSE TRUE FALSE FALSE 115
## 8 TRUE Knife FALSE FALSE TRUE FALSE 62
## 9 TRUE NoWeap TRUE FALSE FALSE FALSE 25
## 10 TRUE Other FALSE FALSE FALSE TRUE 146
## 11 TRUE UnkWeapUse FALSE FALSE FALSE FALSE 3
```
After creating indicators of victimization types and characteristics, the file is summarized, and crimes are summed across persons or households by `YEARQ`. Property crimes (i.e., crimes committed against households, such as household burglary or motor vehicle theft) are summed across households, and personal crimes (i.e., crimes committed against an individual, such as assault, robbery, and personal theft) are summed across persons. The indicators are summed using our created series weight variable (`serieswgt`). Additionally, the existing weight variable (`WGTVICCY`) needs to be retained for later analysis.
```
inc_hh_sums <-
inc_ind %>%
filter(V4529_num > 23) %>% # restrict to household crimes
group_by(YEARQ, IDHH) %>%
summarize(
WGTVICCY = WGTVICCY[1],
across(starts_with("Property"),
~ sum(. * serieswgt),
.names = "{.col}"
),
.groups = "drop"
)
inc_pers_sums <-
inc_ind %>%
filter(V4529_num <= 23) %>% # restrict to person crimes
group_by(YEARQ, IDHH, IDPER) %>%
summarize(
WGTVICCY = WGTVICCY[1],
across(c(starts_with("Violent"), starts_with("AAST")),
~ sum(. * serieswgt),
.names = "{.col}"
),
.groups = "drop"
)
```
Now, we merge the victimization summary files into the appropriate files. For any record on the household or person file that is not on the victimization file, the victimization counts are set to 0 after merging. In this step, we also create the victimization adjustment factor. See Section 2\.2\.4 in the User’s Guide for details of why this adjustment is created ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)). It is calculated as follows:
\\\[ A\_{ijk}\=\\frac{v\_{ijk}}{w\_{ijk}}\\]
where \\(w\_{ijk}\\) is the person weight (`WGTPERCY`) for personal crimes or the household weight (`WGTHHCY`) for household crimes, and \\(v\_{ijk}\\) is the victimization weight (`WGTVICCY`) for household \\(i\\), respondent \\(j\\), in reporting period \\(k\\). The adjustment factor is set to 0 if no incidents are reported.
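To see why multiplying by this adjustment factor works, write \\(y\_{ijk}\\) for the series-weighted count of victimizations of crime \\(C\\) merged onto the person or household record (a symbol introduced only for this sketch; it corresponds to summed columns such as `Violent` or `Property` above). The victimization rate from Section 13\.3 can then be expressed as
\\\[\\hat{VR}\_{C,D} \= \\frac{\\sum\_{ijk \\in D} v\_{ijk} y\_{ijk}}{\\sum\_{ijk \\in D} w\_{ijk}} \\times 1000 \= \\frac{\\sum\_{ijk \\in D} w\_{ijk}\\left(A\_{ijk} y\_{ijk} \\times 1000\\right)}{\\sum\_{ijk \\in D} w\_{ijk}},\\]
that is, an ordinary \\(w\_{ijk}\\)-weighted mean of \\(A\_{ijk} y\_{ijk} \\times 1000\\). This is why the estimation sections can compute rates with a single call like `survey_mean(Violent * ADJINC_WT * 1000)` on the person- or household-weighted design, even though two different weights are involved.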
```
hh_z_list <- rep(0, ncol(inc_hh_sums) - 3) %>%
as.list() %>%
setNames(names(inc_hh_sums)[-(1:3)])
pers_z_list <- rep(0, ncol(inc_pers_sums) - 4) %>%
as.list() %>%
setNames(names(inc_pers_sums)[-(1:4)])
hh_vsum <- ncvs_2021_household %>%
full_join(inc_hh_sums, by = c("YEARQ", "IDHH")) %>%
replace_na(hh_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTHHCY))
pers_vsum <- ncvs_2021_person %>%
full_join(inc_pers_sums, by = c("YEARQ", "IDHH", "IDPER")) %>%
replace_na(pers_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTPERCY))
```
### 13\.4\.2 Derived demographic variables
A final step in file preparation for the household and person files is creating any derived variables on the household and person files, such as income categories or age categories, for subgroup analysis. We can do this step before or after merging the victimization counts.
#### 13\.4\.2\.1 Household variables
For the household file, we create categories for tenure (rental status), urbanicity, income, place size, and region. A codebook of the household variables is listed in Table [13\.3](c13-ncvs-vignette.html#tab:cb-hh).
TABLE 13\.3: Codebook for household variables
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V2015 | Tenure | 1 | Owned or being bought |
| | | 2 | Rented for cash |
| | | 3 | No cash rent |
| SC214A | Household Income | 01 | Less than $5,000 |
| | | 02 | $5,000–7,499 |
| | | 03 | $7,500–9,999 |
| | | 04 | $10,000–12,499 |
| | | 05 | $12,500–14,999 |
| | | 06 | $15,000–17,499 |
| | | 07 | $17,500–19,999 |
| | | 08 | $20,000–24,999 |
| | | 09 | $25,000–29,999 |
| | | 10 | $30,000–34,999 |
| | | 11 | $35,000–39,999 |
| | | 12 | $40,000–49,999 |
| | | 13 | $50,000–74,999 |
| | | 15 | $75,000–99,999 |
| | | 16 | $100,000–149,999 |
| | | 17 | $150,000–199,999 |
| | | 18 | $200,000 or more |
| V2126B | Place Size (Population) Code | 00 | Not in a place |
| | | 13 | Population under 10,000 |
| | | 16 | 10,000–49,999 |
| | | 17 | 50,000–99,999 |
| | | 18 | 100,000–249,999 |
| | | 19 | 250,000–499,999 |
| | | 20 | 500,000–999,999 |
| | | 21 | 1,000,000–2,499,999 |
| | | 22 | 2,500,000–4,999,999 |
| | | 23 | 5,000,000 or more |
| V2127B | Region | 1 | Northeast |
| | | 2 | Midwest |
| | | 3 | South |
| | | 4 | West |
| V2143 | Urbanicity | 1 | Urban |
| | | 2 | Suburban |
| | | 3 | Rural |
```
hh_vsum_der <- hh_vsum %>%
mutate(
Tenure = factor(
case_when(
V2015 == 1 ~ "Owned",
!is.na(V2015) ~ "Rented"
),
levels = c("Owned", "Rented")
),
Urbanicity = factor(
case_when(
V2143 == 1 ~ "Urban",
V2143 == 2 ~ "Suburban",
V2143 == 3 ~ "Rural"
),
levels = c("Urban", "Suburban", "Rural")
),
SC214A_num = as.numeric(as.character(SC214A)),
Income = case_when(
SC214A_num <= 8 ~ "Less than $25,000",
SC214A_num <= 12 ~ "$25,000--49,999",
SC214A_num <= 15 ~ "$50,000--99,999",
SC214A_num <= 17 ~ "$100,000--199,999",
SC214A_num <= 18 ~ "$200,000 or more"
),
Income = fct_reorder(Income, SC214A_num, .na_rm = FALSE),
PlaceSize = case_match(
as.numeric(as.character(V2126B)),
0 ~ "Not in a place",
13 ~ "Population under 10,000",
16 ~ "10,000--49,999",
17 ~ "50,000--99,999",
18 ~ "100,000--249,999",
19 ~ "250,000--499,999",
20 ~ "500,000--999,999",
c(21, 22, 23) ~ "1,000,000 or more"
),
PlaceSize = fct_reorder(PlaceSize, as.numeric(V2126B)),
Region = case_match(
as.numeric(V2127B),
1 ~ "Northeast",
2 ~ "Midwest",
3 ~ "South",
4 ~ "West"
),
Region = fct_reorder(Region, as.numeric(V2127B))
)
```
As before, we want to check to make sure the recoded variables we create match the existing data as expected.
```
hh_vsum_der %>% count(Tenure, V2015)
```
```
## # A tibble: 4 × 3
## Tenure V2015 n
## <fct> <fct> <int>
## 1 Owned 1 101944
## 2 Rented 2 46269
## 3 Rented 3 1925
## 4 <NA> <NA> 106322
```
```
hh_vsum_der %>% count(Urbanicity, V2143)
```
```
## # A tibble: 3 × 3
## Urbanicity V2143 n
## <fct> <fct> <int>
## 1 Urban 1 26878
## 2 Suburban 2 173491
## 3 Rural 3 56091
```
```
hh_vsum_der %>% count(Income, SC214A)
```
```
## # A tibble: 18 × 3
## Income SC214A n
## <fct> <fct> <int>
## 1 Less than $25,000 1 7841
## 2 Less than $25,000 2 2626
## 3 Less than $25,000 3 3949
## 4 Less than $25,000 4 5546
## 5 Less than $25,000 5 5445
## 6 Less than $25,000 6 4821
## 7 Less than $25,000 7 5038
## 8 Less than $25,000 8 11887
## 9 $25,000--49,999 9 11550
## 10 $25,000--49,999 10 13689
## 11 $25,000--49,999 11 13655
## 12 $25,000--49,999 12 23282
## 13 $50,000--99,999 13 44601
## 14 $50,000--99,999 15 33353
## 15 $100,000--199,999 16 34287
## 16 $100,000--199,999 17 15317
## 17 $200,000 or more 18 16892
## 18 <NA> <NA> 2681
```
```
hh_vsum_der %>% count(PlaceSize, V2126B)
```
```
## # A tibble: 10 × 3
## PlaceSize V2126B n
## <fct> <fct> <int>
## 1 Not in a place 0 69484
## 2 Population under 10,000 13 39873
## 3 10,000--49,999 16 53002
## 4 50,000--99,999 17 27205
## 5 100,000--249,999 18 24461
## 6 250,000--499,999 19 13111
## 7 500,000--999,999 20 15194
## 8 1,000,000 or more 21 6167
## 9 1,000,000 or more 22 3857
## 10 1,000,000 or more 23 4106
```
```
hh_vsum_der %>% count(Region, V2127B)
```
```
## # A tibble: 4 × 3
## Region V2127B n
## <fct> <fct> <int>
## 1 Northeast 1 41585
## 2 Midwest 2 74666
## 3 South 3 87783
## 4 West 4 52426
```
#### 13\.4\.2\.2 Person variables
For the person file, we create categories for sex, race/Hispanic origin, age group, and marital status. A codebook of the person variables is located in Table [13\.4](c13-ncvs-vignette.html#tab:cb-pers). We also merge the household demographics and the design variables (`V2117` and `V2118`) onto the person file.
TABLE 13\.4: Codebook for person variables
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V3014 | Age | | 12–90 |
| V3015 | Current Marital Status | 1 | Married |
| | | 2 | Widowed |
| | | 3 | Divorced |
| | | 4 | Separated |
| | | 5 | Never married |
| V3018 | Sex | 1 | Male |
| | | 2 | Female |
| V3023A | Race | 01 | White only |
| | | 02 | Black only |
| | | 03 | American Indian, Alaska native only |
| | | 04 | Asian only |
| | | 05 | Hawaiian/Pacific Islander only |
| | | 06 | White\-Black |
| | | 07 | White\-American Indian |
| | | 08 | White\-Asian |
| | | 09 | White\-Hawaiian |
| | | 10 | Black\-American Indian |
| | | 11 | Black\-Asian |
| | | 12 | Black\-Hawaiian/Pacific Islander |
| | | 13 | American Indian\-Asian |
| | | 14 | Asian\-Hawaiian/Pacific Islander |
| | | 15 | White\-Black\-American Indian |
| | | 16 | White\-Black\-Asian |
| | | 17 | White\-American Indian\-Asian |
| | | 18 | White\-Asian\-Hawaiian |
| | | 19 | 2 or 3 races |
| | | 20 | 4 or 5 races |
| V3024 | Hispanic Origin | 1 | Yes |
| | | 2 | No |
```
NHOPI <- "Native Hawaiian or Other Pacific Islander"
pers_vsum_der <- pers_vsum %>%
mutate(
Sex = factor(case_when(
V3018 == 1 ~ "Male",
V3018 == 2 ~ "Female"
)),
RaceHispOrigin = factor(
case_when(
V3024 == 1 ~ "Hispanic",
V3023A == 1 ~ "White",
V3023A == 2 ~ "Black",
V3023A == 4 ~ "Asian",
V3023A == 5 ~ NHOPI,
TRUE ~ "Other"
),
levels = c(
"White", "Black", "Hispanic",
"Asian", NHOPI, "Other"
)
),
V3014_num = as.numeric(as.character(V3014)),
AgeGroup = case_when(
V3014_num <= 17 ~ "12--17",
V3014_num <= 24 ~ "18--24",
V3014_num <= 34 ~ "25--34",
V3014_num <= 49 ~ "35--49",
V3014_num <= 64 ~ "50--64",
V3014_num <= 90 ~ "65 or older"
),
AgeGroup = fct_reorder(AgeGroup, V3014_num),
MaritalStatus = factor(
case_when(
V3015 == 1 ~ "Married",
V3015 == 2 ~ "Widowed",
V3015 == 3 ~ "Divorced",
V3015 == 4 ~ "Separated",
V3015 == 5 ~ "Never married"
),
levels = c(
"Never married", "Married",
"Widowed", "Divorced",
"Separated"
)
)
) %>%
left_join(
hh_vsum_der %>% select(
YEARQ, IDHH,
V2117, V2118, Tenure:Region
),
by = c("YEARQ", "IDHH")
)
```
As before, we want to check to make sure the recoded variables we create match the existing data as expected.
```
pers_vsum_der %>% count(Sex, V3018)
```
```
## # A tibble: 2 × 3
## Sex V3018 n
## <fct> <fct> <int>
## 1 Female 2 150956
## 2 Male 1 140922
```
```
pers_vsum_der %>% count(RaceHispOrigin, V3024)
```
```
## # A tibble: 11 × 3
## RaceHispOrigin V3024 n
## <fct> <fct> <int>
## 1 White 2 197292
## 2 White 8 883
## 3 Black 2 29947
## 4 Black 8 120
## 5 Hispanic 1 41450
## 6 Asian 2 16015
## 7 Asian 8 61
## 8 Native Hawaiian or Other Pacific Islander 2 891
## 9 Native Hawaiian or Other Pacific Islander 8 9
## 10 Other 2 5161
## 11 Other 8 49
```
```
pers_vsum_der %>%
filter(RaceHispOrigin != "Hispanic" |
is.na(RaceHispOrigin)) %>%
count(RaceHispOrigin, V3023A)
```
```
## # A tibble: 20 × 3
## RaceHispOrigin V3023A n
## <fct> <fct> <int>
## 1 White 1 198175
## 2 Black 2 30067
## 3 Asian 4 16076
## 4 Native Hawaiian or Other Pacific Islander 5 900
## 5 Other 3 1319
## 6 Other 6 1217
## 7 Other 7 1025
## 8 Other 8 837
## 9 Other 9 184
## 10 Other 10 178
## 11 Other 11 87
## 12 Other 12 27
## 13 Other 13 13
## 14 Other 14 53
## 15 Other 15 136
## 16 Other 16 45
## 17 Other 17 11
## 18 Other 18 33
## 19 Other 19 22
## 20 Other 20 23
```
```
pers_vsum_der %>%
group_by(AgeGroup) %>%
summarize(
minAge = min(V3014),
maxAge = max(V3014),
.groups = "drop"
)
```
```
## # A tibble: 6 × 3
## AgeGroup minAge maxAge
## <fct> <dbl> <dbl>
## 1 12--17 12 17
## 2 18--24 18 24
## 3 25--34 25 34
## 4 35--49 35 49
## 5 50--64 50 64
## 6 65 or older 65 90
```
```
pers_vsum_der %>% count(MaritalStatus, V3015)
```
```
## # A tibble: 6 × 3
## MaritalStatus V3015 n
## <fct> <fct> <int>
## 1 Never married 5 90425
## 2 Married 1 148131
## 3 Widowed 2 17668
## 4 Divorced 3 28596
## 5 Separated 4 4524
## 6 <NA> 8 2534
```
We then create tibbles that contain only the variables we need, which makes it easier to use them for analyses.
```
hh_vsum_slim <- hh_vsum_der %>%
select(
YEARQ:V2118,
WGTVICCY:ADJINC_WT,
Tenure,
Urbanicity,
Income,
PlaceSize,
Region
)
pers_vsum_slim <- pers_vsum_der %>%
select(YEARQ:WGTPERCY, WGTVICCY:ADJINC_WT, Sex:Region)
```
To calculate estimates about types of crime, such as what percentage of violent crimes are reported to the police, we must use the incident file. The incident file is not guaranteed to have every pseudo\-stratum and half\-sample code, so dummy records are created to append before estimation. Finally, we merge demographic variables onto the incident tibble.
```
dummy_records <- hh_vsum_slim %>%
distinct(V2117, V2118) %>%
mutate(
Dummy = 1,
WGTVICCY = 1,
NEWWGT = 1
)
inc_analysis <- inc_ind %>%
mutate(Dummy = 0) %>%
left_join(select(pers_vsum_slim, YEARQ, IDHH, IDPER, Sex:Region),
by = c("YEARQ", "IDHH", "IDPER")
) %>%
bind_rows(dummy_records) %>%
select(
YEARQ:IDPER,
WGTVICCY,
NEWWGT,
V4529,
WeapCat,
ReportPolice,
Property:Region
)
```
The tibbles `hh_vsum_slim`, `pers_vsum_slim`, and `inc_analysis` can now be used to create design objects and calculate crime rate estimates.
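As a reference point, here is a minimal sketch of how such design objects might be specified with {srvyr}, mirroring the `as_survey()` call used for the prevalence design in the estimation section. The weight choices follow the descriptions above (`WGTHHCY` for households, `WGTPERCY` for persons, and the series-adjusted `NEWWGT` for incidents, assuming `WGTHHCY` is among the columns retained in `hh_vsum_slim`); the object names `hh_des` and `pers_des` are assumed to correspond to the design objects used in the estimation code, and `inc_des` is a name chosen here for illustration:
```
# Sketch only: strata (V2117) and half-sample codes (V2118) were attached
# during file preparation; weight variables follow the text above
hh_des <- hh_vsum_slim %>%
  as_survey(
    weight = WGTHHCY,
    strata = V2117,
    ids = V2118,
    nest = TRUE
  )

pers_des <- pers_vsum_slim %>%
  as_survey(
    weight = WGTPERCY,
    strata = V2117,
    ids = V2118,
    nest = TRUE
  )

# Incident-level design for estimates about victimization characteristics
inc_des <- inc_analysis %>%
  as_survey(
    weight = NEWWGT,
    strata = V2117,
    ids = V2118,
    nest = TRUE
  )
```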
### 13\.4\.1 Preparing files for estimation of victimization rates
Each record on the incident file represents one victimization, which is not the same as one incident. Some victimizations have several instances that make it difficult for the victim to differentiate the details of these incidents, labeled as “series crimes.” Appendix A of the User’s Guide indicates how to calculate the series weight in other statistical languages.
Here, we adapt that code for R. Essentially, if a victimization is a series crime, its series weight is top\-coded at 10 based on the number of actual victimizations, that is, even if the crime occurred more than 10 times, it is counted as 10 times to reduce the influence of extreme outliers. If an incident is a series crime, but the number of occurrences is unknown, the series weight is set to 6\. A description of the variables used to create indicators of series and the associated weights is included in Table [13\.1](c13-ncvs-vignette.html#tab:cb-incident).
TABLE 13\.1: Codebook for incident variables, related to series weight
| | Description | Value | Label |
| --- | --- | --- | --- |
| V4016 | How many times incident occur last 6 months | 1–996 | Number of times |
| | | 997 | Don’t know |
| V4017 | How many incidents | 1 | 1–5 incidents (not a “series”) |
| | | 2 | 6 or more incidents |
| | | 8 | Residue (invalid data) |
| V4018 | Incidents similar in detail | 1 | Similar |
| | | 2 | Different (not in a “series”) |
| | | 8 | Residue (invalid data) |
| V4019 | Enough detail to distinguish incidents | 1 | Yes (not a “series”) |
| | | 2 | No (is a “series”) |
| | | 8 | Residue (invalid data) |
| WGTVICCY | Adjusted victimization weight | | Numeric |
We want to create four variables to indicate if an incident is a series crime. First, we create a variable called `series` using `V4017`, `V4018`, and `V4019` where an incident is considered a series crime if there are 6 or more incidents (`V4107`), the incidents are similar in detail (`V4018`), or there is not enough detail to distinguish the incidents (`V4019`). Second, we top\-code the number of incidents (`V4016`) by creating a variable `n10v4016`, which is set to 10 if `V4016 > 10`. Third, we create the `serieswgt` using the two new variables `series` and `n10v4019` to classify the max series based on missing data and number of incidents. Finally, we create the new weight using our new `serieswgt` variable and the existing weight (`WGTVICCY`).
```
inc_series <- ncvs_2021_incident %>%
mutate(
series = case_when(
V4017 %in% c(1, 8) ~ 1,
V4018 %in% c(2, 8) ~ 1,
V4019 %in% c(1, 8) ~ 1,
TRUE ~ 2
),
n10v4016 = case_when(
V4016 %in% c(997, 998) ~ NA_real_,
V4016 > 10 ~ 10,
TRUE ~ V4016
),
serieswgt = case_when(
series == 2 & is.na(n10v4016) ~ 6,
series == 2 ~ n10v4016,
TRUE ~ 1
),
NEWWGT = WGTVICCY * serieswgt
)
```
The next step in preparing the files for estimation is to create indicators on the victimization file for characteristics of interest. Almost all BJS publications limit the analysis to records where the victimization occurred in the United States (where `V4022` is not equal to 1\). We do this for all estimates as well. A brief codebook of variables for this task is located in Table [13\.2](c13-ncvs-vignette.html#tab:cb-crimetype).
TABLE 13\.2: Codebook for incident variables, crime type indicators and characteristics
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V4022 | In what city/town/village | 1 | Outside U.S. |
| | | 2 | Not inside a city/town/village |
| | | 3 | Same city/town/village as present residence |
| | | 4 | Different city/town/village as present residence |
| | | 5 | Don’t know |
| | | 6 | Don’t know if 2, 4, or 5 |
| V4049 | Did offender have a weapon | 1 | Yes |
| | | 2 | No |
| | | 3 | Don’t know |
| V4050 | What was the weapon that offender had | 1 | At least one good entry |
| | | 3 | Indicates “Yes\-Type Weapon\-NA” |
| | | 7 | Indicates “Gun Type Unknown” |
| | | 8 | No good entry |
| V4051 | Hand gun | 0 | No |
| | | 1 | Yes |
| V4052 | Other gun | 0 | No |
| | | 1 | Yes |
| V4053 | Knife | 0 | No |
| | | 1 | Yes |
| V4399 | Reported to police | 1 | Yes |
| | | 2 | No |
| | | 3 | Don’t know |
| V4529 | Type of crime code | 01 | Completed rape |
| | | 02 | Attempted rape |
| | | 03 | Sexual attack with serious assault |
| | | 04 | Sexual attack with minor assault |
| | | 05 | Completed robbery with injury from serious assault |
| | | 06 | Completed robbery with injury from minor assault |
| | | 07 | Completed robbery without injury from minor assault |
| | | 08 | Attempted robbery with injury from serious assault |
| | | 09 | Attempted robbery with injury from minor assault |
| | | 10 | Attempted robbery without injury |
| | | 11 | Completed aggravated assault with injury |
| | | 12 | Attempted aggravated assault with weapon |
| | | 13 | Threatened assault with weapon |
| | | 14 | Simple assault completed with injury |
| | | 15 | Sexual assault without injury |
| | | 16 | Unwanted sexual contact without force |
| | | 17 | Assault without weapon without injury |
| | | 18 | Verbal threat of rape |
| | | 19 | Verbal threat of sexual assault |
| | | 20 | Verbal threat of assault |
| | | 21 | Completed purse snatching |
| | | 22 | Attempted purse snatching |
| | | 23 | Pocket picking (completed only) |
| | | 31 | Completed burglary, forcible entry |
| | | 32 | Completed burglary, unlawful entry without force |
| | | 33 | Attempted forcible entry |
| | | 40 | Completed motor vehicle theft |
| | | 41 | Attempted motor vehicle theft |
| | | 54 | Completed theft less than $10 |
| | | 55 | Completed theft $10 to $49 |
| | | 56 | Completed theft $50 to $249 |
| | | 57 | Completed theft $250 or greater |
| | | 58 | Completed theft value NA |
| | | 59 | Attempted theft |
Using these variables, we create the following indicators:
1. Property crime
* `V4529` \\(\\ge\\) 31
* Variable: `Property`
2. Violent crime
* `V4529` \\(\\le\\) 20
* Variable: `Violent`
3. Property crime reported to the police
* `V4529` \\(\\ge\\) 31 and `V4399`\=1
* Variable: `Property_ReportPolice`
4. Violent crime reported to the police
* `V4529` \< 31 and `V4399`\=1
* Variable: `Violent_ReportPolice`
5. Aggravated assault without a weapon
* `V4529` in 11:12 and `V4049`\=2
* Variable: `AAST_NoWeap`
6. Aggravated assault with a firearm
* `V4529` in 11:12 and `V4049`\=1 and (`V4051`\=1 or `V4052`\=1 or `V4050`\=7\)
* Variable: `AAST_Firearm`
7. Aggravated assault with a knife or sharp object
* `V4529` in 11:12 and `V4049`\=1 and (`V4053`\=1 or `V4054`\=1\)
* Variable: `AAST_Knife`
8. Aggravated assault with another type of weapon
* `V4529` in 11:12 and `V4049`\=1 and `V4050`\=1 and not firearm or knife
* Variable: `AAST_Other`
```
inc_ind <- inc_series %>%
filter(V4022 != 1) %>%
mutate(
WeapCat = case_when(
is.na(V4049) ~ NA_character_,
V4049 == 2 ~ "NoWeap",
V4049 == 3 ~ "UnkWeapUse",
V4050 == 3 ~ "Other",
V4051 == 1 | V4052 == 1 | V4050 == 7 ~ "Firearm",
V4053 == 1 | V4054 == 1 ~ "Knife",
TRUE ~ "Other"
),
V4529_num = parse_number(as.character(V4529)),
ReportPolice = V4399 == 1,
Property = V4529_num >= 31,
Violent = V4529_num <= 20,
Property_ReportPolice = Property & ReportPolice,
Violent_ReportPolice = Violent & ReportPolice,
AAST = V4529_num %in% 11:13,
AAST_NoWeap = AAST & WeapCat == "NoWeap",
AAST_Firearm = AAST & WeapCat == "Firearm",
AAST_Knife = AAST & WeapCat == "Knife",
AAST_Other = AAST & WeapCat == "Other"
)
```
This is a good point to pause to look at the output of crosswalks between an original variable and a derived one to check that the logic was programmed correctly and that everything ends up in the expected category.
```
inc_series %>% count(V4022)
```
```
## # A tibble: 6 × 2
## V4022 n
## <fct> <int>
## 1 1 34
## 2 2 65
## 3 3 7697
## 4 4 1143
## 5 5 39
## 6 8 4
```
```
inc_ind %>% count(V4022)
```
```
## # A tibble: 5 × 2
## V4022 n
## <fct> <int>
## 1 2 65
## 2 3 7697
## 3 4 1143
## 4 5 39
## 5 8 4
```
```
inc_ind %>%
count(WeapCat, V4049, V4050, V4051, V4052, V4052, V4053, V4054)
```
```
## # A tibble: 13 × 8
## WeapCat V4049 V4050 V4051 V4052 V4053 V4054 n
## <chr> <fct> <fct> <fct> <fct> <fct> <fct> <int>
## 1 Firearm 1 1 0 1 0 0 15
## 2 Firearm 1 1 0 1 1 1 1
## 3 Firearm 1 1 1 0 0 0 125
## 4 Firearm 1 1 1 0 1 0 2
## 5 Firearm 1 1 1 1 0 0 3
## 6 Firearm 1 7 0 0 0 0 3
## 7 Knife 1 1 0 0 0 1 14
## 8 Knife 1 1 0 0 1 0 71
## 9 NoWeap 2 <NA> <NA> <NA> <NA> <NA> 1794
## 10 Other 1 1 0 0 0 0 147
## 11 Other 1 3 0 0 0 0 26
## 12 UnkWeapUse 3 <NA> <NA> <NA> <NA> <NA> 519
## 13 <NA> <NA> <NA> <NA> <NA> <NA> <NA> 6228
```
```
inc_ind %>%
count(V4529, Property, Violent, AAST) %>%
print(n = 40)
```
```
## # A tibble: 34 × 5
## V4529 Property Violent AAST n
## <fct> <lgl> <lgl> <lgl> <int>
## 1 1 FALSE TRUE FALSE 45
## 2 2 FALSE TRUE FALSE 20
## 3 3 FALSE TRUE FALSE 11
## 4 4 FALSE TRUE FALSE 3
## 5 5 FALSE TRUE FALSE 24
## 6 6 FALSE TRUE FALSE 26
## 7 7 FALSE TRUE FALSE 59
## 8 8 FALSE TRUE FALSE 5
## 9 9 FALSE TRUE FALSE 7
## 10 10 FALSE TRUE FALSE 57
## 11 11 FALSE TRUE TRUE 97
## 12 12 FALSE TRUE TRUE 91
## 13 13 FALSE TRUE TRUE 163
## 14 14 FALSE TRUE FALSE 165
## 15 15 FALSE TRUE FALSE 24
## 16 16 FALSE TRUE FALSE 12
## 17 17 FALSE TRUE FALSE 357
## 18 18 FALSE TRUE FALSE 14
## 19 19 FALSE TRUE FALSE 3
## 20 20 FALSE TRUE FALSE 607
## 21 21 FALSE FALSE FALSE 2
## 22 22 FALSE FALSE FALSE 2
## 23 23 FALSE FALSE FALSE 19
## 24 31 TRUE FALSE FALSE 248
## 25 32 TRUE FALSE FALSE 634
## 26 33 TRUE FALSE FALSE 188
## 27 40 TRUE FALSE FALSE 256
## 28 41 TRUE FALSE FALSE 97
## 29 54 TRUE FALSE FALSE 407
## 30 55 TRUE FALSE FALSE 1006
## 31 56 TRUE FALSE FALSE 1686
## 32 57 TRUE FALSE FALSE 1420
## 33 58 TRUE FALSE FALSE 798
## 34 59 TRUE FALSE FALSE 395
```
```
inc_ind %>% count(ReportPolice, V4399)
```
```
## # A tibble: 4 × 3
## ReportPolice V4399 n
## <lgl> <fct> <int>
## 1 FALSE 2 5670
## 2 FALSE 3 103
## 3 FALSE 8 12
## 4 TRUE 1 3163
```
```
inc_ind %>%
count(
AAST,
WeapCat,
AAST_NoWeap,
AAST_Firearm,
AAST_Knife,
AAST_Other
)
```
```
## # A tibble: 11 × 7
## AAST WeapCat AAST_NoWeap AAST_Firearm AAST_Knife AAST_Other n
## <lgl> <chr> <lgl> <lgl> <lgl> <lgl> <int>
## 1 FALSE Firearm FALSE FALSE FALSE FALSE 34
## 2 FALSE Knife FALSE FALSE FALSE FALSE 23
## 3 FALSE NoWeap FALSE FALSE FALSE FALSE 1769
## 4 FALSE Other FALSE FALSE FALSE FALSE 27
## 5 FALSE UnkWeapUse FALSE FALSE FALSE FALSE 516
## 6 FALSE <NA> FALSE FALSE FALSE FALSE 6228
## 7 TRUE Firearm FALSE TRUE FALSE FALSE 115
## 8 TRUE Knife FALSE FALSE TRUE FALSE 62
## 9 TRUE NoWeap TRUE FALSE FALSE FALSE 25
## 10 TRUE Other FALSE FALSE FALSE TRUE 146
## 11 TRUE UnkWeapUse FALSE FALSE FALSE FALSE 3
```
After creating indicators of victimization types and characteristics, the file is summarized, and crimes are summed across persons or households by `YEARQ.` Property crimes (i.e., crimes committed against households, such as household burglary or motor vehicle theft) are summed across households, and personal crimes (i.e., crimes committed against an individual, such as assault, robbery, and personal theft) are summed across persons. The indicators are summed using our created series weight variable (`serieswgt`). Additionally, the existing weight variable (`WGTVICCY`) needs to be retained for later analysis.
```
inc_hh_sums <-
inc_ind %>%
filter(V4529_num > 23) %>% # restrict to household crimes
group_by(YEARQ, IDHH) %>%
summarize(
WGTVICCY = WGTVICCY[1],
across(starts_with("Property"),
~ sum(. * serieswgt),
.names = "{.col}"
),
.groups = "drop"
)
inc_pers_sums <-
inc_ind %>%
filter(V4529_num <= 23) %>% # restrict to person crimes
group_by(YEARQ, IDHH, IDPER) %>%
summarize(
WGTVICCY = WGTVICCY[1],
across(c(starts_with("Violent"), starts_with("AAST")),
~ sum(. * serieswgt),
.names = "{.col}"
),
.groups = "drop"
)
```
Now, we merge the victimization summary files into the appropriate files. For any record on the household or person file that is not on the victimization file, the victimization counts are set to 0 after merging. In this step, we also create the victimization adjustment factor. See Section 2\.2\.4 in the User’s Guide for details of why this adjustment is created ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)). It is calculated as follows:
\\\[ A\_{ijk}\=\\frac{v\_{ijk}}{w\_{ijk}}\\]
where \\(w\_{ijk}\\) is the person weight (`WGTPERCY`) for personal crimes or the household weight (`WGTHHCY`) for household crimes, and \\(v\_{ijk}\\) is the victimization weight (`WGTVICCY`) for household \\(i\\), respondent \\(j\\), in reporting period \\(k\\). The adjustment factor is set to 0 if no incidents are reported.
```
hh_z_list <- rep(0, ncol(inc_hh_sums) - 3) %>%
as.list() %>%
setNames(names(inc_hh_sums)[-(1:3)])
pers_z_list <- rep(0, ncol(inc_pers_sums) - 4) %>%
as.list() %>%
setNames(names(inc_pers_sums)[-(1:4)])
hh_vsum <- ncvs_2021_household %>%
full_join(inc_hh_sums, by = c("YEARQ", "IDHH")) %>%
replace_na(hh_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTHHCY))
pers_vsum <- ncvs_2021_person %>%
full_join(inc_pers_sums, by = c("YEARQ", "IDHH", "IDPER")) %>%
replace_na(pers_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTPERCY))
```
### 13\.4\.2 Derived demographic variables
A final step in file preparation for the household and person files is creating any derived variables on the household and person files, such as income categories or age categories, for subgroup analysis. We can do this step before or after merging the victimization counts.
#### 13\.4\.2\.1 Household variables
For the household file, we create categories for tenure (rental status), urbanicity, income, place size, and region. A codebook of the household variables is listed in Table [13\.3](c13-ncvs-vignette.html#tab:cb-hh).
TABLE 13\.3: Codebook for household variables
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V2015 | Tenure | 1 | Owned or being bought |
| | | 2 | Rented for cash |
| | | 3 | No cash rent |
| SC214A | Household Income | 01 | Less than $5,000 |
| | | 02 | $5,000–7,499 |
| | | 03 | $7,500–9,999 |
| | | 04 | $10,000–12,499 |
| | | 05 | $12,500–14,999 |
| | | 06 | $15,000–17,499 |
| | | 07 | $17,500–19,999 |
| | | 08 | $20,000–24,999 |
| | | 09 | $25,000–29,999 |
| | | 10 | $30,000–34,999 |
| | | 11 | $35,000–39,999 |
| | | 12 | $40,000–49,999 |
| | | 13 | $50,000–74,999 |
| | | 15 | $75,000–99,999 |
| | | 16 | $100,000–149,999 |
| | | 17 | $150,000–199,999 |
| | | 18 | $200,000 or more |
| V2126B | Place Size (Population) Code | 00 | Not in a place |
| | | 13 | Population under 10,000 |
| | | 16 | 10,000–49,999 |
| | | 17 | 50,000–99,999 |
| | | 18 | 100,000–249,999 |
| | | 19 | 250,000–499,999 |
| | | 20 | 500,000–999,999 |
| | | 21 | 1,000,000–2,499,999 |
| | | 22 | 2,500,000–4,999,999 |
| | | 23 | 5,000,000 or more |
| V2127B | Region | 1 | Northeast |
| | | 2 | Midwest |
| | | 3 | South |
| | | 4 | West |
| V2143 | Urbanicity | 1 | Urban |
| | | 2 | Suburban |
| | | 3 | Rural |
```
hh_vsum_der <- hh_vsum %>%
mutate(
Tenure = factor(
case_when(
V2015 == 1 ~ "Owned",
!is.na(V2015) ~ "Rented"
),
levels = c("Owned", "Rented")
),
Urbanicity = factor(
case_when(
V2143 == 1 ~ "Urban",
V2143 == 2 ~ "Suburban",
V2143 == 3 ~ "Rural"
),
levels = c("Urban", "Suburban", "Rural")
),
SC214A_num = as.numeric(as.character(SC214A)),
Income = case_when(
SC214A_num <= 8 ~ "Less than $25,000",
SC214A_num <= 12 ~ "$25,000--49,999",
SC214A_num <= 15 ~ "$50,000--99,999",
SC214A_num <= 17 ~ "$100,000--199,999",
SC214A_num <= 18 ~ "$200,000 or more"
),
Income = fct_reorder(Income, SC214A_num, .na_rm = FALSE),
PlaceSize = case_match(
as.numeric(as.character(V2126B)),
0 ~ "Not in a place",
13 ~ "Population under 10,000",
16 ~ "10,000--49,999",
17 ~ "50,000--99,999",
18 ~ "100,000--249,999",
19 ~ "250,000--499,999",
20 ~ "500,000--999,999",
c(21, 22, 23) ~ "1,000,000 or more"
),
PlaceSize = fct_reorder(PlaceSize, as.numeric(V2126B)),
Region = case_match(
as.numeric(V2127B),
1 ~ "Northeast",
2 ~ "Midwest",
3 ~ "South",
4 ~ "West"
),
Region = fct_reorder(Region, as.numeric(V2127B))
)
```
As before, we want to check to make sure the recoded variables we create match the existing data as expected.
```
hh_vsum_der %>% count(Tenure, V2015)
```
```
## # A tibble: 4 × 3
## Tenure V2015 n
## <fct> <fct> <int>
## 1 Owned 1 101944
## 2 Rented 2 46269
## 3 Rented 3 1925
## 4 <NA> <NA> 106322
```
```
hh_vsum_der %>% count(Urbanicity, V2143)
```
```
## # A tibble: 3 × 3
## Urbanicity V2143 n
## <fct> <fct> <int>
## 1 Urban 1 26878
## 2 Suburban 2 173491
## 3 Rural 3 56091
```
```
hh_vsum_der %>% count(Income, SC214A)
```
```
## # A tibble: 18 × 3
## Income SC214A n
## <fct> <fct> <int>
## 1 Less than $25,000 1 7841
## 2 Less than $25,000 2 2626
## 3 Less than $25,000 3 3949
## 4 Less than $25,000 4 5546
## 5 Less than $25,000 5 5445
## 6 Less than $25,000 6 4821
## 7 Less than $25,000 7 5038
## 8 Less than $25,000 8 11887
## 9 $25,000--49,999 9 11550
## 10 $25,000--49,999 10 13689
## 11 $25,000--49,999 11 13655
## 12 $25,000--49,999 12 23282
## 13 $50,000--99,999 13 44601
## 14 $50,000--99,999 15 33353
## 15 $100,000--199,999 16 34287
## 16 $100,000--199,999 17 15317
## 17 $200,000 or more 18 16892
## 18 <NA> <NA> 2681
```
```
hh_vsum_der %>% count(PlaceSize, V2126B)
```
```
## # A tibble: 10 × 3
## PlaceSize V2126B n
## <fct> <fct> <int>
## 1 Not in a place 0 69484
## 2 Population under 10,000 13 39873
## 3 10,000--49,999 16 53002
## 4 50,000--99,999 17 27205
## 5 100,000--249,999 18 24461
## 6 250,000--499,999 19 13111
## 7 500,000--999,999 20 15194
## 8 1,000,000 or more 21 6167
## 9 1,000,000 or more 22 3857
## 10 1,000,000 or more 23 4106
```
```
hh_vsum_der %>% count(Region, V2127B)
```
```
## # A tibble: 4 × 3
## Region V2127B n
## <fct> <fct> <int>
## 1 Northeast 1 41585
## 2 Midwest 2 74666
## 3 South 3 87783
## 4 West 4 52426
```
#### 13\.4\.2\.2 Person variables
For the person file, we create categories for sex, race/Hispanic origin, age categories, and marital status. A codebook of the household variables is located in Table [13\.4](c13-ncvs-vignette.html#tab:cb-pers). We also merge the household demographics to the person file as well as the design variables (`V2117` and `V2118`).
TABLE 13\.4: Codebook for person variables
| Variable | Description | Value | Label |
| --- | --- | --- | --- |
| V3014 | Age | | 12–90 |
| V3015 | Current Marital Status | 1 | Married |
| | | 2 | Widowed |
| | | 3 | Divorced |
| | | 4 | Separated |
| | | 5 | Never married |
| V3018 | Sex | 1 | Male |
| | | 2 | Female |
| V3023A | Race | 01 | White only |
| | | 02 | Black only |
| | | 03 | American Indian, Alaska native only |
| | | 04 | Asian only |
| | | 05 | Hawaiian/Pacific Islander only |
| | | 06 | White\-Black |
| | | 07 | White\-American Indian |
| | | 08 | White\-Asian |
| | | 09 | White\-Hawaiian |
| | | 10 | Black\-American Indian |
| | | 11 | Black\-Asian |
| | | 12 | Black\-Hawaiian/Pacific Islander |
| | | 13 | American Indian\-Asian |
| | | 14 | Asian\-Hawaiian/Pacific Islander |
| | | 15 | White\-Black\-American Indian |
| | | 16 | White\-Black\-Asian |
| | | 17 | White\-American Indian\-Asian |
| | | 18 | White\-Asian\-Hawaiian |
| | | 19 | 2 or 3 races |
| | | 20 | 4 or 5 races |
| V3024 | Hispanic Origin | 1 | Yes |
| | | 2 | No |
```
NHOPI <- "Native Hawaiian or Other Pacific Islander"
pers_vsum_der <- pers_vsum %>%
mutate(
Sex = factor(case_when(
V3018 == 1 ~ "Male",
V3018 == 2 ~ "Female"
)),
RaceHispOrigin = factor(
case_when(
V3024 == 1 ~ "Hispanic",
V3023A == 1 ~ "White",
V3023A == 2 ~ "Black",
V3023A == 4 ~ "Asian",
V3023A == 5 ~ NHOPI,
TRUE ~ "Other"
),
levels = c(
"White", "Black", "Hispanic",
"Asian", NHOPI, "Other"
)
),
V3014_num = as.numeric(as.character(V3014)),
AgeGroup = case_when(
V3014_num <= 17 ~ "12--17",
V3014_num <= 24 ~ "18--24",
V3014_num <= 34 ~ "25--34",
V3014_num <= 49 ~ "35--49",
V3014_num <= 64 ~ "50--64",
V3014_num <= 90 ~ "65 or older"
),
AgeGroup = fct_reorder(AgeGroup, V3014_num),
MaritalStatus = factor(
case_when(
V3015 == 1 ~ "Married",
V3015 == 2 ~ "Widowed",
V3015 == 3 ~ "Divorced",
V3015 == 4 ~ "Separated",
V3015 == 5 ~ "Never married"
),
levels = c(
"Never married", "Married",
"Widowed", "Divorced",
"Separated"
)
)
) %>%
left_join(
hh_vsum_der %>% select(
YEARQ, IDHH,
V2117, V2118, Tenure:Region
),
by = c("YEARQ", "IDHH")
)
```
As before, we want to check to make sure the recoded variables we create match the existing data as expected.
```
pers_vsum_der %>% count(Sex, V3018)
```
```
## # A tibble: 2 × 3
## Sex V3018 n
## <fct> <fct> <int>
## 1 Female 2 150956
## 2 Male 1 140922
```
```
pers_vsum_der %>% count(RaceHispOrigin, V3024)
```
```
## # A tibble: 11 × 3
## RaceHispOrigin V3024 n
## <fct> <fct> <int>
## 1 White 2 197292
## 2 White 8 883
## 3 Black 2 29947
## 4 Black 8 120
## 5 Hispanic 1 41450
## 6 Asian 2 16015
## 7 Asian 8 61
## 8 Native Hawaiian or Other Pacific Islander 2 891
## 9 Native Hawaiian or Other Pacific Islander 8 9
## 10 Other 2 5161
## 11 Other 8 49
```
```
pers_vsum_der %>%
filter(RaceHispOrigin != "Hispanic" |
is.na(RaceHispOrigin)) %>%
count(RaceHispOrigin, V3023A)
```
```
## # A tibble: 20 × 3
## RaceHispOrigin V3023A n
## <fct> <fct> <int>
## 1 White 1 198175
## 2 Black 2 30067
## 3 Asian 4 16076
## 4 Native Hawaiian or Other Pacific Islander 5 900
## 5 Other 3 1319
## 6 Other 6 1217
## 7 Other 7 1025
## 8 Other 8 837
## 9 Other 9 184
## 10 Other 10 178
## 11 Other 11 87
## 12 Other 12 27
## 13 Other 13 13
## 14 Other 14 53
## 15 Other 15 136
## 16 Other 16 45
## 17 Other 17 11
## 18 Other 18 33
## 19 Other 19 22
## 20 Other 20 23
```
```
pers_vsum_der %>%
group_by(AgeGroup) %>%
summarize(
minAge = min(V3014),
maxAge = max(V3014),
.groups = "drop"
)
```
```
## # A tibble: 6 × 3
## AgeGroup minAge maxAge
## <fct> <dbl> <dbl>
## 1 12--17 12 17
## 2 18--24 18 24
## 3 25--34 25 34
## 4 35--49 35 49
## 5 50--64 50 64
## 6 65 or older 65 90
```
```
pers_vsum_der %>% count(MaritalStatus, V3015)
```
```
## # A tibble: 6 × 3
## MaritalStatus V3015 n
## <fct> <fct> <int>
## 1 Never married 5 90425
## 2 Married 1 148131
## 3 Widowed 2 17668
## 4 Divorced 3 28596
## 5 Separated 4 4524
## 6 <NA> 8 2534
```
We then create tibbles that contain only the variables we need, which makes it easier to use them for analyses.
```
hh_vsum_slim <- hh_vsum_der %>%
select(
YEARQ:V2118,
WGTVICCY:ADJINC_WT,
Tenure,
Urbanicity,
Income,
PlaceSize,
Region
)
pers_vsum_slim <- pers_vsum_der %>%
select(YEARQ:WGTPERCY, WGTVICCY:ADJINC_WT, Sex:Region)
```
To calculate estimates about types of crime, such as what percentage of violent crimes are reported to the police, we must use the incident file. The incident file is not guaranteed to have every pseudo\-stratum and half\-sample code, so dummy records are created and appended to the incident data before estimation. Finally, we merge the demographic variables onto the incident tibble.
```
dummy_records <- hh_vsum_slim %>%
distinct(V2117, V2118) %>%
mutate(
Dummy = 1,
WGTVICCY = 1,
NEWWGT = 1
)
inc_analysis <- inc_ind %>%
mutate(Dummy = 0) %>%
left_join(select(pers_vsum_slim, YEARQ, IDHH, IDPER, Sex:Region),
by = c("YEARQ", "IDHH", "IDPER")
) %>%
bind_rows(dummy_records) %>%
select(
YEARQ:IDPER,
WGTVICCY,
NEWWGT,
V4529,
WeapCat,
ReportPolice,
Property:Region
)
```
The tibbles `hh_vsum_slim`, `pers_vsum_slim`, and `inc_analysis` can now be used to create design objects and calculate crime rate estimates.
13\.5 Survey design objects
---------------------------
All the data preparation above is necessary to create the design objects and finally begin analysis. We create three design objects for different types of analysis, depending on the estimate we are creating. For the incident data, the analysis weight is `NEWWGT`, which we constructed previously. The household and person\-level data use `WGTHHCY` and `WGTPERCY`, respectively. For all analyses, `V2117` is the strata variable, and `V2118` is the cluster/PSU variable. This information can be found in the User’s Guide ([Shook\-Sa, Couzens, and Berzofsky 2015](#ref-ncvs_user_guide)).
```
inc_des <- inc_analysis %>%
as_survey_design(
weight = NEWWGT,
strata = V2117,
ids = V2118,
nest = TRUE
)
hh_des <- hh_vsum_slim %>%
as_survey_design(
weight = WGTHHCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
pers_des <- pers_vsum_slim %>%
as_survey_design(
weight = WGTPERCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
```
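Before moving to estimation, it can be helpful to confirm that the design objects picked up the intended sampling information. Printing a design object created with {srvyr} lists its sampling variables (ids, strata, and weights); the quick check below is a sketch added for illustration and is not part of the original workflow.
```
# Sketch: print one design object to confirm that V2117 (strata), V2118
# (clusters/PSUs), and NEWWGT (weights) were registered as intended
inc_des
```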
13\.6 Calculating estimates
---------------------------
Now that we have prepared our data and created the design objects, we can calculate our estimates. As a reminder, those are:
1. Victimization totals estimate the number of criminal victimizations with a given characteristic.
2. Victimization proportions estimate characteristics among victimizations or victims.
3. Victimization rates are estimates of the number of victimizations per 1,000 persons or households in the population.
4. Prevalence rates are estimates of the percentage of the population (persons or households) who are victims of a crime.
### 13\.6\.1 Estimation 1: Victimization totals
There are two ways to calculate victimization totals. Using the incident design object (`inc_des`) is the most straightforward method, but the person (`pers_des`) and household (`hh_des`) design objects can be used as well if the adjustment factor (`ADJINC_WT`) is incorporated. In the example below, the total number of property and violent victimizations is calculated first using the incident file and then using the household and person design objects. The incident file is smaller, so estimation with it is faster, but the estimates are the same, as illustrated in Tables [13\.5](c13-ncvs-vignette.html#tab:ncvs-vign-vt1), [13\.6](c13-ncvs-vignette.html#tab:ncvs-vign-vt2a), and [13\.7](c13-ncvs-vignette.html#tab:ncvs-vign-vt2b).
```
vt1 <-
inc_des %>%
summarize(
Property_Vzn = survey_total(Property, na.rm = TRUE),
Violent_Vzn = survey_total(Violent, na.rm = TRUE)
) %>%
gt() %>%
tab_spanner(
label = "Property Crime",
columns = starts_with("Property")
) %>%
tab_spanner(
label = "Violent Crime",
columns = starts_with("Violent")
) %>%
cols_label(
ends_with("Vzn") ~ "Total",
ends_with("se") ~ "S.E."
) %>%
fmt_number(decimals = 0)
vt2a <- hh_des %>%
summarize(Property_Vzn = survey_total(Property * ADJINC_WT,
na.rm = TRUE
)) %>%
gt() %>%
tab_spanner(
label = "Property Crime",
columns = starts_with("Property")
) %>%
cols_label(
ends_with("Vzn") ~ "Total",
ends_with("se") ~ "S.E."
) %>%
fmt_number(decimals = 0)
vt2b <- pers_des %>%
summarize(Violent_Vzn = survey_total(Violent * ADJINC_WT,
na.rm = TRUE
)) %>%
gt() %>%
tab_spanner(
label = "Violent Crime",
columns = starts_with("Violent")
) %>%
cols_label(
ends_with("Vzn") ~ "Total",
ends_with("se") ~ "S.E."
) %>%
fmt_number(decimals = 0)
```
TABLE 13\.5: Estimates of total property and violent victimizations with standard errors calculated using the incident design object, 2021 (vt1\)
| Property Crime Total | Property Crime S.E. | Violent Crime Total | Violent Crime S.E. |
| --- | --- | --- | --- |
| 11,682,056 | 263,844 | 4,598,306 | 198,115 |
TABLE 13\.6: Estimates of total property victimizations with standard errors calculated using the household design object, 2021 (vt2a)
| Property Crime Total | Property Crime S.E. |
| --- | --- |
| 11,682,056 | 263,844 |
TABLE 13\.7: Estimates of total violent victimizations with standard errors calculated using the person design object, 2021 (vt2b)
| Violent Crime Total | Violent Crime S.E. |
| --- | --- |
| 4,598,306 | 198,115 |
The victimization totals estimated using the incident file are equivalent to those estimated using the person and household files. There were an estimated 11,682,056 property victimizations and 4,598,306 violent victimizations in 2021\.
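If we want to verify this equivalence directly rather than reading it off the formatted tables, we can drop the {gt} styling and compare the raw summaries. The sketch below is added for illustration and uses the same estimators as above.
```
# Sketch: raw property victimization totals without gt formatting, first from
# the incident design object and then from the household design object with
# the adjustment factor applied; the two estimates should match
inc_des %>%
  summarize(Property_Vzn = survey_total(Property, na.rm = TRUE))

hh_des %>%
  summarize(Property_Vzn = survey_total(Property * ADJINC_WT, na.rm = TRUE))
```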
### 13\.6\.2 Estimation 2: Victimization proportions
Victimization proportions describe features of the victimizations themselves, for example, whether a victimization was reported to the police. The key here is that these are estimates among victimizations, not among the population. These types of estimates can only be calculated using the incident design object (`inc_des`).
For example, we could be interested in the percentage of property victimizations reported to the police as shown in the following code with an estimate, the standard error, and 95% confidence interval:
```
prop1 <- inc_des %>%
filter(Property) %>%
summarize(Pct = survey_mean(ReportPolice,
na.rm = TRUE,
proportion = TRUE,
vartype = c("se", "ci")
) * 100)
prop1
```
```
## # A tibble: 1 × 4
## Pct Pct_se Pct_low Pct_upp
## <dbl> <dbl> <dbl> <dbl>
## 1 30.8 0.798 29.2 32.4
```
Or, the percentage of violent victimizations that are in urban areas:
```
prop2 <- inc_des %>%
filter(Violent) %>%
summarize(Pct = survey_mean(Urbanicity == "Urban",
na.rm = TRUE
) * 100)
prop2
```
```
## # A tibble: 1 × 2
## Pct Pct_se
## <dbl> <dbl>
## 1 18.1 1.49
```
In 2021, we estimate that 30\.8% of property crimes were reported to the police, and 18\.1% of violent crimes occurred in urban areas.
### 13\.6\.3 Estimation 3: Victimization rates
Victimization rates measure the number of victimizations per population. They are not an estimate of the proportion of households or persons who are victimized, which is the prevalence rate described in Section [13\.6\.4](c13-ncvs-vignette.html#prev-rate). Victimization rates are estimated using the household (`hh_des`) or person (`pers_des`) design objects depending on the type of crime, and the adjustment factor (`ADJINC_WT`) must be incorporated. We return to the example of property and violent victimizations used in the example for victimization totals (Section [13\.6\.1](c13-ncvs-vignette.html#vic-tot)). In the following example, the property victimization totals are calculated as above, as well as the property victimization rate (using `survey_mean()`) and the population size using `survey_total()`.
Victimization rates use the incident weight in the numerator and the person or household weight in the denominator. This is accomplished by calculating the rates with the weight adjustment (`ADJINC_WT`) multiplied by the estimate of interest. Let’s look at an example of property victimization.
```
vr_prop <- hh_des %>%
summarize(
Property_Vzn = survey_total(Property * ADJINC_WT,
na.rm = TRUE
),
Property_Rate = survey_mean(Property * ADJINC_WT * 1000,
na.rm = TRUE
),
PopSize = survey_total(1, vartype = NULL)
)
vr_prop
```
```
## # A tibble: 1 × 5
## Property_Vzn Property_Vzn_se Property_Rate Property_Rate_se PopSize
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 11682056. 263844. 90.3 1.95 129319232.
```
In the output above, we see the estimate for property victimization rate in 2021 was 90\.3 per 1,000 households. This is consistent with calculating the number of victimizations per 1,000 population, as demonstrated in the following code output.
```
vr_prop %>%
select(-ends_with("se")) %>%
mutate(Property_Rate_manual = Property_Vzn / PopSize * 1000)
```
```
## # A tibble: 1 × 4
## Property_Vzn Property_Rate PopSize Property_Rate_manual
## <dbl> <dbl> <dbl> <dbl>
## 1 11682056. 90.3 129319232. 90.3
```
Victimization rates can also be calculated based on particular characteristics of the victimization. In the following example, we calculate the rate of aggravated assault with no weapon, with a firearm, with a knife, and with another weapon.
```
pers_des %>%
summarize(across(
starts_with("AAST_"),
~ survey_mean(. * ADJINC_WT * 1000, na.rm = TRUE)
))
```
```
## # A tibble: 1 × 8
## AAST_NoWeap AAST_NoWeap_se AAST_Firearm AAST_Firearm_se AAST_Knife
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.249 0.0595 0.860 0.101 0.455
## # ℹ 3 more variables: AAST_Knife_se <dbl>, AAST_Other <dbl>,
## # AAST_Other_se <dbl>
```
A common desire is to calculate victimization rates by several characteristics. For example, we may want to calculate the violent victimization rate and aggravated assault rate by sex, race/Hispanic origin, age group, marital status, and household income. This requires a separate `group_by()` statement for each categorization. Thus, we make a function to do this and then use the `map()` function from the {purrr} package to loop through the variables ([Wickham and Henry 2023](#ref-R-purrr)). This function takes a demographic variable as its input (`byvar`) and calculates the violent and aggravated assault victimization rate for each level. It then creates some columns with the variable, the level of each variable, and a numeric version of the variable (`LevelNum`) for sorting later. The function is run across multiple variables using `map()`, and then the results are stacked into a single output using `bind_rows()`.
```
pers_est_by <- function(byvar) {
pers_des %>%
rename(Level := {{ byvar }}) %>%
filter(!is.na(Level)) %>%
group_by(Level) %>%
summarize(
Violent = survey_mean(Violent * ADJINC_WT * 1000, na.rm = TRUE),
AAST = survey_mean(AAST * ADJINC_WT * 1000, na.rm = TRUE)
) %>%
mutate(
Variable = byvar,
LevelNum = as.numeric(Level),
Level = as.character(Level)
) %>%
select(Variable, Level, LevelNum, everything())
}
pers_est_df <-
c("Sex", "RaceHispOrigin", "AgeGroup", "MaritalStatus", "Income") %>%
map(pers_est_by) %>%
bind_rows()
```
The output from all the estimates is cleaned to create better labels, such as going from “RaceHispOrigin” to “Race/Hispanic Origin.” Finally, the {gt} package is used to make a publishable table (Table [13\.8](c13-ncvs-vignette.html#tab:ncvs-vign-rates-demo-tab)). Using the functions from the {gt} package, we add column labels and footnotes and present estimates rounded to the first decimal place ([Iannone et al. 2024](#ref-R-gt)).
```
vr_gt <- pers_est_df %>%
mutate(
Variable = case_when(
Variable == "RaceHispOrigin" ~ "Race/Hispanic Origin",
Variable == "MaritalStatus" ~ "Marital Status",
Variable == "AgeGroup" ~ "Age",
TRUE ~ Variable
)
) %>%
select(-LevelNum) %>%
group_by(Variable) %>%
gt(rowname_col = "Level") %>%
tab_spanner(
label = "Violent Crime",
id = "viol_span",
columns = c("Violent", "Violent_se")
) %>%
tab_spanner(
label = "Aggravated Assault",
columns = c("AAST", "AAST_se")
) %>%
cols_label(
Violent = "Rate",
Violent_se = "S.E.",
AAST = "Rate",
AAST_se = "S.E.",
) %>%
fmt_number(
columns = c("Violent", "Violent_se", "AAST", "AAST_se"),
decimals = 1
) %>%
tab_footnote(
footnote = "Includes rape or sexual assault, robbery,
aggravated assault, and simple assault.",
locations = cells_column_spanners(spanners = "viol_span")
) %>%
tab_footnote(
footnote = "Excludes persons of Hispanic origin.",
locations =
cells_stub(rows = Level %in%
c("White", "Black", "Asian", NHOPI, "Other"))
) %>%
tab_footnote(
footnote = "Includes persons who identified as
Native Hawaiian or Other Pacific Islander only.",
locations = cells_stub(rows = Level == NHOPI)
) %>%
tab_footnote(
footnote = "Includes persons who identified as American Indian or
Alaska Native only or as two or more races.",
locations = cells_stub(rows = Level == "Other")
) %>%
tab_source_note(
source_note = md("*Note*: Rates per 1,000 persons age 12 or older.")
) %>%
tab_source_note(
source_note = md("*Source*: Bureau of Justice Statistics,
National Crime Victimization Survey, 2021.")
) %>%
tab_stubhead(label = "Victim Demographic") %>%
tab_caption("Rate and standard error of violent victimization,
by type of crime and demographic characteristics, 2021")
```
```
vr_gt
```
TABLE 13\.8: Rate and standard error of violent victimization, by type of crime and demographic characteristics, 2021
| Victim Demographic | Violent Crime1 Rate | Violent Crime1 S.E. | Aggravated Assault Rate | Aggravated Assault S.E. |
| --- | --- | --- | --- | --- |
| Sex | | | | |
| Female | 15\.5 | 0\.9 | 2\.3 | 0\.2 |
| Male | 17\.5 | 1\.1 | 3\.2 | 0\.3 |
| Race/Hispanic Origin | | | | |
| White2 | 16\.1 | 0\.9 | 2\.7 | 0\.3 |
| Black2 | 18\.5 | 2\.2 | 3\.7 | 0\.7 |
| Hispanic | 15\.9 | 1\.7 | 2\.3 | 0\.4 |
| Asian2 | 8\.6 | 1\.3 | 1\.9 | 0\.6 |
| Native Hawaiian or Other Pacific Islander2,3 | 36\.1 | 34\.4 | 0\.0 | 0\.0 |
| Other2,4 | 45\.4 | 13\.0 | 6\.2 | 2\.0 |
| Age | | | | |
| 12\-\-17 | 13\.2 | 2\.2 | 2\.5 | 0\.8 |
| 18\-\-24 | 23\.1 | 2\.1 | 3\.9 | 0\.9 |
| 25\-\-34 | 22\.0 | 2\.1 | 4\.0 | 0\.6 |
| 35\-\-49 | 19\.4 | 1\.6 | 3\.6 | 0\.5 |
| 50\-\-64 | 16\.9 | 1\.9 | 2\.0 | 0\.3 |
| 65 or older | 6\.4 | 1\.1 | 1\.1 | 0\.3 |
| Marital Status | | | | |
| Never married | 22\.2 | 1\.4 | 4\.0 | 0\.4 |
| Married | 9\.5 | 0\.9 | 1\.5 | 0\.2 |
| Widowed | 10\.7 | 3\.5 | 0\.9 | 0\.2 |
| Divorced | 27\.4 | 2\.9 | 4\.0 | 0\.7 |
| Separated | 36\.8 | 6\.7 | 8\.8 | 3\.1 |
| Income | | | | |
| Less than $25,000 | 29\.6 | 2\.5 | 5\.1 | 0\.7 |
| $25,000\-\-49,999 | 16\.9 | 1\.5 | 3\.0 | 0\.4 |
| $50,000\-\-99,999 | 14\.6 | 1\.1 | 1\.9 | 0\.3 |
| $100,000\-\-199,999 | 12\.2 | 1\.3 | 2\.5 | 0\.4 |
| $200,000 or more | 9\.7 | 1\.4 | 1\.7 | 0\.6 |
| *Note*: Rates per 1,000 persons age 12 or older. | | | | |
| *Source*: Bureau of Justice Statistics, National Crime Victimization Survey, 2021\. | | | | |
| 1 Includes rape or sexual assault, robbery, aggravated assault, and simple assault. | | | | |
| 2 Excludes persons of Hispanic origin. | | | | |
| 3 Includes persons who identified as Native Hawaiian or Other Pacific Islander only. | | | | |
| 4 Includes persons who identified as American Indian or Alaska Native only or as two or more races. | | | | |
### 13\.6\.4 Estimation 4: Prevalence rates
Prevalence rates differ from victimization rates, as the numerator is the number of people or households victimized rather than the number of victimizations. To calculate the prevalence rates, we must run another summary of the data by calculating an indicator for whether a person or household is a victim of a particular crime at any point in the year. Below is an example of calculating the indicator and then the prevalence rate of violent crime and aggravated assault.
```
pers_prev_des <-
pers_vsum_slim %>%
mutate(Year = floor(YEARQ)) %>%
mutate(
Violent_Ind = sum(Violent) > 0,
AAST_Ind = sum(AAST) > 0,
.by = c("Year", "IDHH", "IDPER")
) %>%
as_survey(
weight = WGTPERCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
pers_prev_ests <- pers_prev_des %>%
summarize(
Violent_Prev = survey_mean(Violent_Ind * 100),
AAST_Prev = survey_mean(AAST_Ind * 100)
)
pers_prev_ests
```
```
## # A tibble: 1 × 4
## Violent_Prev Violent_Prev_se AAST_Prev AAST_Prev_se
## <dbl> <dbl> <dbl> <dbl>
## 1 0.980 0.0349 0.215 0.0143
```
In the example above, the indicator is multiplied by 100 to return a percentage rather than a proportion. In 2021, we estimate that 0\.98% of people aged 12 and older were victims of violent crime in the United States, and 0\.22% were victims of aggravated assault.
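The prevalence rate counts each victim only once, whereas the victimization rate counts every victimization. For a direct point of comparison, the sketch below (added for illustration, not part of the original text) re\-computes the violent victimization rate per 1,000 persons using the same approach as the earlier rate examples.
```
# Sketch: violent victimization rate (victimizations per 1,000 persons aged 12
# or older), for contrast with the prevalence rate above (percentage of
# persons victimized at least once)
pers_des %>%
  summarize(
    Violent_Rate = survey_mean(Violent * ADJINC_WT * 1000, na.rm = TRUE)
  )
```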
13\.7 Statistical testing
-------------------------
For any of the types of estimates discussed, we can also perform statistical testing. For example, we could test whether property victimization rates are different between properties that are owned versus rented. First, we calculate the point estimates.
```
prop_tenure <- hh_des %>%
group_by(Tenure) %>%
summarize(
Property_Rate = survey_mean(Property * ADJINC_WT * 1000,
na.rm = TRUE, vartype = "ci"
),
)
prop_tenure
```
```
## # A tibble: 3 × 4
## Tenure Property_Rate Property_Rate_low Property_Rate_upp
## <fct> <dbl> <dbl> <dbl>
## 1 Owned 68.2 64.3 72.1
## 2 Rented 130. 123. 137.
## 3 <NA> NaN NaN NaN
```
The property victimization rate for rented households is 129\.8 per 1,000 households, while the rate for owned households is 68\.2 per 1,000 households. These rates appear very different, especially given the non\-overlapping confidence intervals. However, estimates from the same survey are not independent, so statistical testing cannot be done by simply comparing confidence intervals. To conduct the statistical test, we first need to create a variable that incorporates the adjusted incident weight (`ADJINC_WT`), and then the test can be conducted on this adjusted variable as discussed in Chapter [6](c06-statistical-testing.html#c06-statistical-testing).
```
prop_tenure_test <- hh_des %>%
mutate(
Prop_Adj = Property * ADJINC_WT * 1000
) %>%
svyttest(
formula = Prop_Adj ~ Tenure,
design = .,
na.rm = TRUE
) %>%
broom::tidy()
```
```
prop_tenure_test %>%
mutate(p.value = pretty_p_value(p.value)) %>%
gt() %>%
fmt_number()
```
TABLE 13\.9: T\-test output for estimates of property victimization rates between properties that are owned versus rented, NCVS 2021
| estimate | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 61\.62 | 16\.04 | \<0\.0001 | 169\.00 | 54\.03 | 69\.21 | Design\-based t\-test | two.sided |
The output of the statistical test shown in Table [13\.9](c13-ncvs-vignette.html#tab:ncvs-vign-prop-stat-test-gt-tab) indicates a difference of 61\.6 between the property victimization rates of renters and owners, and the test is highly significant with a p\-value of \<0\.0001\.
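Because `broom::tidy()` returns an ordinary one\-row tibble, individual quantities can be pulled out of `prop_tenure_test` directly when we want to quote them in text rather than in a table. The line below is a small sketch added for illustration.
```
# Sketch: extract the estimated difference and its confidence bounds from the
# tidied t-test output
prop_tenure_test %>%
  select(estimate, conf.low, conf.high)
```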
13\.8 Exercises
---------------
1. What proportion of completed motor vehicle thefts are not reported to the police? Hint: Use the codebook to look at the definition of Type of Crime (V4529\).
2. How many violent crimes occur in each region?
3. What is the property victimization rate among each income level?
4. What is the difference in the violent victimization rate between males and females? Is this difference statistically significant?
Chapter 14 AmericasBarometer vignette
=====================================
### Prerequisites
For this chapter, load the following packages:
```
library(tidyverse)
library(survey)
library(srvyr)
library(sf)
library(rnaturalearth)
library(rnaturalearthdata)
library(gt)
library(ggpattern)
```
This vignette uses a subset of data from the 2021 AmericasBarometer survey. Download the raw files, available on the [LAPOP website](http://datasets.americasbarometer.org/database/index.php). We work with version 1\.2 of the data, and there are separate files for each of the 22 countries. To import all files into R while ignoring the Stata labels, we recommend running the following code using the `read_stata()` function from the {haven} package ([Wickham, Miller, and Smith 2023](#ref-R-haven)):
```
library(haven) # read_stata(), zap_labels(), and zap_label()
library(here) # here() for building the file paths used below

stata_files <- list.files(here("RawData", "LAPOP_2021"), "*.dta")
read_stata_unlabeled <- function(file) {
read_stata(file) %>%
zap_labels() %>%
zap_label()
}
ambarom_in <- here("RawData", "LAPOP_2021", stata_files) %>%
map_df(read_stata_unlabeled) %>%
select(pais, strata, upm, weight1500, core_a_core_b,
q2, q1tb, covid2at, a4, idio2, idio2cov, it1, jc13,
m1, mil10a, mil10e, ccch1, ccch3, ccus1, ccus3,
edr, ocup4a, q14, q11n, q12c, q12bn,
starts_with("covidedu1"), gi0n,
r15, r18n, r18)
```
The code above reads all the `.dta` files and combines them into one tibble.
14\.1 Introduction
------------------
The AmericasBarometer surveys, conducted by the LAPOP Lab ([LAPOP 2023b](#ref-lapop)), are public opinion surveys of the Americas focused on democracy. The study was launched in 2004/2005 with 11 countries. Though the participating countries change over time, AmericasBarometer maintains a consistent methodology across many of them. In 2021, the study included 22 countries ranging from Canada in the north to Chile and Argentina in the south ([LAPOP 2023a](#ref-lapop-about)).
Historically, surveys were administered through in\-person household interviews, but the COVID\-19 pandemic changed the study significantly. Now, random\-digit dialing (RDD) of mobile phones is used in all countries except the United States and Canada ([LAPOP 2021c](#ref-lapop-tech)). In Canada, LAPOP collaborated with the Environics Institute to collect data from a panel of Canadians using a web survey ([LAPOP 2021a](#ref-lapop-can)). In the United States, YouGov conducted a web survey on behalf of LAPOP among its panelists ([LAPOP 2021b](#ref-lapop-usa)).
The survey includes a core set of questions for all countries, but not every question is asked in each country. Additionally, some questions are only posed to half of the respondents in a country, with different randomized sections ([LAPOP 2021d](#ref-lapop-svy)).
14\.2 Data structure
--------------------
Each country and year has its own file available in Stata format (`.dta`). In this vignette, we download and combine all the data from the 22 participating countries in 2021\. We subset the data to a smaller set of columns, as noted in the Prerequisites box. We recommend reviewing the core questionnaire to understand the common variables across the countries ([LAPOP 2021d](#ref-lapop-svy)).
14\.3 Preparing files
---------------------
Many of the variables are coded as numeric and do not have intuitive variable names, so the next step is to create derived variables and wrangle the data for analysis. Using the core questionnaire as a codebook, we reference the factor descriptions to create derived variables with informative names:
```
ambarom <- ambarom_in %>%
mutate(
Country = factor(
case_match(
pais,
1 ~ "Mexico",
2 ~ "Guatemala",
3 ~ "El Salvador",
4 ~ "Honduras",
5 ~ "Nicaragua",
6 ~ "Costa Rica",
7 ~ "Panama",
8 ~ "Colombia",
9 ~ "Ecuador",
10 ~ "Bolivia",
11 ~ "Peru",
12 ~ "Paraguay",
13 ~ "Chile",
14 ~ "Uruguay",
15 ~ "Brazil",
17 ~ "Argentina",
21 ~ "Dominican Republic",
22 ~ "Haiti",
23 ~ "Jamaica",
24 ~ "Guyana",
40 ~ "United States",
41 ~ "Canada"
)
),
CovidWorry = fct_reorder(
case_match(
covid2at,
1 ~ "Very worried",
2 ~ "Somewhat worried",
3 ~ "A little worried",
4 ~ "Not worried at all"
),
covid2at,
.na_rm = FALSE
)
) %>%
rename(
Educ_NotInSchool = covidedu1_1,
Educ_NormalSchool = covidedu1_2,
Educ_VirtualSchool = covidedu1_3,
Educ_Hybrid = covidedu1_4,
Educ_NoSchool = covidedu1_5,
BroadbandInternet = r18n,
Internet = r18
)
```
At this point, it is a good time to check the cross\-tabs between the original and newly derived variables. These tables help us confirm that we have correctly matched the numeric data from the original dataset to the renamed factor data in the new dataset. For instance, let’s check the original variable `pais` and the derived variable `Country`. We can consult the questionnaire or codebook to confirm that Argentina is coded as `17`, Bolivia as `10`, etc. Similarly, for `CovidWorry` and `covid2at`, we can verify that `Very worried` is coded as `1`, and so on for the other variables.
```
ambarom %>%
count(Country, pais) %>%
print(n = 22)
```
```
## # A tibble: 22 × 3
## Country pais n
## <fct> <dbl> <int>
## 1 Argentina 17 3011
## 2 Bolivia 10 3002
## 3 Brazil 15 3016
## 4 Canada 41 2201
## 5 Chile 13 2954
## 6 Colombia 8 2993
## 7 Costa Rica 6 2977
## 8 Dominican Republic 21 3000
## 9 Ecuador 9 3005
## 10 El Salvador 3 3245
## 11 Guatemala 2 3000
## 12 Guyana 24 3011
## 13 Haiti 22 3088
## 14 Honduras 4 2999
## 15 Jamaica 23 3121
## 16 Mexico 1 2998
## 17 Nicaragua 5 2997
## 18 Panama 7 3183
## 19 Paraguay 12 3004
## 20 Peru 11 3038
## 21 United States 40 1500
## 22 Uruguay 14 3009
```
```
ambarom %>%
count(CovidWorry, covid2at)
```
```
## # A tibble: 5 × 3
## CovidWorry covid2at n
## <fct> <dbl> <int>
## 1 Very worried 1 24327
## 2 Somewhat worried 2 13233
## 3 A little worried 3 11478
## 4 Not worried at all 4 8628
## 5 <NA> NA 6686
```
14\.4 Survey design objects
---------------------------
The technical report is the best reference for understanding how to specify the sampling design in R ([LAPOP 2021c](#ref-lapop-tech)). The data include two weights: `wt` and `weight1500`. The first weight variable is specific to each country and sums to the sample size, but it is calibrated to reflect each country’s demographics. The second weight variable sums to 1500 for each country and is recommended for multi\-country analyses. Although not explicitly stated in the documentation, the Stata syntax example (`svyset upm [pw=weight1500], strata(strata)`) indicates the variable `upm` is a clustering variable, and `strata` is the strata variable. Therefore, the design object for multi\-country analysis is created in R as follows:
```
ambarom_des <- ambarom %>%
as_survey_design(
ids = upm,
strata = strata,
weight = weight1500
)
```
One interesting thing to note is that these weight variables can provide estimates for comparing countries but not for multi\-country estimates. This is because the weights do not account for the different sizes of countries. For example, Canada has about 10% of the population of the United States, but an estimate that uses records from both countries would weight them equally.
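To see this concretely, we can confirm that `weight1500` sums to roughly 1,500 within every country regardless of its population size. The check below is a sketch added for illustration and uses only variables already in `ambarom`.
```
# Sketch: weight1500 sums to about 1,500 per country, so a pooled estimate
# would give Canada and the United States equal influence despite their very
# different population sizes
ambarom %>%
  group_by(Country) %>%
  summarize(total_weight = sum(weight1500, na.rm = TRUE))
```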
14\.5 Calculating estimates
---------------------------
When calculating estimates from the data, we use the survey design object `ambarom_des` and then apply the `survey_mean()` function. The next sections walk through a few examples.
### 14\.5\.1 Example: Worry about COVID\-19
This survey was administered between March and August 2021, with the specific timing varying by country[30](#fn30). Given the state of the pandemic at that time, several questions about COVID\-19 were included. According to the core questionnaire ([LAPOP 2021d](#ref-lapop-svy)), the first question asked about COVID\-19 was:
> How worried are you about the possibility that you or someone in your household will get sick from coronavirus in the next 3 months?
>
>
> \- Very worried
>
> \- Somewhat worried
>
> \- A little worried
>
> \- Not worried at all
If we are interested in those who are very worried or somewhat worried, we can create a new variable (`CovidWorry_bin`) that groups levels of the original question using the `fct_collapse()` function from the {forcats} package ([Wickham 2023](#ref-R-forcats)). We then use the `survey_count()` function to understand how responses are distributed across each category of the original variable (`CovidWorry`) and the new variable (`CovidWorry_bin`).
```
covid_worry_collapse <- ambarom_des %>%
mutate(CovidWorry_bin = fct_collapse(
CovidWorry,
WorriedHi = c("Very worried", "Somewhat worried"),
WorriedLo = c("A little worried", "Not worried at all")
))
covid_worry_collapse %>%
survey_count(CovidWorry_bin, CovidWorry)
```
```
## # A tibble: 5 × 4
## CovidWorry_bin CovidWorry n n_se
## <fct> <fct> <dbl> <dbl>
## 1 WorriedHi Very worried 12369. 83.6
## 2 WorriedHi Somewhat worried 6378. 63.4
## 3 WorriedLo A little worried 5896. 62.6
## 4 WorriedLo Not worried at all 4840. 59.7
## 5 <NA> <NA> 3518. 42.2
```
With this new variable, we can now use `survey_mean()` to calculate the percentage of people in each country who are either very or somewhat worried about COVID\-19\. There are missing data, as indicated in the `survey_count()` output above, so we need to use `na.rm = TRUE` in the `survey_mean()` function to handle the missing values.
```
covid_worry_country_ests <- covid_worry_collapse %>%
group_by(Country) %>%
summarize(p = survey_mean(CovidWorry_bin == "WorriedHi",
na.rm = TRUE
) * 100)
covid_worry_country_ests
```
```
## # A tibble: 22 × 3
## Country p p_se
## <fct> <dbl> <dbl>
## 1 Argentina 65.8 1.08
## 2 Bolivia 71.6 0.960
## 3 Brazil 83.5 0.962
## 4 Canada 48.9 1.34
## 5 Chile 81.8 0.828
## 6 Colombia 67.9 1.12
## 7 Costa Rica 72.6 0.952
## 8 Dominican Republic 50.1 1.13
## 9 Ecuador 71.7 0.967
## 10 El Salvador 52.5 1.02
## # ℹ 12 more rows
```
To view the results for all countries, we can use the {gt} package to create Table [14\.1](c14-ambarom-vignette.html#tab:ambarom-worry-tab) ([Iannone et al. 2024](#ref-R-gt)).
```
covid_worry_country_ests_gt <- covid_worry_country_ests %>%
gt(rowname_col = "Country") %>%
cols_label(
p = "%",
p_se = "S.E."
) %>%
fmt_number(decimals = 1) %>%
tab_source_note(md("*Source*: AmericasBarometer Surveys, 2021"))
```
```
covid_worry_country_ests_gt
```
TABLE 14\.1: Percentage worried about the possibility that they or someone in their household will get sick from coronavirus in the next 3 months
| | % | S.E. |
| --- | --- | --- |
| Argentina | 65\.8 | 1\.1 |
| Bolivia | 71\.6 | 1\.0 |
| Brazil | 83\.5 | 1\.0 |
| Canada | 48\.9 | 1\.3 |
| Chile | 81\.8 | 0\.8 |
| Colombia | 67\.9 | 1\.1 |
| Costa Rica | 72\.6 | 1\.0 |
| Dominican Republic | 50\.1 | 1\.1 |
| Ecuador | 71\.7 | 1\.0 |
| El Salvador | 52\.5 | 1\.0 |
| Guatemala | 69\.3 | 1\.0 |
| Guyana | 60\.0 | 1\.6 |
| Haiti | 54\.4 | 1\.8 |
| Honduras | 64\.6 | 1\.1 |
| Jamaica | 28\.4 | 0\.9 |
| Mexico | 63\.6 | 1\.0 |
| Nicaragua | 80\.0 | 1\.0 |
| Panama | 70\.2 | 1\.0 |
| Paraguay | 61\.5 | 1\.1 |
| Peru | 77\.1 | 2\.5 |
| United States | 46\.6 | 1\.7 |
| Uruguay | 60\.9 | 1\.1 |
| *Source*: AmericasBarometer Surveys, 2021 | | |
### 14\.5\.2 Example: Education affected by COVID\-19
In the core questionnaire ([LAPOP 2021d](#ref-lapop-svy)), respondents were also asked a question about how the pandemic affected education. This question was asked to households with children under the age of 13, and respondents could select more than one option, as follows:
> Did any of these children have their school education affected due to the pandemic?
>
>
> \- No, because they are not yet school age or because they do not attend school for another reason
>
> \- No, their classes continued normally
>
> \- Yes, they went to virtual or remote classes
>
> \- Yes, they switched to a combination of virtual and in\-person classes
>
> \- Yes, they cut all ties with the school
Working with multiple\-choice questions can be both challenging and interesting. Let’s walk through how to analyze this question. If we are interested in the impact on education, we should focus on the data of those whose children are attending school. This means we need to exclude those who selected the first response option: “No, because they are not yet school age or because they do not attend school for another reason.” To do this, we use the `Educ_NotInSchool` variable in the dataset, which has values of `0` and `1`. A value of `1` indicates that the respondent chose the first response option (none of the children are in school), and a value of `0` means that at least one of their children is in school. By filtering the data to those with a value of `0` (they have at least one child in school), we can consider only respondents with at least one child attending school.
Now, let’s review the data for those who selected one of the next three response options:
* No, their classes continued normally: `Educ_NormalSchool`
* Yes, they went to virtual or remote classes: `Educ_VirtualSchool`
* Yes, they switched to a combination of virtual and in\-person classes: `Educ_Hybrid`
The unweighted cross\-tab for these responses is included below. It reveals a wide range of impacts, where many combinations of effects on education are possible.
```
ambarom %>%
filter(Educ_NotInSchool == 0) %>%
count(
Educ_NormalSchool,
Educ_VirtualSchool,
Educ_Hybrid
)
```
```
## # A tibble: 8 × 4
## Educ_NormalSchool Educ_VirtualSchool Educ_Hybrid n
## <dbl> <dbl> <dbl> <int>
## 1 0 0 0 861
## 2 0 0 1 1192
## 3 0 1 0 7554
## 4 0 1 1 280
## 5 1 0 0 833
## 6 1 0 1 18
## 7 1 1 0 72
## 8 1 1 1 7
```
In reviewing the survey question, we might be interested in knowing the answers to the following:
* What percentage of households indicated that school continued as normal with no virtual or hybrid option?
* What percentage of households indicated that the education medium was changed to either virtual or hybrid?
* What percentage of households indicated that they cut ties with their school?
To find the answers, we create indicators for the first two questions, make national estimates for all three questions, and then construct a summary table for easy viewing. First, we create and inspect the indicators and their distributions using `survey_count()`.
```
ambarom_des_educ <- ambarom_des %>%
filter(Educ_NotInSchool == 0) %>%
mutate(
Educ_OnlyNormal = (Educ_NormalSchool == 1 &
Educ_VirtualSchool == 0 &
Educ_Hybrid == 0),
Educ_MediumChange = (Educ_VirtualSchool == 1 |
Educ_Hybrid == 1)
)
ambarom_des_educ %>%
survey_count(
Educ_OnlyNormal,
Educ_NormalSchool,
Educ_VirtualSchool,
Educ_Hybrid
)
```
```
## # A tibble: 8 × 6
## Educ_OnlyNormal Educ_NormalSchool Educ_VirtualSchool Educ_Hybrid
## <lgl> <dbl> <dbl> <dbl>
## 1 FALSE 0 0 0
## 2 FALSE 0 0 1
## 3 FALSE 0 1 0
## 4 FALSE 0 1 1
## 5 FALSE 1 0 1
## 6 FALSE 1 1 0
## 7 FALSE 1 1 1
## 8 TRUE 1 0 0
## # ℹ 2 more variables: n <dbl>, n_se <dbl>
```
```
ambarom_des_educ %>%
survey_count(
Educ_MediumChange,
Educ_VirtualSchool,
Educ_Hybrid
)
```
```
## # A tibble: 4 × 5
## Educ_MediumChange Educ_VirtualSchool Educ_Hybrid n n_se
## <lgl> <dbl> <dbl> <dbl> <dbl>
## 1 FALSE 0 0 880. 26.1
## 2 TRUE 0 1 561. 19.2
## 3 TRUE 1 0 3812. 49.4
## 4 TRUE 1 1 136. 9.86
```
Next, we group the data by country and calculate the population estimates for our three questions.
```
covid_educ_ests <-
ambarom_des_educ %>%
group_by(Country) %>%
summarize(
p_onlynormal = survey_mean(Educ_OnlyNormal, na.rm = TRUE) * 100,
p_mediumchange = survey_mean(Educ_MediumChange, na.rm = TRUE) * 100,
p_noschool = survey_mean(Educ_NoSchool, na.rm = TRUE) * 100,
)
covid_educ_ests
```
```
## # A tibble: 16 × 7
## Country p_onlynormal p_onlynormal_se p_mediumchange p_mediumchange_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Argent… 5.39 1.14 87.1 1.72
## 2 Brazil 4.28 1.17 81.5 2.33
## 3 Chile 0.715 0.267 96.2 0.962
## 4 Colomb… 2.84 0.727 90.3 1.40
## 5 Domini… 3.75 0.793 87.4 1.45
## 6 Ecuador 5.18 0.963 87.5 1.39
## 7 El Sal… 2.92 0.680 85.8 1.53
## 8 Guatem… 3.00 0.727 82.2 1.73
## 9 Guyana 3.34 0.702 85.3 1.67
## 10 Haiti 81.1 2.25 7.25 1.48
## 11 Hondur… 3.68 0.882 80.7 1.72
## 12 Jamaica 5.42 0.950 88.1 1.43
## 13 Panama 7.20 1.18 89.4 1.42
## 14 Paragu… 4.66 0.939 90.7 1.37
## 15 Peru 2.04 0.604 91.8 1.20
## 16 Uruguay 8.60 1.40 84.3 2.02
## # ℹ 2 more variables: p_noschool <dbl>, p_noschool_se <dbl>
```
Finally, to view the results for all countries, we can use the {gt} package to construct Table [14\.2](c14-ambarom-vignette.html#tab:ambarom-covid-ed-der-tab).
```
covid_educ_ests_gt <- covid_educ_ests %>%
gt(rowname_col = "Country") %>%
cols_label(
p_onlynormal = "%",
p_onlynormal_se = "S.E.",
p_mediumchange = "%",
p_mediumchange_se = "S.E.",
p_noschool = "%",
p_noschool_se = "S.E."
) %>%
tab_spanner(
label = "Normal School Only",
columns = c("p_onlynormal", "p_onlynormal_se")
) %>%
tab_spanner(
label = "Medium Change",
columns = c("p_mediumchange", "p_mediumchange_se")
) %>%
tab_spanner(
label = "Cut Ties with School",
columns = c("p_noschool", "p_noschool_se")
) %>%
fmt_number(decimals = 1) %>%
tab_source_note(md("*Source*: AmericasBarometer Surveys, 2021"))
```
```
covid_educ_ests_gt
```
TABLE 14\.2: Impact on education in households with children under the age of 13 who generally attend school
| | Normal School Only | | Medium Change | | Cut Ties with School | |
| --- | --- | --- | --- | --- | --- | --- |
| % | S.E. | % | S.E. | % | S.E. |
| Argentina | 5\.4 | 1\.1 | 87\.1 | 1\.7 | 9\.9 | 1\.6 |
| Brazil | 4\.3 | 1\.2 | 81\.5 | 2\.3 | 22\.1 | 2\.5 |
| Chile | 0\.7 | 0\.3 | 96\.2 | 1\.0 | 4\.0 | 1\.0 |
| Colombia | 2\.8 | 0\.7 | 90\.3 | 1\.4 | 7\.5 | 1\.3 |
| Dominican Republic | 3\.8 | 0\.8 | 87\.4 | 1\.5 | 10\.5 | 1\.4 |
| Ecuador | 5\.2 | 1\.0 | 87\.5 | 1\.4 | 7\.9 | 1\.1 |
| El Salvador | 2\.9 | 0\.7 | 85\.8 | 1\.5 | 11\.8 | 1\.4 |
| Guatemala | 3\.0 | 0\.7 | 82\.2 | 1\.7 | 17\.7 | 1\.8 |
| Guyana | 3\.3 | 0\.7 | 85\.3 | 1\.7 | 13\.0 | 1\.6 |
| Haiti | 81\.1 | 2\.3 | 7\.2 | 1\.5 | 11\.7 | 1\.8 |
| Honduras | 3\.7 | 0\.9 | 80\.7 | 1\.7 | 16\.9 | 1\.6 |
| Jamaica | 5\.4 | 0\.9 | 88\.1 | 1\.4 | 7\.5 | 1\.2 |
| Panama | 7\.2 | 1\.2 | 89\.4 | 1\.4 | 3\.8 | 0\.9 |
| Paraguay | 4\.7 | 0\.9 | 90\.7 | 1\.4 | 6\.4 | 1\.2 |
| Peru | 2\.0 | 0\.6 | 91\.8 | 1\.2 | 6\.8 | 1\.1 |
| Uruguay | 8\.6 | 1\.4 | 84\.3 | 2\.0 | 8\.0 | 1\.6 |
| *Source*: AmericasBarometer Surveys, 2021 | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
In the countries that were asked this question, many households experienced a change in their child’s education medium. However, in Haiti, only 7\.2% of households with children switched to virtual or hybrid learning.
14\.6 Mapping survey data
-------------------------
While the table effectively presents the data, a map could also be insightful. To create a map of the countries, we can use the package {rnaturalearth} and subset North and South America with the `ne_countries()` function ([Massicotte and South 2023](#ref-R-rnaturalearth)). The function returns a simple features (sf) object with many columns ([Pebesma and Bivand 2023](#ref-sf2023man)), but most importantly, `soverignt` (sovereignty), `geounit` (country or territory), and `geometry` (the shape). For an example of the difference between sovereignty and country/territory, the United States, Puerto Rico, and the U.S. Virgin Islands are all separate units with the same sovereignty. A map without data is plotted in Figure [14\.1](c14-ambarom-vignette.html#fig:ambarom-americas-map) using `geom_sf()` from the {ggplot2} package, which plots sf objects ([Wickham 2016](#ref-ggplot2wickham)).
```
country_shape <-
ne_countries(
scale = "medium",
returnclass = "sf",
continent = c("North America", "South America")
)
country_shape %>%
ggplot() +
geom_sf()
```
FIGURE 14\.1: Map of North and South America
The map in Figure [14\.1](c14-ambarom-vignette.html#fig:ambarom-americas-map) appears very wide due to the Aleutian Islands in Alaska extending into the Eastern Hemisphere. We can crop the shapefile to include only the Western Hemisphere using `st_crop()` from the {sf} package, which removes some of the trailing islands of Alaska.
```
country_shape_crop <- country_shape %>%
st_crop(c(
xmin = -180,
xmax = 0,
ymin = -90,
ymax = 90
))
```
Now that we have the necessary shape files, our next step is to match our survey data to the map. Countries can be named differently (e.g., “U.S.”, “U.S.A.”, “United States”). To make sure we can visualize our survey data on the map, we need to match the country names in both the survey data and the map data. To do this, we can use the `anti_join()` function from the {dplyr} package to identify the countries in the survey data that are not in the map data. Table [14\.3](c14-ambarom-vignette.html#tab:ambarom-map-merge-check-1-tab) shows the countries in the survey data but not the map data, and Table [14\.4](c14-ambarom-vignette.html#tab:ambarom-map-merge-check-2-tab) shows the countries in the map data but not the survey data. As shown below, the United States is referred to as “United States” in the survey data but “United States of America” in the map data.
```
survey_country_list <- ambarom %>% distinct(Country)
survey_country_list_gt <- survey_country_list %>%
anti_join(country_shape_crop, by = c("Country" = "geounit")) %>%
gt()
```
```
survey_country_list_gt
```
TABLE 14\.3: Countries in the survey data but not the map data
| Country |
| --- |
| United States |
```
map_country_list_gt <- country_shape_crop %>%
as_tibble() %>%
select(geounit, sovereignt) %>%
anti_join(survey_country_list, by = c("geounit" = "Country")) %>%
arrange(geounit) %>%
gt()
```
```
map_country_list_gt
```
TABLE 14\.4: Countries in the map data but not the survey data
| geounit | sovereignt |
| --- | --- |
| Anguilla | United Kingdom |
| Antigua and Barbuda | Antigua and Barbuda |
| Aruba | Netherlands |
| Barbados | Barbados |
| Belize | Belize |
| Bermuda | United Kingdom |
| British Virgin Islands | United Kingdom |
| Cayman Islands | United Kingdom |
| Cuba | Cuba |
| Curaçao | Netherlands |
| Dominica | Dominica |
| Falkland Islands | United Kingdom |
| Greenland | Denmark |
| Grenada | Grenada |
| Montserrat | United Kingdom |
| Puerto Rico | United States of America |
| Saint Barthelemy | France |
| Saint Kitts and Nevis | Saint Kitts and Nevis |
| Saint Lucia | Saint Lucia |
| Saint Martin | France |
| Saint Pierre and Miquelon | France |
| Saint Vincent and the Grenadines | Saint Vincent and the Grenadines |
| Sint Maarten | Netherlands |
| Suriname | Suriname |
| The Bahamas | The Bahamas |
| Trinidad and Tobago | Trinidad and Tobago |
| Turks and Caicos Islands | United Kingdom |
| United States Virgin Islands | United States of America |
| United States of America | United States of America |
| Venezuela | Venezuela |
There are several ways to fix the mismatched names for a successful join. The simplest solution is to rename the data in the shape object before merging. Since only one country name in the survey data differs from the map data, we rename the map data accordingly.
```
country_shape_upd <- country_shape_crop %>%
mutate(geounit = if_else(geounit == "United States of America",
"United States", geounit
))
```
Now that the country names match, we can merge the survey and map data and then plot the resulting dataset. We begin with the map file and merge it with the survey estimates generated in Section [14\.5](c14-ambarom-vignette.html#ambarom-estimates) (`covid_worry_country_ests` and `covid_educ_ests`). We use the {dplyr} function of `full_join()`, which joins the rows in the map data and the survey estimates based on the columns `geounit` and `Country`. A full join keeps all the rows from both datasets, matching rows when possible. For any rows without matches, the function fills in an `NA` for the missing value ([Pebesma and Bivand 2023](#ref-sf2023man)).
```
covid_sf <- country_shape_upd %>%
full_join(covid_worry_country_ests,
by = c("geounit" = "Country")
) %>%
full_join(covid_educ_ests,
by = c("geounit" = "Country")
)
```
After the merge, we create two figures that display the population estimates for the percentage of people worried about COVID\-19 (Figure [14\.2](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid)) and the percentage of households with at least one child participating in virtual or hybrid learning (Figure [14\.3](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed)). We also add a crosshatch pattern to the countries without any data using the `geom_sf_pattern()` function from the {ggpattern} package ([FC, Davis, and ggplot2 authors 2022](#ref-R-ggpattern)).
```
ggplot() +
geom_sf(
data = covid_sf,
aes(fill = p, geometry = geometry),
color = "darkgray"
) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087e8b", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(covid_sf, is.na(p)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE 14\.2: Percentage of households by country worried someone in their household will get COVID\-19 in the next 3 months
```
ggplot() +
geom_sf(
data = covid_sf,
aes(fill = p_mediumchange, geometry = geometry),
color = "darkgray"
) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087e8b", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(covid_sf, is.na(p_mediumchange)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE 14\.3: Percentage of households by country who had at least one child participate in virtual or hybrid learning
In Figure [14\.3](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed), we observe missing data (represented by the crosshatch pattern) for Canada, Mexico, and the United States. The questionnaires indicate that these three countries did not include the education question in the survey. To focus on countries with available data, we can remove North America from the map and show only Central and South America. We do this below by restricting the shape files to Latin America and the Caribbean, as depicted in Figure [14\.4](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed-c-s).
```
covid_c_s <- covid_sf %>%
filter(region_wb == "Latin America & Caribbean")
ggplot() +
geom_sf(
data = covid_c_s,
aes(fill = p_mediumchange, geometry = geometry),
color = "darkgray"
) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087e8b", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(covid_c_s, is.na(p_mediumchange)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE 14\.4: Percentage of households who had at least one child participate in virtual or hybrid learning, in Central and South America
In Figure [14\.4](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed-c-s), we can see that most countries with available data have similar percentages (reflected in their similar shades). However, Haiti stands out with a lighter shade, indicating a considerably lower percentage of households with at least one child participating in virtual or hybrid learning.
14\.7 Exercises
---------------
1. Calculate the percentage of households with broadband internet and those with any internet at home, including from a phone or tablet in Latin America and the Caribbean. Hint: if there are countries with 0% internet usage, try filtering by something first.
2. Create a faceted map showing both broadband internet and any internet usage.
### Prerequisites
14\.1 Introduction
------------------
The AmericasBarometer surveys, conducted by the LAPOP Lab ([LAPOP 2023b](#ref-lapop)), are public opinion surveys of the Americas focused on democracy. The study was launched in 2004/2005 with 11 countries. Though the participating countries change over time, AmericasBarometer maintains a consistent methodology across many of them. In 2021, the study included 22 countries ranging from Canada in the north to Chile and Argentina in the south ([LAPOP 2023a](#ref-lapop-about)).
Historically, surveys were administered through in\-person household interviews, but the COVID\-19 pandemic changed the study significantly. Now, random\-digit dialing (RDD) of mobile phones is used in all countries except the United States and Canada ([LAPOP 2021c](#ref-lapop-tech)). In Canada, LAPOP collaborated with the Environics Institute to collect data from a panel of Canadians using a web survey ([LAPOP 2021a](#ref-lapop-can)). In the United States, YouGov conducted a web survey on behalf of LAPOP among its panelists ([LAPOP 2021b](#ref-lapop-usa)).
The survey includes a core set of questions for all countries, but not every question is asked in each country. Additionally, some questions are only posed to half of the respondents in a country, with different randomized sections ([LAPOP 2021d](#ref-lapop-svy)).
14\.2 Data structure
--------------------
Each country and year has its own file available in Stata format (`.dta`). In this vignette, we download and combine all the data from the 22 participating countries in 2021\. We subset the data to a smaller set of columns, as noted in the Prerequisites box. We recommend reviewing the core questionnaire to understand the common variables across the countries ([LAPOP 2021d](#ref-lapop-svy)).
14\.3 Preparing files
---------------------
Many of the variables are coded as numeric and do not have intuitive variable names, so the next step is to create derived variables and wrangle the data for analysis. Using the core questionnaire as a codebook, we reference the factor descriptions to create derived variables with informative names:
```
ambarom <- ambarom_in %>%
mutate(
Country = factor(
case_match(
pais,
1 ~ "Mexico",
2 ~ "Guatemala",
3 ~ "El Salvador",
4 ~ "Honduras",
5 ~ "Nicaragua",
6 ~ "Costa Rica",
7 ~ "Panama",
8 ~ "Colombia",
9 ~ "Ecuador",
10 ~ "Bolivia",
11 ~ "Peru",
12 ~ "Paraguay",
13 ~ "Chile",
14 ~ "Uruguay",
15 ~ "Brazil",
17 ~ "Argentina",
21 ~ "Dominican Republic",
22 ~ "Haiti",
23 ~ "Jamaica",
24 ~ "Guyana",
40 ~ "United States",
41 ~ "Canada"
)
),
CovidWorry = fct_reorder(
case_match(
covid2at,
1 ~ "Very worried",
2 ~ "Somewhat worried",
3 ~ "A little worried",
4 ~ "Not worried at all"
),
covid2at,
.na_rm = FALSE
)
) %>%
rename(
Educ_NotInSchool = covidedu1_1,
Educ_NormalSchool = covidedu1_2,
Educ_VirtualSchool = covidedu1_3,
Educ_Hybrid = covidedu1_4,
Educ_NoSchool = covidedu1_5,
BroadbandInternet = r18n,
Internet = r18
)
```
At this point, it is a good time to check the cross\-tabs between the original and newly derived variables. These tables help us confirm that we have correctly matched the numeric data from the original dataset to the renamed factor data in the new dataset. For instance, let’s check the original variable `pais` and the derived variable `Country`. We can consult the questionnaire or codebook to confirm that Argentina is coded as `17`, Bolivia as `10`, etc. Similarly, for `CovidWorry` and `covid2at`, we can verify that `Very worried` is coded as `1`, and so on for the other variables.
```
ambarom %>%
count(Country, pais) %>%
print(n = 22)
```
```
## # A tibble: 22 × 3
## Country pais n
## <fct> <dbl> <int>
## 1 Argentina 17 3011
## 2 Bolivia 10 3002
## 3 Brazil 15 3016
## 4 Canada 41 2201
## 5 Chile 13 2954
## 6 Colombia 8 2993
## 7 Costa Rica 6 2977
## 8 Dominican Republic 21 3000
## 9 Ecuador 9 3005
## 10 El Salvador 3 3245
## 11 Guatemala 2 3000
## 12 Guyana 24 3011
## 13 Haiti 22 3088
## 14 Honduras 4 2999
## 15 Jamaica 23 3121
## 16 Mexico 1 2998
## 17 Nicaragua 5 2997
## 18 Panama 7 3183
## 19 Paraguay 12 3004
## 20 Peru 11 3038
## 21 United States 40 1500
## 22 Uruguay 14 3009
```
```
ambarom %>%
count(CovidWorry, covid2at)
```
```
## # A tibble: 5 × 3
## CovidWorry covid2at n
## <fct> <dbl> <int>
## 1 Very worried 1 24327
## 2 Somewhat worried 2 13233
## 3 A little worried 3 11478
## 4 Not worried at all 4 8628
## 5 <NA> NA 6686
```
14\.4 Survey design objects
---------------------------
The technical report is the best reference for understanding how to specify the sampling design in R ([LAPOP 2021c](#ref-lapop-tech)). The data include two weights: `wt` and `weight1500`. The first weight variable is specific to each country and sums to the sample size, but it is calibrated to reflect each country’s demographics. The second weight variable sums to 1500 for each country and is recommended for multi\-country analyses. Although not explicitly stated in the documentation, the Stata syntax example (`svyset upm [pw=weight1500], strata(strata)`) indicates the variable `upm` is a clustering variable, and `strata` is the strata variable. Therefore, the design object for multi\-country analysis is created in R as follows:
```
ambarom_des <- ambarom %>%
as_survey_design(
ids = upm,
strata = strata,
weight = weight1500
)
```
These weight variables support estimates within each country and comparisons across countries, but not pooled multi\-country estimates, because the weights do not account for the countries’ different population sizes. For example, Canada has about 10% of the population of the United States, yet an estimate that pools records from both countries would weight them equally.
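As a quick check of this point, we can sum the weights by country; the sketch below assumes the `ambarom` data frame created earlier. Each country’s `weight1500` values sum to roughly 1,500, so Canada and the United States would carry equal influence in any pooled estimate.
```
# Sum of weight1500 within each country (roughly 1,500 per country)
ambarom %>%
  group_by(Country) %>%
  summarize(weight_sum = sum(weight1500, na.rm = TRUE))
```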
14\.5 Calculating estimates
---------------------------
When calculating estimates from the data, we use the survey design object `ambarom_des` and then apply the `survey_mean()` function. The next sections walk through a few examples.
### 14\.5\.1 Example: Worry about COVID\-19
This survey was administered between March and August 2021, with the specific timing varying by country[30](#fn30). Given the state of the pandemic at that time, several questions about COVID\-19 were included. According to the core questionnaire ([LAPOP 2021d](#ref-lapop-svy)), the first question asked about COVID\-19 was:
> How worried are you about the possibility that you or someone in your household will get sick from coronavirus in the next 3 months?
>
>
> \- Very worried
>
> \- Somewhat worried
>
> \- A little worried
>
> \- Not worried at all
If we are interested in those who are very worried or somewhat worried, we can create a new variable (`CovidWorry_bin`) that groups levels of the original question using the `fct_collapse()` function from the {forcats} package ([Wickham 2023](#ref-R-forcats)). We then use the `survey_count()` function to understand how responses are distributed across each category of the original variable (`CovidWorry`) and the new variable (`CovidWorry_bin`).
```
covid_worry_collapse <- ambarom_des %>%
mutate(CovidWorry_bin = fct_collapse(
CovidWorry,
WorriedHi = c("Very worried", "Somewhat worried"),
WorriedLo = c("A little worried", "Not worried at all")
))
covid_worry_collapse %>%
survey_count(CovidWorry_bin, CovidWorry)
```
```
## # A tibble: 5 × 4
## CovidWorry_bin CovidWorry n n_se
## <fct> <fct> <dbl> <dbl>
## 1 WorriedHi Very worried 12369. 83.6
## 2 WorriedHi Somewhat worried 6378. 63.4
## 3 WorriedLo A little worried 5896. 62.6
## 4 WorriedLo Not worried at all 4840. 59.7
## 5 <NA> <NA> 3518. 42.2
```
With this new variable, we can now use `survey_mean()` to calculate the percentage of people in each country who are either very or somewhat worried about COVID\-19\. There are missing data, as indicated in the `survey_count()` output above, so we need to use `na.rm = TRUE` in the `survey_mean()` function to handle the missing values.
```
covid_worry_country_ests <- covid_worry_collapse %>%
group_by(Country) %>%
summarize(p = survey_mean(CovidWorry_bin == "WorriedHi",
na.rm = TRUE
) * 100)
covid_worry_country_ests
```
```
## # A tibble: 22 × 3
## Country p p_se
## <fct> <dbl> <dbl>
## 1 Argentina 65.8 1.08
## 2 Bolivia 71.6 0.960
## 3 Brazil 83.5 0.962
## 4 Canada 48.9 1.34
## 5 Chile 81.8 0.828
## 6 Colombia 67.9 1.12
## 7 Costa Rica 72.6 0.952
## 8 Dominican Republic 50.1 1.13
## 9 Ecuador 71.7 0.967
## 10 El Salvador 52.5 1.02
## # ℹ 12 more rows
```
To view the results for all countries, we can use the {gt} package to create Table [14\.1](c14-ambarom-vignette.html#tab:ambarom-worry-tab) ([Iannone et al. 2024](#ref-R-gt)).
```
covid_worry_country_ests_gt <- covid_worry_country_ests %>%
gt(rowname_col = "Country") %>%
cols_label(
p = "%",
p_se = "S.E."
) %>%
fmt_number(decimals = 1) %>%
tab_source_note(md("*Source*: AmericasBarometer Surveys, 2021"))
```
```
covid_worry_country_ests_gt
```
TABLE 14\.1: Percentage worried about the possibility that they or someone in their household will get sick from coronavirus in the next 3 months
| | % | S.E. |
| --- | --- | --- |
| Argentina | 65\.8 | 1\.1 |
| Bolivia | 71\.6 | 1\.0 |
| Brazil | 83\.5 | 1\.0 |
| Canada | 48\.9 | 1\.3 |
| Chile | 81\.8 | 0\.8 |
| Colombia | 67\.9 | 1\.1 |
| Costa Rica | 72\.6 | 1\.0 |
| Dominican Republic | 50\.1 | 1\.1 |
| Ecuador | 71\.7 | 1\.0 |
| El Salvador | 52\.5 | 1\.0 |
| Guatemala | 69\.3 | 1\.0 |
| Guyana | 60\.0 | 1\.6 |
| Haiti | 54\.4 | 1\.8 |
| Honduras | 64\.6 | 1\.1 |
| Jamaica | 28\.4 | 0\.9 |
| Mexico | 63\.6 | 1\.0 |
| Nicaragua | 80\.0 | 1\.0 |
| Panama | 70\.2 | 1\.0 |
| Paraguay | 61\.5 | 1\.1 |
| Peru | 77\.1 | 2\.5 |
| United States | 46\.6 | 1\.7 |
| Uruguay | 60\.9 | 1\.1 |
| *Source*: AmericasBarometer Surveys, 2021 | | |
### 14\.5\.2 Example: Education affected by COVID\-19
In the core questionnaire ([LAPOP 2021d](#ref-lapop-svy)), respondents were also asked how the pandemic affected education. This question was asked only of households with children under the age of 13, and respondents could select more than one option, as follows:
> Did any of these children have their school education affected due to the pandemic?
>
>
> \- No, because they are not yet school age or because they do not attend school for another reason
>
> \- No, their classes continued normally
>
> \- Yes, they went to virtual or remote classes
>
> \- Yes, they switched to a combination of virtual and in\-person classes
>
> \- Yes, they cut all ties with the school
Working with multiple\-choice questions can be both challenging and interesting. Let’s walk through how to analyze this question. Since we are interested in the impact on education, we should focus on respondents whose children attend school, which means excluding those who selected the first response option: “No, because they are not yet school age or because they do not attend school for another reason.” To do this, we use the `Educ_NotInSchool` variable in the dataset, which takes values of `0` and `1`. A value of `1` indicates that the respondent chose the first response option (none of their children are in school), and a value of `0` means that at least one of their children is in school. Filtering the data to respondents with a value of `0` therefore restricts the analysis to households with at least one child attending school.
Now, let’s review the data for those who selected one of the next three response options:
* No, their classes continued normally: `Educ_NormalSchool`
* Yes, they went to virtual or remote classes: `Educ_VirtualSchool`
* Yes, they switched to a combination of virtual and in\-person classes: `Educ_Hybrid`
The unweighted cross\-tab for these responses is included below. It reveals a wide range of impacts: many different combinations of effects on education appear in the data.
```
ambarom %>%
filter(Educ_NotInSchool == 0) %>%
count(
Educ_NormalSchool,
Educ_VirtualSchool,
Educ_Hybrid
)
```
```
## # A tibble: 8 × 4
## Educ_NormalSchool Educ_VirtualSchool Educ_Hybrid n
## <dbl> <dbl> <dbl> <int>
## 1 0 0 0 861
## 2 0 0 1 1192
## 3 0 1 0 7554
## 4 0 1 1 280
## 5 1 0 0 833
## 6 1 0 1 18
## 7 1 1 0 72
## 8 1 1 1 7
```
In reviewing the survey question, we might be interested in knowing the answers to the following:
* What percentage of households indicated that school continued as normal with no virtual or hybrid option?
* What percentage of households indicated that the education medium was changed to either virtual or hybrid?
* What percentage of households indicated that they cut ties with their school?
To find the answers, we create indicators for the first two questions, make national estimates for all three questions, and then construct a summary table for easy viewing. First, we create and inspect the indicators and their distributions using `survey_count()`.
```
ambarom_des_educ <- ambarom_des %>%
filter(Educ_NotInSchool == 0) %>%
mutate(
Educ_OnlyNormal = (Educ_NormalSchool == 1 &
Educ_VirtualSchool == 0 &
Educ_Hybrid == 0),
Educ_MediumChange = (Educ_VirtualSchool == 1 |
Educ_Hybrid == 1)
)
ambarom_des_educ %>%
survey_count(
Educ_OnlyNormal,
Educ_NormalSchool,
Educ_VirtualSchool,
Educ_Hybrid
)
```
```
## # A tibble: 8 × 6
## Educ_OnlyNormal Educ_NormalSchool Educ_VirtualSchool Educ_Hybrid
## <lgl> <dbl> <dbl> <dbl>
## 1 FALSE 0 0 0
## 2 FALSE 0 0 1
## 3 FALSE 0 1 0
## 4 FALSE 0 1 1
## 5 FALSE 1 0 1
## 6 FALSE 1 1 0
## 7 FALSE 1 1 1
## 8 TRUE 1 0 0
## # ℹ 2 more variables: n <dbl>, n_se <dbl>
```
```
ambarom_des_educ %>%
survey_count(
Educ_MediumChange,
Educ_VirtualSchool,
Educ_Hybrid
)
```
```
## # A tibble: 4 × 5
## Educ_MediumChange Educ_VirtualSchool Educ_Hybrid n n_se
## <lgl> <dbl> <dbl> <dbl> <dbl>
## 1 FALSE 0 0 880. 26.1
## 2 TRUE 0 1 561. 19.2
## 3 TRUE 1 0 3812. 49.4
## 4 TRUE 1 1 136. 9.86
```
Next, we group the data by country and calculate the population estimates for our three questions.
```
covid_educ_ests <-
ambarom_des_educ %>%
group_by(Country) %>%
summarize(
p_onlynormal = survey_mean(Educ_OnlyNormal, na.rm = TRUE) * 100,
p_mediumchange = survey_mean(Educ_MediumChange, na.rm = TRUE) * 100,
p_noschool = survey_mean(Educ_NoSchool, na.rm = TRUE) * 100,
)
covid_educ_ests
```
```
## # A tibble: 16 × 7
## Country p_onlynormal p_onlynormal_se p_mediumchange p_mediumchange_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Argent… 5.39 1.14 87.1 1.72
## 2 Brazil 4.28 1.17 81.5 2.33
## 3 Chile 0.715 0.267 96.2 0.962
## 4 Colomb… 2.84 0.727 90.3 1.40
## 5 Domini… 3.75 0.793 87.4 1.45
## 6 Ecuador 5.18 0.963 87.5 1.39
## 7 El Sal… 2.92 0.680 85.8 1.53
## 8 Guatem… 3.00 0.727 82.2 1.73
## 9 Guyana 3.34 0.702 85.3 1.67
## 10 Haiti 81.1 2.25 7.25 1.48
## 11 Hondur… 3.68 0.882 80.7 1.72
## 12 Jamaica 5.42 0.950 88.1 1.43
## 13 Panama 7.20 1.18 89.4 1.42
## 14 Paragu… 4.66 0.939 90.7 1.37
## 15 Peru 2.04 0.604 91.8 1.20
## 16 Uruguay 8.60 1.40 84.3 2.02
## # ℹ 2 more variables: p_noschool <dbl>, p_noschool_se <dbl>
```
Finally, to view the results for all countries, we can use the {gt} package to construct Table [14\.2](c14-ambarom-vignette.html#tab:ambarom-covid-ed-der-tab).
```
covid_educ_ests_gt <- covid_educ_ests %>%
gt(rowname_col = "Country") %>%
cols_label(
p_onlynormal = "%",
p_onlynormal_se = "S.E.",
p_mediumchange = "%",
p_mediumchange_se = "S.E.",
p_noschool = "%",
p_noschool_se = "S.E."
) %>%
tab_spanner(
label = "Normal School Only",
columns = c("p_onlynormal", "p_onlynormal_se")
) %>%
tab_spanner(
label = "Medium Change",
columns = c("p_mediumchange", "p_mediumchange_se")
) %>%
tab_spanner(
label = "Cut Ties with School",
columns = c("p_noschool", "p_noschool_se")
) %>%
fmt_number(decimals = 1) %>%
tab_source_note(md("*Source*: AmericasBarometer Surveys, 2021"))
```
```
covid_educ_ests_gt
```
TABLE 14\.2: Impact on education in households with children under the age of 13 who generally attend school
| | Normal School Only | | Medium Change | | Cut Ties with School | |
| --- | --- | --- | --- | --- | --- | --- |
| | % | S.E. | % | S.E. | % | S.E. |
| Argentina | 5\.4 | 1\.1 | 87\.1 | 1\.7 | 9\.9 | 1\.6 |
| Brazil | 4\.3 | 1\.2 | 81\.5 | 2\.3 | 22\.1 | 2\.5 |
| Chile | 0\.7 | 0\.3 | 96\.2 | 1\.0 | 4\.0 | 1\.0 |
| Colombia | 2\.8 | 0\.7 | 90\.3 | 1\.4 | 7\.5 | 1\.3 |
| Dominican Republic | 3\.8 | 0\.8 | 87\.4 | 1\.5 | 10\.5 | 1\.4 |
| Ecuador | 5\.2 | 1\.0 | 87\.5 | 1\.4 | 7\.9 | 1\.1 |
| El Salvador | 2\.9 | 0\.7 | 85\.8 | 1\.5 | 11\.8 | 1\.4 |
| Guatemala | 3\.0 | 0\.7 | 82\.2 | 1\.7 | 17\.7 | 1\.8 |
| Guyana | 3\.3 | 0\.7 | 85\.3 | 1\.7 | 13\.0 | 1\.6 |
| Haiti | 81\.1 | 2\.3 | 7\.2 | 1\.5 | 11\.7 | 1\.8 |
| Honduras | 3\.7 | 0\.9 | 80\.7 | 1\.7 | 16\.9 | 1\.6 |
| Jamaica | 5\.4 | 0\.9 | 88\.1 | 1\.4 | 7\.5 | 1\.2 |
| Panama | 7\.2 | 1\.2 | 89\.4 | 1\.4 | 3\.8 | 0\.9 |
| Paraguay | 4\.7 | 0\.9 | 90\.7 | 1\.4 | 6\.4 | 1\.2 |
| Peru | 2\.0 | 0\.6 | 91\.8 | 1\.2 | 6\.8 | 1\.1 |
| Uruguay | 8\.6 | 1\.4 | 84\.3 | 2\.0 | 8\.0 | 1\.6 |
| *Source*: AmericasBarometer Surveys, 2021 | | | | | | |
In the countries where this question was asked, many households experienced a change in their children’s education medium. Haiti is the notable exception: only 7\.2% of households with children switched to virtual or hybrid learning.
14\.6 Mapping survey data
-------------------------
While the table effectively presents the data, a map could also be insightful. To create a map of the countries, we can use the {rnaturalearth} package and subset North and South America with the `ne_countries()` function ([Massicotte and South 2023](#ref-R-rnaturalearth)). The function returns a simple features (sf) object with many columns ([Pebesma and Bivand 2023](#ref-sf2023man)); the most important for our purposes are `sovereignt` (sovereignty), `geounit` (country or territory), and `geometry` (the shape). As an example of the difference between sovereignty and country/territory, the United States, Puerto Rico, and the U.S. Virgin Islands are separate geographic units that share the same sovereignty. A map without data is plotted in Figure [14\.1](c14-ambarom-vignette.html#fig:ambarom-americas-map) using `geom_sf()` from the {ggplot2} package, which plots sf objects ([Wickham 2016](#ref-ggplot2wickham)).
```
country_shape <-
ne_countries(
scale = "medium",
returnclass = "sf",
continent = c("North America", "South America")
)
country_shape %>%
ggplot() +
geom_sf()
```
FIGURE 14\.1: Map of North and South America
The map in Figure [14\.1](c14-ambarom-vignette.html#fig:ambarom-americas-map) appears very wide due to the Aleutian Islands in Alaska extending into the Eastern Hemisphere. We can crop the shapefile to include only the Western Hemisphere using `st_crop()` from the {sf} package, which removes some of the trailing islands of Alaska.
```
country_shape_crop <- country_shape %>%
st_crop(c(
xmin = -180,
xmax = 0,
ymin = -90,
ymax = 90
))
```
Now that we have the necessary shape files, our next step is to match our survey data to the map. Countries can be named differently (e.g., “U.S.”, “U.S.A.”, “United States”). To make sure we can visualize our survey data on the map, we need to match the country names in both the survey data and the map data. To do this, we can use the `anti_join()` function from the {dplyr} package to identify the countries in the survey data that are not in the map data. Table [14\.3](c14-ambarom-vignette.html#tab:ambarom-map-merge-check-1-tab) shows the countries in the survey data but not the map data, and Table [14\.4](c14-ambarom-vignette.html#tab:ambarom-map-merge-check-2-tab) shows the countries in the map data but not the survey data. As shown below, the United States is referred to as “United States” in the survey data but “United States of America” in the map data.
```
survey_country_list <- ambarom %>% distinct(Country)
survey_country_list_gt <- survey_country_list %>%
anti_join(country_shape_crop, by = c("Country" = "geounit")) %>%
gt()
```
```
survey_country_list_gt
```
TABLE 14\.3: Countries in the survey data but not the map data
| Country |
| --- |
| United States |
```
map_country_list_gt <- country_shape_crop %>%
as_tibble() %>%
select(geounit, sovereignt) %>%
anti_join(survey_country_list, by = c("geounit" = "Country")) %>%
arrange(geounit) %>%
gt()
```
```
map_country_list_gt
```
TABLE 14\.4: Countries in the map data but not the survey data
| geounit | sovereignt |
| --- | --- |
| Anguilla | United Kingdom |
| Antigua and Barbuda | Antigua and Barbuda |
| Aruba | Netherlands |
| Barbados | Barbados |
| Belize | Belize |
| Bermuda | United Kingdom |
| British Virgin Islands | United Kingdom |
| Cayman Islands | United Kingdom |
| Cuba | Cuba |
| Curaçao | Netherlands |
| Dominica | Dominica |
| Falkland Islands | United Kingdom |
| Greenland | Denmark |
| Grenada | Grenada |
| Montserrat | United Kingdom |
| Puerto Rico | United States of America |
| Saint Barthelemy | France |
| Saint Kitts and Nevis | Saint Kitts and Nevis |
| Saint Lucia | Saint Lucia |
| Saint Martin | France |
| Saint Pierre and Miquelon | France |
| Saint Vincent and the Grenadines | Saint Vincent and the Grenadines |
| Sint Maarten | Netherlands |
| Suriname | Suriname |
| The Bahamas | The Bahamas |
| Trinidad and Tobago | Trinidad and Tobago |
| Turks and Caicos Islands | United Kingdom |
| United States Virgin Islands | United States of America |
| United States of America | United States of America |
| Venezuela | Venezuela |
There are several ways to fix the mismatched names for a successful join. The simplest solution is to rename the data in the shape object before merging. Since only one country name in the survey data differs from the map data, we rename the map data accordingly.
```
country_shape_upd <- country_shape_crop %>%
mutate(geounit = if_else(geounit == "United States of America",
"United States", geounit
))
```
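An alternative, sketched below on the assumption that the tidyverse is loaded, is to keep the name differences in a small crosswalk table and recode from it. With a single mismatch this is overkill, but it scales more gracefully if several names differ; the result, `country_shape_upd2`, is equivalent to `country_shape_upd` above.
```
# Hypothetical crosswalk from map names to survey names; extend as needed
name_crosswalk <- tribble(
  ~geounit, ~survey_name,
  "United States of America", "United States"
)

country_shape_upd2 <- country_shape_crop %>%
  left_join(name_crosswalk, by = "geounit") %>%
  mutate(geounit = coalesce(survey_name, geounit)) %>%
  select(-survey_name)
```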
Now that the country names match, we can merge the survey and map data and then plot the resulting dataset. We begin with the map file and merge it with the survey estimates generated in Section [14\.5](c14-ambarom-vignette.html#ambarom-estimates) (`covid_worry_country_ests` and `covid_educ_ests`). We use the {dplyr} function `full_join()`, which joins the rows in the map data and the survey estimates based on the columns `geounit` and `Country`. A full join keeps all the rows from both datasets, matching rows when possible. For any rows without matches, the function fills in `NA` for the missing values ([Pebesma and Bivand 2023](#ref-sf2023man)).
```
covid_sf <- country_shape_upd %>%
full_join(covid_worry_country_ests,
by = c("geounit" = "Country")
) %>%
full_join(covid_educ_ests,
by = c("geounit" = "Country")
)
```
After the merge, we create two figures that display the population estimates for the percentage of people worried about COVID\-19 (Figure [14\.2](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid)) and the percentage of households with at least one child participating in virtual or hybrid learning (Figure [14\.3](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed)). We also add a crosshatch pattern to the countries without any data using the `geom_sf_pattern()` function from the {ggpattern} package ([FC, Davis, and ggplot2 authors 2022](#ref-R-ggpattern)).
```
ggplot() +
geom_sf(
data = covid_sf,
aes(fill = p, geometry = geometry),
color = "darkgray"
) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087e8b", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(covid_sf, is.na(p)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE 14\.2: Percentage of households by country worried someone in their household will get COVID\-19 in the next 3 months
```
ggplot() +
geom_sf(
data = covid_sf,
aes(fill = p_mediumchange, geometry = geometry),
color = "darkgray"
) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087e8b", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(covid_sf, is.na(p_mediumchange)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE 14\.3: Percentage of households by country who had at least one child participate in virtual or hybrid learning
In Figure [14\.3](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed), we observe missing data (represented by the crosshatch pattern) for Canada, Mexico, and the United States. The questionnaires indicate that these three countries did not include the education question in the survey. To focus on countries with available data, we can remove North America from the map and show only Central and South America. We do this below by restricting the shape files to Latin America and the Caribbean, as depicted in Figure [14\.4](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed-c-s).
```
covid_c_s <- covid_sf %>%
filter(region_wb == "Latin America & Caribbean")
ggplot() +
geom_sf(
data = covid_c_s,
aes(fill = p_mediumchange, geometry = geometry),
color = "darkgray"
) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087e8b", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(covid_c_s, is.na(p_mediumchange)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE 14\.4: Percentage of households who had at least one child participate in virtual or hybrid learning, in Central and South America
In Figure [14\.4](c14-ambarom-vignette.html#fig:ambarom-make-maps-covid-ed-c-s), we can see that most countries with available data have similar percentages (reflected in their similar shades). However, Haiti stands out with a lighter shade, indicating a considerably lower percentage of households with at least one child participating in virtual or hybrid learning.
14\.7 Exercises
---------------
1. Calculate the percentage of households with broadband internet and the percentage with any internet at home (including from a phone or tablet) in Latin America and the Caribbean. Hint: if some countries show 0% internet usage, try filtering by something first.
2. Create a faceted map showing both broadband internet and any internet usage.
| Social Science |
tidy-survey-r.github.io | https://tidy-survey-r.github.io/tidy-survey-book/importing-survey-data-into-r.html |
A Importing survey data into R
==============================
To analyze a survey, we need to bring the survey data into R. This process is often referred to as importing, loading, or reading in data. Survey files come in different formats depending on the software used to create them. One of the many advantages of R is its flexibility in handling various data formats, regardless of their file extensions. Here are examples of common public\-use survey file formats we may encounter:
* Delimiter\-separated text files
* Excel spreadsheets in `.xls` or `.xlsx` format
* R native `.rda` files
* Stata datasets in `.dta` format
* SAS datasets in `.sas7bdat` format
* SPSS datasets in `.sav` format
* Application Programming Interfaces (APIs), often in JavaScript Object Notation (JSON) format
* Data stored in databases
This appendix guides analysts through the process of importing these various types of survey data into R.
A.1 Importing delimiter\-separated files into R
-----------------------------------------------
Delimiter\-separated files use specific characters, known as delimiters, to separate values within the file. For example, CSV (comma\-separated values) files use commas as delimiters, while TSV (tab\-separated values) files use tabs. These file formats are widely used because of their simplicity and compatibility with various software applications.
The {readr} package, part of the tidyverse ecosystem, offers efficient ways to import delimiter\-separated files into R ([Wickham, Hester, and Bryan 2024](#ref-R-readr)). Its advantages include automatic data type detection and flexible handling of missing values. The package includes functions for:
* `read_csv()`: This function is specifically designed to read CSV files.
* `read_tsv()`: Use this function for TSV files.
* `read_delim()`: This function can handle a broader range of delimiter\-separated files, including CSV and TSV. Specify the delimiter using the `delim` argument.
* `read_fwf()`: This function is useful for importing fixed\-width files (FWF), where columns have predetermined widths, and values are aligned in specific positions.
* `read_table()`: Use this function when dealing with whitespace\-separated files, such as those with spaces or multiple spaces as delimiters.
* `read_log()`: This function can read and parse web log files.
The syntax for `read_csv()` is:
```
read_csv(
file,
col_names = TRUE,
col_types = NULL,
col_select = NULL,
id = NULL,
locale = default_locale(),
na = c("", "NA"),
comment = "",
trim_ws = TRUE,
skip = 0,
n_max = Inf,
guess_max = min(1000, n_max),
name_repair = "unique",
num_threads = readr_threads(),
progress = show_progress(),
show_col_types = should_show_types(),
skip_empty_rows = TRUE,
lazy = should_read_lazy()
)
```
The arguments are:
* `file`: the path to the CSV file to import
* `col_names`: a value of `TRUE` treats the first row of the `file` as column names, so that row is not included in the data frame. A value of `FALSE` generates automatic column names. Alternatively, we can provide a vector of column names.
* `col_types`: by default, R infers the column variable types. We can also provide a column specification using `list()` or `cols()`; for example, use `col_types = cols(.default = "c")` to read all the columns as characters. Alternatively, we can use a string to specify the variable types for each column.
* `col_select`: the columns to include in the results
* `id`: a column for storing the file path. This is useful for keeping track of the input file when importing multiple CSVs at a time.
* `locale`: the location\-specific defaults for the file
* `na`: a character vector of values to interpret as missing
* `comment`: a character vector of values to interpret as comments
* `trim_ws`: a value of `TRUE` trims leading and trailing white space
* `skip`: number of lines to skip before importing the data
* `n_max`: maximum number of lines to read
* `guess_max`: maximum number of lines used for guessing column types
* `name_repair`: how problematic column names are repaired. The default, `"unique"`, ensures that column names are unique.
* `num_threads`: the number of processing threads to use for initial parsing and lazy reading of data
* `progress`: a value of `TRUE` displays a progress bar
* `show_col_types`: a value of `TRUE` displays the column types
* `skip_empty_rows`: a value of `TRUE` ignores blank rows
* `lazy`: a value of `TRUE` reads values lazily
The other functions share a similar syntax to `read_csv()`. To find more details, run `??` followed by the function name. For example, run `??read_tsv` in the Console for additional information on importing TSV files.
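Because these functions share arguments such as `delim`, `col_select`, and `col_types`, the same pattern carries over. As a rough sketch, assuming a hypothetical semicolon\-delimited export file, we could read only two columns and force the identifier column to character:
```
library(readr)

# Hypothetical semicolon-delimited file: import two columns and read the
# id column as character instead of letting readr guess its type
survey_raw <- read_delim(
  file = "data/survey_export.txt",
  delim = ";",
  col_select = c(id, age),
  col_types = cols(id = col_character())
)
```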
In the example below, we use {readr} to import a CSV file named ‘anes\_timeseries\_2020\_csv\_20220210\.csv’ into an R object called `anes_csv`. The `read_csv()` function imports the file and stores the data in the `anes_csv` object. We can then use this object for further analysis.
```
library(readr)
anes_csv <-
read_csv(file = "data/anes_timeseries_2020_csv_20220210.csv")
```
A.2 Importing Excel files into R
--------------------------------
Excel, Microsoft’s widely used spreadsheet program, produces files that are a common format in survey research. We can import Excel spreadsheets into the R environment using the {readxl} package. The package supports both the legacy `.xls` format and the modern `.xlsx` format.
To import Excel data into R, we can use the `read_excel()` function from the {readxl} package. This function offers a range of options for the import process. Let’s explore the syntax:
```
read_excel(
path,
sheet = NULL,
range = NULL,
col_names = TRUE,
col_types = NULL,
na = "",
trim_ws = TRUE,
skip = 0,
n_max = Inf,
guess_max = min(1000, n_max),
progress = readxl_progress(),
.name_repair = "unique"
)
```
The arguments are:
* `path`: the path to the Excel file to import
* `sheet`: the name or index of the sheet (sometimes called tabs) within the Excel file
* `range`: the range of cells to import (for example, `P15:T87`)
* `col_names`: indicates whether the first row of the dataset contains column names
* `col_types`: specifies the data types of columns
* `na`: defines the representation of missing values (for example, `NULL`)
* `trim_ws`: controls whether leading and trailing whitespaces should be trimmed
* `skip` and `n_max`: enable skipping rows and limit the number of rows imported
* `guess_max`: sets the maximum number of rows used for data type guessing
* `progress`: specifies a progress bar for large imports
* `.name_repair`: determines how column names are repaired if they are not valid
In the code example below, we import an Excel spreadsheet named ‘anes\_timeseries\_2020\_csv\_20220210\.xlsx’ into R. The resulting data is saved as a tibble in the `anes_excel` object, ready for further analysis.
```
library(readxl)
anes_excel <-
read_excel(path = "data/anes_timeseries_2020_csv_20220210.xlsx")
```
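If a workbook contains multiple sheets, or we only need part of one, the `sheet` and `range` arguments restrict the import. Here is a minimal sketch; the sheet name and cell range are assumptions for illustration only:
```
library(readxl)

# Import only a named sheet and a specific cell range from the workbook;
# the sheet name and range here are hypothetical
anes_excel_subset <-
  read_excel(
    path = "data/anes_timeseries_2020_csv_20220210.xlsx",
    sheet = "data",
    range = "A1:F500"
  )
```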
A.3 Importing Stata, SAS, and SPSS files into R
-----------------------------------------------
The {haven} package, also from the tidyverse ecosystem, imports various proprietary data formats: Stata `.dta` files, SPSS `.sav` files, and SAS `.sas7bdat` and `.sas7bcat` files ([Wickham, Miller, and Smith 2023](#ref-R-haven)). One of the notable strengths of the {haven} package is its ability to handle multiple proprietary formats within a unified framework. It offers dedicated functions for each supported proprietary format, making it straightforward to import data regardless of the program. Here, we introduce `read_dta()` for Stata files, `read_sav()` for SPSS files, and `read_sas()` for SAS files.
### A.3\.1 Syntax
Let’s explore the syntax for importing Stata `.dta` files using `haven::read_dta()`:
```
read_dta(
file,
encoding = NULL,
col_select = NULL,
skip = 0,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `file`: the path to the proprietary data file to import
* `encoding`: specifies the character encoding of the data file
* `col_select`: selects specific columns for import
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
The syntax for `read_sav()` is similar to `read_dta()`:
```
read_sav(
file,
encoding = NULL,
user_na = FALSE,
col_select = NULL,
skip = 0,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `file`: the path to the proprietary data file to import
* `encoding`: specifies the character encoding of the data file
* `col_select`: selects specific columns for import
* `user_na`: a value of `TRUE` reads variables with user\-defined missing labels into `labelled_spss()` objects
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
The syntax for importing SAS files with `read_sas()` is as follows:
```
read_sas(
data_file,
catalog_file = NULL,
encoding = NULL,
catalog_encoding = encoding,
col_select = NULL,
skip = 0L,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `data_file`: the path to the proprietary data file to import
* `catalog_file`: the path to the catalog file to import
* `encoding`: specifies the character encoding of the data file
* `catalog_encoding`: specifies the character encoding of the catalog file
* `col_select`: selects specific columns for import
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
In the code examples below, we demonstrate how to import Stata, SPSS, and SAS files into R using the respective {haven} functions. The resulting data are stored in `anes_dta`, `anes_sav`, and `anes_sas` objects as tibbles, ready for use in R. For the Stata example, we show how to import the data from the {srvyrexploR} package to use in examples.
Stata:
```
library(haven)
anes_dta <-
read_dta(file = system.file("extdata",
"anes_2020_stata_example.dta",
package = "srvyrexploR"
))
```
SPSS:
```
library(haven)
anes_sav <-
read_sav(file = "data/anes_timeseries_2020_spss_20220210.sav")
```
SAS:
```
library(haven)
anes_sas <-
read_sas(
data_file = "data/anes_timeseries_2020_sas_20220210.sas7bdat"
)
```
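For large survey files, the `col_select` and `n_max` arguments let us preview a handful of columns and rows before committing to a full import. A sketch using the same Stata file from {srvyrexploR}:
```
library(haven)

# Preview two columns and the first 500 rows of the Stata file
anes_dta_preview <-
  read_dta(
    file = system.file("extdata",
      "anes_2020_stata_example.dta",
      package = "srvyrexploR"
    ),
    col_select = c(V200001, V200002),
    n_max = 500
  )
```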
### A.3\.2 Working with labeled data
Stata, SPSS, and SAS files can contain labeled variables and values. These labels provide descriptive information about categorical data, making them easier to understand and analyze. When importing data from Stata, SPSS, or SAS, we want to preserve these labels to maintain data fidelity.
Consider a variable like ‘Education Level’ with coded values (e.g., 1, 2, 3\). Without labels, these codes can be cryptic. However, with labels (‘High School Graduate,’ ‘Bachelor’s Degree,’ ‘Master’s Degree’), the data become more informative and easier to work with.
With the {haven} package, we have the capability to import and work with labeled data from Stata, SPSS, and SAS files. The package uses a special class of data called `haven_labelled` to store labeled variables. When a dataset label is defined in Stata, it is stored in the ‘label’ attribute of the tibble when imported, ensuring that the information is not lost.
We can use functions like `select()`, `glimpse()`, and `is.labelled()` to inspect the imported data and verify if the variables are labeled. Take a look at the ANES Stata file. Notice that categorical variables `V200002` and `V201006` are marked with a type of `<dbl+lbl>`. This notation indicates that these variables are labeled.
```
library(dplyr)
anes_dta %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl+lbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl+lbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1…
```
We can confirm their label status using the `haven::is.labelled()` function.
```
haven::is.labelled(anes_dta$V200002)
```
```
## [1] TRUE
```
To explore the labels further, we can use the `attributes()` function. This function provides insights into both the variable labels (`$label`) and the associated value labels (`$labels`).
```
attributes(anes_dta$V200002)
```
```
## $label
## [1] "Mode of interview: pre-election interview"
##
## $format.stata
## [1] "%10.0g"
##
## $class
## [1] "haven_labelled" "vctrs_vctr" "double"
##
## $labels
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
When we import a labeled dataset using {haven}, it results in a tibble containing both the data and label information. However, this is meant to be an intermediary data structure and not intended to be the final data format for analysis. Instead, we should convert it into a regular R data frame before continuing our data workflow. There are two primary methods to achieve this conversion: (1\) convert to factors or (2\) remove the labels.
#### Option 1: Convert the vector into a factor
Factors are native R data types for working with categorical data. They consist of integer values that correspond to character values, known as levels. Below is a dummy example of factors. The `factors` object shows the four different levels in the data: `strongly agree`, `agree`, `disagree`, and `strongly disagree`.
```
response <-
c("strongly agree", "agree", "agree", "disagree", "strongly disagree")
response_levels <-
c("strongly agree", "agree", "disagree", "strongly disagree")
factors <- factor(response, levels = response_levels)
factors
```
```
## [1] strongly agree agree agree
## [4] disagree strongly disagree
## Levels: strongly agree agree disagree strongly disagree
```
Factors are integer vectors, though they may look like character strings. We can confirm by looking at the vector’s structure:
```
glimpse(factors)
```
```
## Factor w/ 4 levels "strongly agree",..: 1 2 2 3 4
```
R’s factors differ from Stata, SPSS, or SAS labeled vectors. However, we can convert labeled variables into factors using the `as_factor()` function.
```
anes_dta %>%
transmute(V200002 = as_factor(V200002))
```
```
## # A tibble: 7,453 × 1
## V200002
## <fct>
## 1 3. Web
## 2 3. Web
## 3 3. Web
## 4 3. Web
## 5 3. Web
## 6 3. Web
## 7 3. Web
## 8 3. Web
## 9 3. Web
## 10 3. Web
## # ℹ 7,443 more rows
```
The `as_factor()` function can be applied to all columns in a data frame or individual ones. Below, we convert all `<dbl+lbl>` columns into factors.
```
anes_dta_factor <-
anes_dta %>%
as_factor()
anes_dta_factor %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <fct> 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. We…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <fct> 2. Somewhat interested, 3. Not much interested, 2. So…
```
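In {haven}, `as_factor()` also accepts a `levels` argument that controls how the value labels are used; for example, `levels = "labels"` keeps only the label text without the numeric codes. A quick sketch (output not shown):
```
# Convert using only the label text, then tabulate the interview modes
anes_dta %>%
  transmute(V200002 = as_factor(V200002, levels = "labels")) %>%
  count(V200002)
```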
#### Option 2: Strip the labels
The second option is to remove the labels altogether, converting the labeled data into a regular R data frame. To remove, or ‘zap,’ the labels from our tibble, we can use the {haven} package’s `zap_label()` and `zap_labels()` functions. This approach removes the labels but retains the data values in their original form.
The ANES Stata file columns contain variable labels. Using the `map()` function from {purrr}, we can review the labels using `attr`. In the example below, we list the first two variables and their labels. For instance, the label for `V200002` is “Mode of interview: pre\-election interview.”
```
purrr::map(anes_dta, ~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## [1] "2020 Case ID"
##
## $V200002
## [1] "Mode of interview: pre-election interview"
```
Use `zap_label()` to remove the variable labels but retain the value labels. Notice that the variable labels now return `NULL`, while the value labels remain.
```
zap_label(anes_dta) %>%
purrr::map(~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## NULL
##
## $V200002
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
To remove the value labels, use `zap_labels()`. Notice the previous `<dbl+lbl>` columns are now `<dbl>`.
```
zap_labels(anes_dta) %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1, 1,…
```
While it is important to convert labeled datasets into regular R data frames for working in R, the labels themselves often contain valuable information that provides context and meaning to the survey variables. To aid with interpretability and documentation, we can create a data dictionary from the labeled dataset. A data dictionary is a reference document that provides detailed information about the variables and values of a survey.
The {labelled} package offers a convenient function, `generate_dictionary()`, that creates data dictionaries directly from a labeled dataset ([Larmarange 2024](#ref-R-labelled)). This function extracts variable labels, value labels, and other metadata and organizes them into a structured document that we can browse and reference throughout our analysis.
Let’s create a data dictionary from the ANES Stata dataset as an example:
```
library(labelled)
dictionary <- generate_dictionary(anes_dta)
```
Once we’ve generated the data dictionary, we can take a look at the `V200002` variable and see the label, column type, number of missing entries, and associated values.
```
dictionary %>%
filter(variable == "V200002")
```
```
## pos variable label col_type missing values
## 2 V200002 Mode of interview: pre~ dbl+lbl 0 [1] 1. Video
## [2] 2. Telephone
## [3] 3. Web
```
### A.3\.3 Labeled missing data values
In survey data analysis, dealing with missing values is a crucial aspect of data preparation. Stata, SPSS, and SAS files each have their own method for handling missing values.
* Stata has “extended” missing values, `.A` through `.Z`.
* SAS has “special” missing values, `.A` through `.Z` and `._`.
* SPSS has per\-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.
SAS and Stata use a concept known as ‘tagged’ missing values, which extend R’s regular `NA`. A ‘tagged’ missing value is essentially an `NA` with an additional single\-character label. These values behave identically to regular `NA` in standard R operations while preserving the informative tag associated with the missing value.
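To see how tagged missing values behave, here is a small constructed sketch using {haven}’s `tagged_na()`, `na_tag()`, and `is_tagged_na()` functions (the vector itself is made up for illustration):
```
library(haven)

# A made-up numeric vector with two tagged missing values
x <- c(1, 2, tagged_na("d"), 4, tagged_na("i"))

is.na(x)             # tagged values count as missing in ordinary operations
na_tag(x)            # but the single-character tags are preserved
is_tagged_na(x, "d") # and individual tags can be tested
```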
Here is an example from the NORC at the University of Chicago’s 2018 General Social Survey, where Don’t Know (`DK`) responses are tagged as `NA(d)`, Inapplicable (`IAP`) responses are tagged as `NA(i)`, and `No Answer` responses are tagged as `NA(n)` ([Davern et al. 2021](#ref-gss-codebook)).
```
head(gss_dta$HEALTH)
#> <labelled<double>[6]>: condition of health
#> [1] 2 1 NA(i) NA(i) 1 2
#>
#> Labels:
#> value label
#> 1 excellent
#> 2 good
#> 3 fair
#> 4 poor
#> NA(d) DK
#> NA(i) IAP
#> NA(n) NA
```
In contrast, SPSS uses a different approach called ‘user\-defined values’ to denote missing values. Each column in an SPSS dataset can have up to three distinct values designated as missing or a specified range of missing values. To model these additional user\-defined missing values, {haven} provides the `labelled_spss()` subclass of `labelled()`. When importing SPSS data, {haven} ensures that user\-defined missing values are correctly handled. We can work with these data in R while preserving the unique missing value conventions from SPSS.
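A small constructed sketch, using `labelled_spss()` directly, shows how user\-defined missing values are represented (the codes and labels are made up for illustration):
```
library(haven)

# A hypothetical SPSS-style variable where 8 and 9 are user-defined missing
health <- labelled_spss(
  c(1, 2, 9, 3, 8),
  labels = c(Excellent = 1, Good = 2, Fair = 3, Poor = 4, DK = 8, Refused = 9),
  na_values = c(8, 9),
  label = "Condition of health"
)

is.na(health)        # user-defined missing values are treated as missing
zap_missing(health)  # or convert them to regular NA values
```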
Here is what the GSS SPSS dataset looks like when loaded with {haven}.
```
head(gss_sps$HEALTH)
#> <labelled_spss<double>[6]>: Condition of health
#> [1] 2 1 0 0 1 2
#> Missing values: 0, 8, 9
#>
#> Labels:
#> value label
#> 0 IAP
#> 1 EXCELLENT
#> 2 GOOD
#> 3 FAIR
#> 4 POOR
#> 8 DK
#> 9 NA
```
A.4 Importing data from APIs into R
-----------------------------------
In addition to working with data saved as files, we may also need to retrieve data through Application Programming Interfaces (APIs). APIs provide a structured way to access data hosted on external servers and import them directly into R for analysis.
To access these data, we need to understand how to construct API requests. Each API has unique endpoints, parameters, and authentication requirements. Pay attention to:
* Endpoints: These are URLs that point to specific data or services
* Parameters: Information passed to the API to customize the request (e.g., date ranges, filters)
* Authentication: APIs may require API keys or tokens for access
* Rate Limits: APIs may have usage limits, so be aware of any rate limits or quotas
Typically, we begin by making a GET request to an API endpoint. The {httr2} package allows us to generate and process HTTP requests ([Wickham 2024](#ref-R-httr2)). We can make the GET request by pointing to the URL that contains the data we would like:
```
library(httr2)
api_url <- "https://api.example.com/survey-data"
req <- request(api_url)
response <- req_perform(req)
```
Once we make the request, we obtain the data as the `response`. The data often come in JSON format. We can extract and parse the data using the {jsonlite} package, allowing us to work with them in R ([Ooms 2014](#ref-jsonliteooms)). The `fromJSON()` function, shown below, converts JSON data to an R object.
```
library(jsonlite)
survey_data <- fromJSON(resp_body_string(response))
```
Note that these are dummy examples. Please review the documentation to understand how to make requests from a specific API.
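Many APIs also expect query parameters and some form of authentication. Building on the dummy example above, here is a rough sketch of how {httr2} can attach both before performing the request; the endpoint, parameter names, and token are placeholders:
```
library(httr2)

# Hypothetical endpoint, date-range parameters, and bearer token
req <- request("https://api.example.com/survey-data")
req <- req_url_query(req, start_date = "2020-01-01", end_date = "2020-12-31")
req <- req_auth_bearer_token(req, token = "YOUR-API-TOKEN")
response <- req_perform(req)

# Parse the JSON body directly into R data structures
survey_data <- resp_body_json(response, simplifyVector = TRUE)
```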
R offers several packages that simplify API access by providing ready\-to\-use functions for popular APIs. These packages are called “wrappers,” as they “wrap” the API in R to make it easier to use. For example, the {tidycensus} package used in this book simplifies access to U.S. Census data, allowing us to retrieve data with R commands instead of writing API requests from scratch ([Walker and Herman 2024](#ref-R-tidycensus)). Behind the scenes, `get_pums()` is making a GET request from the Census API, and the {tidycensus} functions are converting the response into an R\-friendly format. For example, if we are interested in the age, sex, race, and Hispanicity of those in the American Community Survey sample of Durham County, North Carolina[31](#fn31), we can use the `get_pums()` function to extract the microdata as shown in the code below. We can then use the replicate weights to create a survey object and calculate estimates for Durham County.
```
library(tidycensus)
durh_pums <- get_pums(
variables = c("PUMA", "SEX", "AGEP", "RAC1P", "HISP"),
state = "NC",
puma = c("01301", "01302"),
survey = "acs1",
year = 2022,
rep_weights = "person"
)
```
```
## Getting data from the 2022 1-year ACS Public Use Microdata Sample
```
```
## Warning: • You have not set a Census API key. Users without a key are limited to 500
## queries per day and may experience performance limitations.
## ℹ For best results, get a Census API key at
## http://api.census.gov/data/key_signup.html and then supply the key to the
## `census_api_key()` function to use it throughout your tidycensus session.
## This warning is displayed once per session.
```
```
durh_pums
```
```
## # A tibble: 2,724 × 90
## SERIALNO SPORDER AGEP PUMA ST SEX HISP RAC1P WGTP PWGTP
## <chr> <dbl> <dbl> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl>
## 1 2022GQ0002044 1 54 01301 37 1 01 2 0 17
## 2 2022GQ0002319 1 20 01301 37 2 01 2 0 64
## 3 2022GQ0003518 1 22 01301 37 1 01 6 0 52
## 4 2022GQ0003930 1 62 01302 37 1 01 2 0 17
## 5 2022GQ0005753 1 19 01301 37 2 24 8 0 29
## 6 2022GQ0006554 1 22 01301 37 2 01 6 0 59
## 7 2022GQ0007092 1 70 01301 37 1 01 2 0 55
## 8 2022GQ0007502 1 36 01302 37 1 01 1 0 39
## 9 2022GQ0008767 1 74 01301 37 1 01 1 0 15
## 10 2022GQ0008956 1 22 01302 37 2 24 1 0 43
## # ℹ 2,714 more rows
## # ℹ 80 more variables: PWGTP1 <dbl>, PWGTP2 <dbl>, PWGTP3 <dbl>,
## # PWGTP4 <dbl>, PWGTP5 <dbl>, PWGTP6 <dbl>, PWGTP7 <dbl>,
## # PWGTP8 <dbl>, PWGTP9 <dbl>, PWGTP10 <dbl>, PWGTP11 <dbl>,
## # PWGTP12 <dbl>, PWGTP13 <dbl>, PWGTP14 <dbl>, PWGTP15 <dbl>,
## # PWGTP16 <dbl>, PWGTP17 <dbl>, PWGTP18 <dbl>, PWGTP19 <dbl>,
## # PWGTP20 <dbl>, PWGTP21 <dbl>, PWGTP22 <dbl>, PWGTP23 <dbl>, …
```
In Chapter [4](c04-getting-started.html#c04-getting-started), we used the {censusapi} package to get data from the Census data API for the Current Population Survey. To discover if there is an R package that directly interfaces with a specific survey or data source, search for “\[survey] R wrapper” or “\[data source] R package” online.
A.5 Importing data from databases in R
--------------------------------------
Databases provide a secure and organized solution as the volume and complexity of data grow. We can access, manage, and update data stored in databases in a systematic way. Because of how the data are organized, teams can draw from the same source and obtain any metadata that would be helpful for analysis.
There are various ways of using R to work with databases. If using RStudio, we can connect to different databases through the Connections Pane in the top right of the IDE. We can also use packages like {DBI} and {odbc} to access database tables in R files. Here is an example script connecting to a database:
```
con <-
DBI::dbConnect(
odbc::odbc(),
Driver = "[driver name]",
Server = "[server path]",
UID = rstudioapi::askForPassword("Database user"),
PWD = rstudioapi::askForPassword("Database password"),
Database = "[database name]",
Warehouse = "[warehouse name]",
Schema = "[schema name]"
)
```
The {dbplyr} and {dplyr} packages allow us to make queries and run data analysis entirely using {dplyr} syntax. All of the code can be written in R, so we do not have to switch between R and SQL to explore the data. Here is some sample code:
```
library(dplyr)
library(dbplyr)

q1 <- tbl(con, "bank") %>%
group_by(month_idx, year, month) %>%
summarize(subscribe = sum(ifelse(term_deposit == "yes", 1, 0)),
total = n())
show_query(q1)
```
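Queries built this way are evaluated lazily on the database side. When we are ready to work with the results in R, for example, to build a survey design object, we can bring them into memory with `collect()`:
```
# Run the query on the database and return the results as a local tibble
bank_summary <- q1 %>%
  collect()
```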
Be sure to check the documentation to configure a database connection.
A.6 Importing data from other formats
-------------------------------------
R also offers dedicated packages such as {googlesheets4} for Google Sheets or {qualtRics} for Qualtrics. With less common or proprietary file formats, the broader data science community can often provide guidance. Online resources like [Stack Overflow](https://stackoverflow.com/) and dedicated forums like [Posit Community](https://forum.posit.co/) are valuable sources of information for importing data into R.
A.1 Importing delimiter\-separated files into R
-----------------------------------------------
Delimiter\-separated files use specific characters, known as delimiters, to separate values within the file. For example, CSV (comma\-separated values) files use commas as delimiters, while TSV (tab\-separated values) files use tabs. These file formats are widely used because of their simplicity and compatibility with various software applications.
The {readr} package, part of the tidyverse ecosystem, offers efficient ways to import delimiter\-separated files into R ([Wickham, Hester, and Bryan 2024](#ref-R-readr)). It offers several advantages, including automatic data type detection and flexible handling of missing values, depending on one’s survey analysis needs. The {readr} package includes functions for:
* `read_csv()`: This function is specifically designed to read CSV files.
* `read_tsv()`: Use this function for TSV files.
* `read_delim()`: This function can handle a broader range of delimiter\-separated files, including CSV and TSV. Specify the delimiter using the `delim` argument.
* `read_fwf()`: This function is useful for importing fixed\-width files (FWF), where columns have predetermined widths, and values are aligned in specific positions.
* `read_table()`: Use this function when dealing with whitespace\-separated files, such as those with spaces or multiple spaces as delimiters.
* `read_log()`: This function can read and parse web log files.
The syntax for `read_csv()` is:
```
read_csv(
file,
col_names = TRUE,
col_types = NULL,
col_select = NULL,
id = NULL,
locale = default_locale(),
na = c("", "NA"),
comment = "",
trim_ws = TRUE,
skip = 0,
n_max = Inf,
guess_max = min(1000, n_max),
name_repair = "unique",
num_threads = readr_threads(),
progress = show_progress(),
show_col_types = should_show_types(),
skip_empty_rows = TRUE,
lazy = should_read_lazy()
)
```
The arguments are:
* `file`: the path to the CSV file to import
* `col_names`: a value of `TRUE` imports the first row of the `file` as column names and not included in the data frame. A value of `FALSE` creates automated column names. Alternatively, we can provide a vector of column names.
* `col_types`: by default, R infers the column variable types. We can also provide a column specification using `list()` or `cols()`; for example, use `col_types = cols(.default = "c")` to read all the columns as characters. Alternatively, we can use a string to specify the variable types for each column.
* `col_select`: the columns to include in the results
* `id`: a column for storing the file path. This is useful for keeping track of the input file when importing multiple CSVs at a time.
* `locale`: the location\-specific defaults for the file
* `na`: a character vector of values to interpret as missing
* `comment`: a character vector of values to interpret as comments
* `trim_ws`: a value of `TRUE` trims leading and trailing white space
* `skip`: number of lines to skip before importing the data
* `n_max`: maximum number of lines to read
* `guess_max`: maximum number of lines used for guessing column types
* `name_repair`: whether to check column names. By default, the column names are unique.
* `num_threads`: the number of processing threads to use for initial parsing and lazy reading of data
* `progress`: a value of `TRUE` displays a progress bar
* `show_col_types`: a value of `TRUE` displays the column types
* `skip_empty_rows`: a value of `TRUE` ignores blank rows
* `lazy`: a value of `TRUE` reads values lazily
The other functions share a similar syntax to `read_csv()`. To find more details, run `??` followed by the function name. For example, run `??read_tsv` in the Console for additional information on importing TSV files.
In the example below, we use {readr} to import a CSV file named ‘anes\_timeseries\_2020\_csv\_20220210\.csv’ into an R object called `anes_csv`. The `read_csv()` imports the file and stores the data in the `anes_csv` object. We can then use this object for further analysis.
```
library(readr)
anes_csv <-
read_csv(file = "data/anes_timeseries_2020_csv_20220210.csv")
```
A.2 Importing Excel files into R
--------------------------------
Excel, a widely used spreadsheet software program created by Microsoft, is a common file format in survey research. We can import Excel spreadsheets into the R environment using the {readxl} package. The package supports both the legacy `.xls` files and the modern `.xlsx` format.
To import Excel data into R, we can use the `read_excel()` function from the {readxl} package. This function offers a range of options for the import process. Let’s explore the syntax:
```
read_excel(
path,
sheet = NULL,
range = NULL,
col_names = TRUE,
col_types = NULL,
na = "",
trim_ws = TRUE,
skip = 0,
n_max = Inf,
guess_max = min(1000, n_max),
progress = readxl_progress(),
.name_repair = "unique"
)
```
The arguments are:
* `path`: the path to the Excel file to import
* `sheet`: the name or index of the sheet (sometimes called tabs) within the Excel file
* `range`: the range of cells to import (for example, `P15:T87`)
* `col_names`: indicates whether the first row of the dataset contains column names
* `col_types`: specifies the data types of columns
* `na`: defines the representation of missing values (for example, `NULL`)
* `trim_ws`: controls whether leading and trailing whitespaces should be trimmed
* `skip` and `n_max`: enable skipping rows and limit the number of rows imported
* `guess_max`: sets the maximum number of rows used for data type guessing
* `progress`: specifies a progress bar for large imports
* `.name_repair`: determines how column names are repaired if they are not valid
In the code example below, we import an Excel spreadsheet named ‘anes\_timeseries\_2020\_csv\_20220210\.xlsx’ into R. The resulting data is saved as a tibble in the `anes_excel` object, ready for further analysis.
```
library(readxl)
anes_excel <-
read_excel(path = "data/anes_timeseries_2020_csv_20220210.xlsx")
```
A.3 Importing Stata, SAS, and SPSS files into R
-----------------------------------------------
The {haven} package, also from the tidyverse ecosystem, imports various proprietary data formats: Stata `.dta` files, SPSS `.sav` files, and SAS `.sas7bdat` and `.sas7bcat` files ([Wickham, Miller, and Smith 2023](#ref-R-haven)). One of the notable strengths of the {haven} package is its ability to handle multiple proprietary formats within a unified framework. It offers dedicated functions for each supported proprietary format, making it straightforward to import data regardless of the program. Here, we introduce `read_dat()` for Stata files, `read_sav()` for SPSS files, and `read_sas()` for SAS files.
### A.3\.1 Syntax
Let’s explore the syntax for importing Stata files `.dat` files using `haven::read_dat()`:
```
read_dta(
file,
encoding = NULL,
col_select = NULL,
skip = 0,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `file`: the path to the proprietary data file to import
* `encoding`: specifies the character encoding of the data file
* `col_select`: selects specific columns for import
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
The syntax for `read_sav()` is similar to `read_dat()`:
```
read_sav(
file,
encoding = NULL,
user_na = FALSE,
col_select = NULL,
skip = 0,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `file`: the path to the proprietary data file to import
* `encoding`: specifies the character encoding of the data file
* `col_select`: selects specific columns for import
* `user_na`: a value of `TRUE` reads variables with user\-defined missing labels into `labelled_spss()` objects
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
The syntax for importing SAS files with `read_sas()` is as follows:
```
read_sas(
data_file,
catalog_file = NULL,
encoding = NULL,
catalog_encoding = encoding,
col_select = NULL,
skip = 0L,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `data_file`: the path to the proprietary data file to import
* `catalog_file`: the path to the catalog file to import
* `encoding`: specifies the character encoding of the data file
* `catalog_encoding`: specifies the character encoding of the catalog file
* `col_select`: selects specific columns for import
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
In the code examples below, we demonstrate how to import Stata, SPSS, and SAS files into R using the respective {haven} functions. The resulting data are stored in `anes_dta`, `anes_sav`, and `anes_sas` objects as tibbles, ready for use in R. For the Stata example, we show how to import the data from the {srvyrexploR} package to use in examples.
Stata:
```
library(haven)
anes_dta <-
read_dta(file = system.file("extdata",
"anes_2020_stata_example.dta",
package = "srvyrexploR"
))
```
SPSS:
```
library(haven)
anes_sav <-
read_sav(file = "data/anes_timeseries_2020_spss_20220210.sav")
```
SAS:
```
library(haven)
anes_sas <-
read_sas(
data_file = "data/anes_timeseries_2020_sas_20220210.sas7bdat"
)
```
### A.3\.2 Working with labeled data
Stata, SPSS, and SAS files can contain labeled variables and values. These labels provide descriptive information about categorical data, making them easier to understand and analyze. When importing data from Stata, SPSS, or SAS, we want to preserve these labels to maintain data fidelity.
Consider a variable like ‘Education Level’ with coded values (e.g., 1, 2, 3\). Without labels, these codes can be cryptic. However, with labels (‘High School Graduate,’ ‘Bachelor’s Degree,’ ‘Master’s Degree’), the data become more informative and easier to work with.
With the {haven} package, we have the capability to import and work with labeled data from Stata, SPSS, and SAS files. The package uses a special class of data called `haven_labelled` to store labeled variables. When a dataset label is defined in Stata, it is stored in the ‘label’ attribute of the tibble when imported, ensuring that the information is not lost.
We can use functions like `select()`, `glimpse()`, and `is.labelled()` to inspect the imported data and verify if the variables are labeled. Take a look at the ANES Stata file. Notice that categorical variables `V200002` and `V201006` are marked with a type of `<dbl+lbl>`. This notation indicates that these variables are labeled.
```
library(dplyr)
anes_dta %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl+lbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl+lbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1…
```
We can confirm their label status using the `haven::is.labelled()` function.
```
haven::is.labelled(anes_dta$V200002)
```
```
## [1] TRUE
```
To explore the labels further, we can use the `attributes()` function. This function provides insights into both the variable labels (`$label`) and the associated value labels (`$labels`).
```
attributes(anes_dta$V200002)
```
```
## $label
## [1] "Mode of interview: pre-election interview"
##
## $format.stata
## [1] "%10.0g"
##
## $class
## [1] "haven_labelled" "vctrs_vctr" "double"
##
## $labels
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
When we import a labeled dataset using {haven}, it results in a tibble containing both the data and label information. However, this is meant to be an intermediary data structure and not intended to be the final data format for analysis. Instead, we should convert it into a regular R data frame before continuing our data workflow. There are two primary methods to achieve this conversion: (1\) convert to factors or (2\) remove the labels.
#### Option 1: Convert the vector into a factor
Factors are native R data types for working with categorical data. They consist of integer values that correspond to character values, known as levels. Below is a dummy example of factors. The `factors` show the four different levels in the data: `strongly agree`, `agree`, `disagree`, and `strongly disagree`.
```
response <-
c("strongly agree", "agree", "agree", "disagree", "strongly disagree")
response_levels <-
c("strongly agree", "agree", "disagree", "strongly disagree")
factors <- factor(response, levels = response_levels)
factors
```
```
## [1] strongly agree agree agree
## [4] disagree strongly disagree
## Levels: strongly agree agree disagree strongly disagree
```
Factors are integer vectors, though they may look like character strings. We can confirm by looking at the vector’s structure:
```
glimpse(factors)
```
```
## Factor w/ 4 levels "strongly agree",..: 1 2 2 3 4
```
R’s factors differ from Stata, SPSS, or SAS labeled vectors. However, we can convert labeled variables into factors using the `as_factor()` function.
```
anes_dta %>%
transmute(V200002 = as_factor(V200002))
```
```
## # A tibble: 7,453 × 1
## V200002
## <fct>
## 1 3. Web
## 2 3. Web
## 3 3. Web
## 4 3. Web
## 5 3. Web
## 6 3. Web
## 7 3. Web
## 8 3. Web
## 9 3. Web
## 10 3. Web
## # ℹ 7,443 more rows
```
The `as_factor()` function can be applied to all columns in a data frame or individual ones. Below, we convert all `<dbl+lbl>` columns into factors.
```
anes_dta_factor <-
anes_dta %>%
as_factor()
anes_dta_factor %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <fct> 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. We…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <fct> 2. Somewhat interested, 3. Not much interested, 2. So…
```
#### Option 2: Strip the labels
The second option is to remove the labels altogether, converting the labeled data into a regular R data frame. To remove, or ‘zap,’ the labels from our tibble, we can use the {haven} package’s `zap_label()` and `zap_labels()` functions. This approach removes the labels but retains the data values in their original form.
The ANES Stata file columns contain variable labels. Using the `map()` function from {purrr}, we can review the labels using `attr`. In the example below, we list the first two variables and their labels. For instance, the label for `V200002` is “Mode of interview: pre\-election interview.”
```
purrr::map(anes_dta, ~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## [1] "2020 Case ID"
##
## $V200002
## [1] "Mode of interview: pre-election interview"
```
Use `zap_label()` to remove the variable labels but retain the value labels. Notice that the labels return as `NULL`.
```
zap_label(anes_dta) %>%
purrr::map(~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## NULL
##
## $V200002
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
To remove the value labels, use `zap_labels()`. Notice the previous `<dbl+lbl>` columns are now `<dbl>`.
```
zap_labels(anes_dta) %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1, 1,…
```
While it is important to convert labeled datasets into regular R data frames for working in R, the labels themselves often contain valuable information that provides context and meaning to the survey variables. To aid with interpretability and documentation, we can create a data dictionary from the labeled dataset. A data dictionary is a reference document that provides detailed information about the variables and values of a survey.
The {labelled} package offers a convenient function, `generate_dictionary()`, that creates data dictionaries directly from a labeled dataset ([Larmarange 2024](#ref-R-labelled)). This function extracts variable labels, value labels, and other metadata and organizes them into a structured document that we can browse and reference throughout our analysis.
Let’s create a data dictionary from the ANES Stata dataset as an example:
```
library(labelled)
dictionary <- generate_dictionary(anes_dta)
```
Once we’ve generated the data dictionary, we can take a look at the `V200002` variable and see the label, column type, number of missing entries, and associated values.
```
dictionary %>%
filter(variable == "V200002")
```
```
## pos variable label col_type missing values
## 2 V200002 Mode of interview: pre~ dbl+lbl 0 [1] 1. Video
## [2] 2. Telephone
## [3] 3. Web
```
### A.3\.3 Labeled missing data values
In survey data analysis, dealing with missing values is a crucial aspect of data preparation. Stata, SPSS, and SAS files each have their own method for handling missing values.
* Stata has “extended” missing values, `.A` through `.Z`.
* SAS has “special” missing values, `.A` through `.Z` and `._`.
* SPSS has per\-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.
SAS and Stata use a concept known as ‘tagged’ missing values, which extend R’s regular `NA`. A ‘tagged’ missing value is essentially an `NA` with an additional single\-character label. These values behave identically to regular `NA` in standard R operations while preserving the informative tag associated with the missing value.
Here is an example from the NORC at the University of Chicago’s 2018 General Society Survey, where Don’t Know (`DK`) responses are tagged as `NA(d)`, Inapplicable (`IAP`) responses are tagged as `NA(i)`, and `No Answer` responses are tagged as `NA(n)` ([Davern et al. 2021](#ref-gss-codebook)).
```
head(gss_dta$HEALTH)
#> <labelled<double>[6]>: condition of health
#> [1] 2 1 NA(i) NA(i) 1 2
#>
#> Labels:
#> value label
#> 1 excellent
#> 2 good
#> 3 fair
#> 4 poor
#> NA(d) DK
#> NA(i) IAP
#> NA(n) NA
```
In contrast, SPSS uses a different approach called ‘user\-defined values’ to denote missing values. Each column in an SPSS dataset can have up to three distinct values designated as missing or a specified range of missing values. To model these additional user\-defined missing values, {haven} provides the `labeled_spss()` subclass of `labeled()`. When importing SPSS data using {haven}, it ensures that user\-defined missing values are correctly handled. We can work with these data in R while preserving the unique missing value conventions from SPSS.
Here is what the GSS SPSS dataset looks like when loaded with {haven}.
```
head(gss_sps$HEALTH)
#> <labelled_spss<double>[6]>: Condition of health
#> [1] 2 1 0 0 1 2
#> Missing values: 0, 8, 9
#>
#> Labels:
#> value label
#> 0 IAP
#> 1 EXCELLENT
#> 2 GOOD
#> 3 FAIR
#> 4 POOR
#> 8 DK
#> 9 NA
```
### A.3\.1 Syntax
Let’s explore the syntax for importing Stata files `.dat` files using `haven::read_dat()`:
```
read_dta(
file,
encoding = NULL,
col_select = NULL,
skip = 0,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `file`: the path to the proprietary data file to import
* `encoding`: specifies the character encoding of the data file
* `col_select`: selects specific columns for import
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
The syntax for `read_sav()` is similar to `read_dat()`:
```
read_sav(
file,
encoding = NULL,
user_na = FALSE,
col_select = NULL,
skip = 0,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `file`: the path to the proprietary data file to import
* `encoding`: specifies the character encoding of the data file
* `col_select`: selects specific columns for import
* `user_na`: a value of `TRUE` reads variables with user\-defined missing labels into `labelled_spss()` objects
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
The syntax for importing SAS files with `read_sas()` is as follows:
```
read_sas(
data_file,
catalog_file = NULL,
encoding = NULL,
catalog_encoding = encoding,
col_select = NULL,
skip = 0L,
n_max = Inf,
.name_repair = "unique"
)
```
The arguments are:
* `data_file`: the path to the proprietary data file to import
* `catalog_file`: the path to the catalog file to import
* `encoding`: specifies the character encoding of the data file
* `catalog_encoding`: specifies the character encoding of the catalog file
* `col_select`: selects specific columns for import
* `skip` and `n_max`: control the number of rows skipped and the maximum number of rows imported
* `.name_repair`: determines how column names are repaired if they are not valid
In the code examples below, we demonstrate how to import Stata, SPSS, and SAS files into R using the respective {haven} functions. The resulting data are stored in `anes_dta`, `anes_sav`, and `anes_sas` objects as tibbles, ready for use in R. For the Stata example, we show how to import the data from the {srvyrexploR} package to use in examples.
Stata:
```
library(haven)
anes_dta <-
read_dta(file = system.file("extdata",
"anes_2020_stata_example.dta",
package = "srvyrexploR"
))
```
SPSS:
```
library(haven)
anes_sav <-
read_sav(file = "data/anes_timeseries_2020_spss_20220210.sav")
```
SAS:
```
library(haven)
anes_sas <-
read_sas(
data_file = "data/anes_timeseries_2020_sas_20220210.sas7bdat"
)
```
### A.3\.2 Working with labeled data
Stata, SPSS, and SAS files can contain labeled variables and values. These labels provide descriptive information about categorical data, making them easier to understand and analyze. When importing data from Stata, SPSS, or SAS, we want to preserve these labels to maintain data fidelity.
Consider a variable like ‘Education Level’ with coded values (e.g., 1, 2, 3\). Without labels, these codes can be cryptic. However, with labels (‘High School Graduate,’ ‘Bachelor’s Degree,’ ‘Master’s Degree’), the data become more informative and easier to work with.
With the {haven} package, we have the capability to import and work with labeled data from Stata, SPSS, and SAS files. The package uses a special class of data called `haven_labelled` to store labeled variables. When a dataset label is defined in Stata, it is stored in the ‘label’ attribute of the tibble when imported, ensuring that the information is not lost.
We can use functions like `select()`, `glimpse()`, and `is.labelled()` to inspect the imported data and verify if the variables are labeled. Take a look at the ANES Stata file. Notice that categorical variables `V200002` and `V201006` are marked with a type of `<dbl+lbl>`. This notation indicates that these variables are labeled.
```
library(dplyr)
anes_dta %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl+lbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl+lbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1…
```
We can confirm their label status using the `haven::is.labelled()` function.
```
haven::is.labelled(anes_dta$V200002)
```
```
## [1] TRUE
```
To explore the labels further, we can use the `attributes()` function. This function provides insights into both the variable labels (`$label`) and the associated value labels (`$labels`).
```
attributes(anes_dta$V200002)
```
```
## $label
## [1] "Mode of interview: pre-election interview"
##
## $format.stata
## [1] "%10.0g"
##
## $class
## [1] "haven_labelled" "vctrs_vctr" "double"
##
## $labels
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
When we import a labeled dataset using {haven}, it results in a tibble containing both the data and label information. However, this is meant to be an intermediary data structure and not intended to be the final data format for analysis. Instead, we should convert it into a regular R data frame before continuing our data workflow. There are two primary methods to achieve this conversion: (1\) convert to factors or (2\) remove the labels.
#### Option 1: Convert the vector into a factor
Factors are native R data types for working with categorical data. They consist of integer values that correspond to character values, known as levels. Below is a dummy example of factors. The `factors` show the four different levels in the data: `strongly agree`, `agree`, `disagree`, and `strongly disagree`.
```
response <-
c("strongly agree", "agree", "agree", "disagree", "strongly disagree")
response_levels <-
c("strongly agree", "agree", "disagree", "strongly disagree")
factors <- factor(response, levels = response_levels)
factors
```
```
## [1] strongly agree agree agree
## [4] disagree strongly disagree
## Levels: strongly agree agree disagree strongly disagree
```
Factors are integer vectors, though they may look like character strings. We can confirm by looking at the vector’s structure:
```
glimpse(factors)
```
```
## Factor w/ 4 levels "strongly agree",..: 1 2 2 3 4
```
R’s factors differ from Stata, SPSS, or SAS labeled vectors. However, we can convert labeled variables into factors using the `as_factor()` function.
```
anes_dta %>%
transmute(V200002 = as_factor(V200002))
```
```
## # A tibble: 7,453 × 1
## V200002
## <fct>
## 1 3. Web
## 2 3. Web
## 3 3. Web
## 4 3. Web
## 5 3. Web
## 6 3. Web
## 7 3. Web
## 8 3. Web
## 9 3. Web
## 10 3. Web
## # ℹ 7,443 more rows
```
The `as_factor()` function can be applied to all columns in a data frame or individual ones. Below, we convert all `<dbl+lbl>` columns into factors.
```
anes_dta_factor <-
anes_dta %>%
as_factor()
anes_dta_factor %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <fct> 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. We…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <fct> 2. Somewhat interested, 3. Not much interested, 2. So…
```
#### Option 2: Strip the labels
The second option is to remove the labels altogether, converting the labeled data into a regular R data frame. To remove, or ‘zap,’ the labels from our tibble, we can use the {haven} package’s `zap_label()` and `zap_labels()` functions. This approach removes the labels but retains the data values in their original form.
The ANES Stata file columns contain variable labels. Using the `map()` function from {purrr}, we can review the labels using `attr`. In the example below, we list the first two variables and their labels. For instance, the label for `V200002` is “Mode of interview: pre\-election interview.”
```
purrr::map(anes_dta, ~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## [1] "2020 Case ID"
##
## $V200002
## [1] "Mode of interview: pre-election interview"
```
Use `zap_label()` to remove the variable labels but retain the value labels. Notice that the labels return as `NULL`.
```
zap_label(anes_dta) %>%
purrr::map(~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## NULL
##
## $V200002
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
To remove the value labels, use `zap_labels()`. Notice the previous `<dbl+lbl>` columns are now `<dbl>`.
```
zap_labels(anes_dta) %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1, 1,…
```
While it is important to convert labeled datasets into regular R data frames for working in R, the labels themselves often contain valuable information that provides context and meaning to the survey variables. To aid with interpretability and documentation, we can create a data dictionary from the labeled dataset. A data dictionary is a reference document that provides detailed information about the variables and values of a survey.
The {labelled} package offers a convenient function, `generate_dictionary()`, that creates data dictionaries directly from a labeled dataset ([Larmarange 2024](#ref-R-labelled)). This function extracts variable labels, value labels, and other metadata and organizes them into a structured document that we can browse and reference throughout our analysis.
Let’s create a data dictionary from the ANES Stata dataset as an example:
```
library(labelled)
dictionary <- generate_dictionary(anes_dta)
```
Once we’ve generated the data dictionary, we can take a look at the `V200002` variable and see the label, column type, number of missing entries, and associated values.
```
dictionary %>%
filter(variable == "V200002")
```
```
## pos variable label col_type missing values
## 2 V200002 Mode of interview: pre~ dbl+lbl 0 [1] 1. Video
## [2] 2. Telephone
## [3] 3. Web
```
#### Option 1: Convert the vector into a factor
Factors are native R data types for working with categorical data. They consist of integer values that correspond to character values, known as levels. Below is a dummy example of factors. The `factors` show the four different levels in the data: `strongly agree`, `agree`, `disagree`, and `strongly disagree`.
```
response <-
c("strongly agree", "agree", "agree", "disagree", "strongly disagree")
response_levels <-
c("strongly agree", "agree", "disagree", "strongly disagree")
factors <- factor(response, levels = response_levels)
factors
```
```
## [1] strongly agree agree agree
## [4] disagree strongly disagree
## Levels: strongly agree agree disagree strongly disagree
```
Factors are integer vectors, though they may look like character strings. We can confirm by looking at the vector’s structure:
```
glimpse(factors)
```
```
## Factor w/ 4 levels "strongly agree",..: 1 2 2 3 4
```
R’s factors differ from Stata, SPSS, or SAS labeled vectors. However, we can convert labeled variables into factors using the `as_factor()` function.
```
anes_dta %>%
transmute(V200002 = as_factor(V200002))
```
```
## # A tibble: 7,453 × 1
## V200002
## <fct>
## 1 3. Web
## 2 3. Web
## 3 3. Web
## 4 3. Web
## 5 3. Web
## 6 3. Web
## 7 3. Web
## 8 3. Web
## 9 3. Web
## 10 3. Web
## # ℹ 7,443 more rows
```
The `as_factor()` function can be applied to all columns in a data frame or individual ones. Below, we convert all `<dbl+lbl>` columns into factors.
```
anes_dta_factor <-
anes_dta %>%
as_factor()
anes_dta_factor %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <fct> 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. Web, 3. We…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <fct> 2. Somewhat interested, 3. Not much interested, 2. So…
```
#### Option 2: Strip the labels
The second option is to remove the labels altogether, converting the labeled data into a regular R data frame. To remove, or ‘zap,’ the labels from our tibble, we can use the {haven} package’s `zap_label()` and `zap_labels()` functions. This approach removes the labels but retains the data values in their original form.
The ANES Stata file columns contain variable labels. Using the `map()` function from {purrr}, we can review the labels with `attr()`. In the example below, we list the first two variables and their labels. For instance, the label for `V200002` is “Mode of interview: pre\-election interview.”
```
purrr::map(anes_dta, ~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## [1] "2020 Case ID"
##
## $V200002
## [1] "Mode of interview: pre-election interview"
```
Use `zap_label()` to remove the variable labels but retain the value labels. Notice that the labels return as `NULL`.
```
zap_label(anes_dta) %>%
purrr::map(~ attr(.x, "label")) %>%
head(2)
```
```
## $V200001
## NULL
##
## $V200002
## 1. Video 2. Telephone 3. Web
## 1 2 3
```
To remove the value labels, use `zap_labels()`. Notice the previous `<dbl+lbl>` columns are now `<dbl>`.
```
zap_labels(anes_dta) %>%
select(1:6) %>%
glimpse()
```
```
## Rows: 7,453
## Columns: 6
## $ V200001 <dbl> 200015, 200022, 200039, 200046, 200053, 200060, 20008…
## $ V200002 <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,…
## $ V200010b <dbl> 1.0057, 1.1635, 0.7687, 0.5210, 0.9658, 0.2347, 0.440…
## $ V200010d <dbl> 9, 26, 41, 29, 23, 37, 7, 37, 32, 41, 22, 7, 38, 21, …
## $ V200010c <dbl> 2, 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 1,…
## $ V201006 <dbl> 2, 3, 2, 3, 2, 1, 2, 3, 2, 2, 2, 2, 2, 1, 2, 1, 1, 1,…
```
While it is important to convert labeled datasets into regular R data frames for working in R, the labels themselves often contain valuable information that provides context and meaning to the survey variables. To aid with interpretability and documentation, we can create a data dictionary from the labeled dataset. A data dictionary is a reference document that provides detailed information about the variables and values of a survey.
The {labelled} package offers a convenient function, `generate_dictionary()`, that creates data dictionaries directly from a labeled dataset ([Larmarange 2024](#ref-R-labelled)). This function extracts variable labels, value labels, and other metadata and organizes them into a structured document that we can browse and reference throughout our analysis.
Let’s create a data dictionary from the ANES Stata dataset as an example:
```
library(labelled)
dictionary <- generate_dictionary(anes_dta)
```
Once we’ve generated the data dictionary, we can take a look at the `V200002` variable and see the label, column type, number of missing entries, and associated values.
```
dictionary %>%
filter(variable == "V200002")
```
```
## pos variable label col_type missing values
## 2 V200002 Mode of interview: pre~ dbl+lbl 0 [1] 1. Video
## [2] 2. Telephone
## [3] 3. Web
```
### A.3\.3 Labeled missing data values
In survey data analysis, dealing with missing values is a crucial aspect of data preparation. Stata, SPSS, and SAS files each have their own method for handling missing values.
* Stata has “extended” missing values, `.A` through `.Z`.
* SAS has “special” missing values, `.A` through `.Z` and `._`.
* SPSS has per\-column “user” missing values. Each column can declare up to three distinct values or a range of values (plus one distinct value) that should be treated as missing.
SAS and Stata use a concept known as ‘tagged’ missing values, which extend R’s regular `NA`. A ‘tagged’ missing value is essentially an `NA` with an additional single\-character label. These values behave identically to regular `NA` in standard R operations while preserving the informative tag associated with the missing value.
Here is an example from NORC at the University of Chicago’s 2018 General Social Survey, where Don’t Know (`DK`) responses are tagged as `NA(d)`, Inapplicable (`IAP`) responses are tagged as `NA(i)`, and No Answer (`NA`) responses are tagged as `NA(n)` ([Davern et al. 2021](#ref-gss-codebook)).
```
head(gss_dta$HEALTH)
#> <labelled<double>[6]>: condition of health
#> [1] 2 1 NA(i) NA(i) 1 2
#>
#> Labels:
#> value label
#> 1 excellent
#> 2 good
#> 3 fair
#> 4 poor
#> NA(d) DK
#> NA(i) IAP
#> NA(n) NA
```
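To see how these tagged values behave in R, below is a minimal sketch using {haven}’s `tagged_na()` constructor on a dummy vector (not the GSS data). The tagged values count as missing in ordinary R operations, while `na_tag()` recovers the tag.
```
library(haven)

# Dummy vector with two kinds of tagged missing values
x <- c(1, 2, tagged_na("d"), 4, tagged_na("i"))

is.na(x)               # tagged values are ordinary NAs to base R
mean(x, na.rm = TRUE)  # and are dropped like regular NAs
na_tag(x)              # but the tags ("d", "i") are still retrievable
print_tagged_na(x)     # print the vector with its tags visible
```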
In contrast, SPSS uses a different approach called ‘user\-defined values’ to denote missing values. Each column in an SPSS dataset can have up to three distinct values designated as missing or a specified range of missing values. To model these additional user\-defined missing values, {haven} provides the `labelled_spss()` subclass of `labelled()`. When importing SPSS data, {haven} ensures that user\-defined missing values are correctly handled, and we can work with these data in R while preserving the unique missing value conventions from SPSS.
Here is what the GSS SPSS dataset looks like when loaded with {haven}.
```
head(gss_sps$HEALTH)
#> <labelled_spss<double>[6]>: Condition of health
#> [1] 2 1 0 0 1 2
#> Missing values: 0, 8, 9
#>
#> Labels:
#> value label
#> 0 IAP
#> 1 EXCELLENT
#> 2 GOOD
#> 3 FAIR
#> 4 POOR
#> 8 DK
#> 9 NA
```
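When we are ready to treat these user\-defined values as ordinary missing data, {haven} provides `zap_missing()`, which converts them to regular `NA`. Below is a minimal sketch using a small dummy `labelled_spss()` vector rather than the full GSS file.
```
library(haven)

# Dummy SPSS-style vector where 0, 8, and 9 are declared user-defined missing
health <- labelled_spss(
  c(2, 1, 0, 0, 9, 2),
  labels = c(IAP = 0, EXCELLENT = 1, GOOD = 2, FAIR = 3, POOR = 4, DK = 8, "NA" = 9),
  na_values = c(0, 8, 9)
)

is.na(health)        # user-defined missing values are treated as missing
zap_missing(health)  # convert them to regular NA while keeping the value labels
```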
A.4 Importing data from APIs into R
-----------------------------------
In addition to working with data saved as files, we may also need to retrieve data through Application Programming Interfaces (APIs). APIs provide a structured way to access data hosted on external servers and import them directly into R for analysis.
To access these data, we need to understand how to construct API requests. Each API has unique endpoints, parameters, and authentication requirements. Pay attention to:
* Endpoints: These are URLs that point to specific data or services
* Parameters: Information passed to the API to customize the request (e.g., date ranges, filters)
* Authentication: APIs may require API keys or tokens for access
* Rate Limits: APIs may have usage limits, so be aware of any rate limits or quotas
Typically, we begin by making a GET request to an API endpoint. The {httr2} package allows us to generate and process HTTP requests ([Wickham 2024](#ref-R-httr2)). We build a request that points to the URL containing the data we would like and then perform it; by default, `req_perform()` sends a GET request:
```
library(httr2)

api_url <- "https://api.example.com/survey-data"

# Build the request and perform it; the result is a response object
response <- request(api_url) %>%
  req_perform()
```
Once we perform the request, we obtain the result as the `response` object. The data often come in JSON format. We can extract the response body and parse it using the {jsonlite} package, allowing us to work with the data in R ([Ooms 2014](#ref-jsonliteooms)). The `fromJSON()` function, shown below, converts JSON data to an R object.
```
library(jsonlite)

# Pull the JSON body out of the response as text and convert it to an R object
survey_data <- fromJSON(resp_body_string(response))
```
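Many APIs also need the parameters and authentication described in the list above. Below is a hedged sketch of how these could be layered onto an {httr2} request; the endpoint, query parameters, and token are placeholders rather than a real API.
```
# Placeholder endpoint, parameters, and token -- adjust for the API you use
response <- request("https://api.example.com/survey-data") %>%
  req_url_query(start_date = "2020-01-01", end_date = "2020-12-31") %>%
  req_auth_bearer_token("YOUR-API-TOKEN") %>%
  req_throttle(rate = 30 / 60) %>%  # stay under a limit of 30 requests per minute
  req_perform()

survey_data <- fromJSON(resp_body_string(response))
```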
Note that these are dummy examples. Please review the documentation to understand how to make requests from a specific API.
R offers several packages that simplify API access by providing ready\-to\-use functions for popular APIs. These packages are called “wrappers,” as they “wrap” the API in R to make it easier to use. For example, the {tidycensus} package used in this book simplifies access to U.S. Census data, allowing us to retrieve data with R commands instead of writing API requests from scratch ([Walker and Herman 2024](#ref-R-tidycensus)). Behind the scenes, `get_pums()` makes a GET request to the Census API, and the {tidycensus} functions convert the response into an R\-friendly format. For example, if we are interested in the age, sex, race, and Hispanic origin of those in the American Community Survey sample of Durham County, North Carolina[31](#fn31), we can use the `get_pums()` function to extract the microdata as shown in the code below. We can then use the replicate weights to create a survey object and calculate estimates for Durham County (see the sketch after the output below).
```
library(tidycensus)
durh_pums <- get_pums(
variables = c("PUMA", "SEX", "AGEP", "RAC1P", "HISP"),
state = "NC",
puma = c("01301", "01302"),
survey = "acs1",
year = 2022,
rep_weights = "person"
)
```
```
## Getting data from the 2022 1-year ACS Public Use Microdata Sample
```
```
## Warning: • You have not set a Census API key. Users without a key are limited to 500
## queries per day and may experience performance limitations.
## ℹ For best results, get a Census API key at
## http://api.census.gov/data/key_signup.html and then supply the key to the
## `census_api_key()` function to use it throughout your tidycensus session.
## This warning is displayed once per session.
```
```
durh_pums
```
```
## # A tibble: 2,724 × 90
## SERIALNO SPORDER AGEP PUMA ST SEX HISP RAC1P WGTP PWGTP
## <chr> <dbl> <dbl> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl>
## 1 2022GQ0002044 1 54 01301 37 1 01 2 0 17
## 2 2022GQ0002319 1 20 01301 37 2 01 2 0 64
## 3 2022GQ0003518 1 22 01301 37 1 01 6 0 52
## 4 2022GQ0003930 1 62 01302 37 1 01 2 0 17
## 5 2022GQ0005753 1 19 01301 37 2 24 8 0 29
## 6 2022GQ0006554 1 22 01301 37 2 01 6 0 59
## 7 2022GQ0007092 1 70 01301 37 1 01 2 0 55
## 8 2022GQ0007502 1 36 01302 37 1 01 1 0 39
## 9 2022GQ0008767 1 74 01301 37 1 01 1 0 15
## 10 2022GQ0008956 1 22 01302 37 2 24 1 0 43
## # ℹ 2,714 more rows
## # ℹ 80 more variables: PWGTP1 <dbl>, PWGTP2 <dbl>, PWGTP3 <dbl>,
## # PWGTP4 <dbl>, PWGTP5 <dbl>, PWGTP6 <dbl>, PWGTP7 <dbl>,
## # PWGTP8 <dbl>, PWGTP9 <dbl>, PWGTP10 <dbl>, PWGTP11 <dbl>,
## # PWGTP12 <dbl>, PWGTP13 <dbl>, PWGTP14 <dbl>, PWGTP15 <dbl>,
## # PWGTP16 <dbl>, PWGTP17 <dbl>, PWGTP18 <dbl>, PWGTP19 <dbl>,
## # PWGTP20 <dbl>, PWGTP21 <dbl>, PWGTP22 <dbl>, PWGTP23 <dbl>, …
```
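As noted above, we can then build a replicate\-weight survey design from the person weight (`PWGTP`) and the 80 replicate weights. Below is a minimal sketch using {srvyr}’s `as_survey_rep()`; the jackknife settings shown (`type = "JK1"`, `scale = 4/80`, `mse = TRUE`) follow the usual guidance for ACS PUMS, but verify them against the PUMS documentation before relying on the estimates.
```
library(srvyr)

durh_des <- durh_pums %>%
  as_survey_rep(
    weights = PWGTP,                         # person weight
    repweights = num_range("PWGTP", 1:80),   # 80 person-level replicate weights
    type = "JK1",
    scale = 4 / 80,
    mse = TRUE
  )

# Example estimate: mean age of the Durham County sample
durh_des %>%
  summarize(mean_age = survey_mean(AGEP))
```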
In Chapter [4](c04-getting-started.html#c04-getting-started), we used the {censusapi} package to get data from the Census data API for the Current Population Survey. To discover if there is an R package that directly interfaces with a specific survey or data source, search for “\[survey] R wrapper” or “\[data source] R package” online.
A.5 Importing data from databases in R
--------------------------------------
Databases provide a secure and organized solution as the volume and complexity of data grow. We can access, manage, and update data stored in databases in a systematic way. Because of how the data are organized, teams can draw from the same source and obtain any metadata that would be helpful for analysis.
There are various ways of using R to work with databases. If using RStudio, we can connect to different databases through the Connections Pane in the top right of the IDE. We can also use packages like {DBI} and {odbc} to access database tables directly from R scripts. Here is an example script connecting to a database:
```
con <-
DBI::dbConnect(
odbc::odbc(),
Driver = "[driver name]",
Server = "[server path]",
UID = rstudioapi::askForPassword("Database user"),
PWD = rstudioapi::askForPassword("Database password"),
Database = "[database name]",
Warehouse = "[warehouse name]",
Schema = "[schema name]"
)
```
The {dbplyr} and {dplyr} packages allow us to make queries and run data analysis entirely using {dplyr} syntax. All of the code can be written in R, so we do not have to switch between R and SQL to explore the data. Here is some sample code:
```
library(dplyr)

# Build a lazy query against the "bank" table; dbplyr translates it to SQL
q1 <- tbl(con, "bank") %>%
  group_by(month_idx, year, month) %>%
  summarize(subscribe = sum(ifelse(term_deposit == "yes", 1, 0)),
            total = n())

# Display the SQL that dbplyr generates for this query
show_query(q1)
```
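The query `q1` is not executed until we ask for the results. Here is a minimal sketch of running it on the database and returning a local tibble with `collect()`:
```
# Run the query on the database and bring the results into R as a tibble
q1 %>%
  collect()
```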
Be sure to check the documentation to configure a database connection.
A.6 Importing data from other formats
-------------------------------------
R also offers dedicated packages such as {googlesheets4} for Google Sheets or {qualtRics} for Qualtrics. With less common or proprietary file formats, the broader data science community can often provide guidance. Online resources like [Stack Overflow](https://stackoverflow.com/) and dedicated forums like [Posit Community](https://forum.posit.co/) are valuable sources of information for importing data into R.
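As an illustration, here is a minimal, hedged sketch of reading a survey stored in a Google Sheet with {googlesheets4}; the sheet URL is a placeholder, and the call may prompt for Google authentication.
```
library(googlesheets4)

# Placeholder URL -- replace with a sheet you have access to
sheet_url <- "https://docs.google.com/spreadsheets/d/your-sheet-id"
survey_sheet <- read_sheet(sheet_url)
```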
B ANES derived variable codebook
================================
The full codebook with the original variables is available at American National Election Studies ([2022](#ref-anes-cb)).
This is a codebook for the ANES data used in this book (`anes_2020`) from the {srvyrexploR} package.
B.1 ADMIN
---------
#### V200001
Description: 2020 Case ID
Variable class: numeric
#### CaseID
Description: 2020 Case ID
Variable class: numeric
#### V200002
Description: Mode of interview: pre\-election interview
Variable class: haven\_labelled, vctrs\_vctr, double
| V200002 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| 1 | 1. Video | 274 | 0\.037 |
| 2 | 2. Telephone | 115 | 0\.015 |
| 3 | 3. Web | 7064 | 0\.948 |
| Total | * | 7453 | 1\.000 |
#### InterviewMode
Description: Mode of interview: pre\-election interview
Variable class: factor
| InterviewMode | n | Unweighted Freq |
| --- | --- | --- |
| Video | 274 | 0\.037 |
| Telephone | 115 | 0\.015 |
| Web | 7064 | 0\.948 |
| Total | 7453 | 1\.000 |
B.2 WEIGHTS
-----------
#### V200010b
Description: Full sample post\-election weight
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0\.0083 | 0\.6863 | 6\.651 |
#### Weight
Description: Full sample post\-election weight
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0\.0083 | 0\.6863 | 6\.651 |
#### V200010c
Description: Full sample variance unit
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1 | 2 | 3 |
#### VarUnit
Description: Full sample variance unit
Variable class: factor
| VarUnit | n | Unweighted Freq |
| --- | --- | --- |
| 1 | 3689 | 0\.495 |
| 2 | 3750 | 0\.503 |
| 3 | 14 | 0\.002 |
| Total | 7453 | 1\.000 |
#### V200010d
Description: Full sample variance stratum
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1 | 24 | 50 |
#### Stratum
Description: Full sample variance stratum
Variable class: factor
| Stratum | n | Unweighted Freq |
| --- | --- | --- |
| 1 | 167 | 0\.022 |
| 2 | 148 | 0\.020 |
| 3 | 158 | 0\.021 |
| 4 | 151 | 0\.020 |
| 5 | 147 | 0\.020 |
| 6 | 172 | 0\.023 |
| 7 | 163 | 0\.022 |
| 8 | 159 | 0\.021 |
| 9 | 160 | 0\.021 |
| 10 | 159 | 0\.021 |
| 11 | 137 | 0\.018 |
| 12 | 179 | 0\.024 |
| 13 | 148 | 0\.020 |
| 14 | 160 | 0\.021 |
| 15 | 159 | 0\.021 |
| 16 | 148 | 0\.020 |
| 17 | 158 | 0\.021 |
| 18 | 156 | 0\.021 |
| 19 | 154 | 0\.021 |
| 20 | 144 | 0\.019 |
| 21 | 170 | 0\.023 |
| 22 | 146 | 0\.020 |
| 23 | 165 | 0\.022 |
| 24 | 147 | 0\.020 |
| 25 | 169 | 0\.023 |
| 26 | 165 | 0\.022 |
| 27 | 172 | 0\.023 |
| 28 | 133 | 0\.018 |
| 29 | 157 | 0\.021 |
| 30 | 167 | 0\.022 |
| 31 | 154 | 0\.021 |
| 32 | 143 | 0\.019 |
| 33 | 143 | 0\.019 |
| 34 | 124 | 0\.017 |
| 35 | 138 | 0\.019 |
| 36 | 130 | 0\.017 |
| 37 | 136 | 0\.018 |
| 38 | 145 | 0\.019 |
| 39 | 140 | 0\.019 |
| 40 | 125 | 0\.017 |
| 41 | 158 | 0\.021 |
| 42 | 146 | 0\.020 |
| 43 | 130 | 0\.017 |
| 44 | 126 | 0\.017 |
| 45 | 126 | 0\.017 |
| 46 | 135 | 0\.018 |
| 47 | 133 | 0\.018 |
| 48 | 140 | 0\.019 |
| 49 | 133 | 0\.018 |
| 50 | 130 | 0\.017 |
| Total | 7453 | 1\.000 |
B.3 PRE\-ELECTION SURVEY QUESTIONNAIRE
--------------------------------------
#### V201006
Description: PRE: How interested in following campaigns
Question text: Some people don’t pay much attention to political campaigns. How about you? Would you say that you have been very much interested, somewhat interested or not much interested in the political campaigns so far this year?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201006 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 1 | 0\.000 |
| 1 | 1. Very much interested | 3940 | 0\.529 |
| 2 | 2. Somewhat interested | 2569 | 0\.345 |
| 3 | 3. Not much interested | 943 | 0\.127 |
| Total | * | 7453 | 1\.000 |
#### CampaignInterest
Description: PRE: How interested in following campaigns
Question text: Some people don’t pay much attention to political campaigns. How about you? Would you say that you have been very much interested, somewhat interested or not much interested in the political campaigns so far this year?
Variable class: factor
| CampaignInterest | n | Unweighted Freq |
| --- | --- | --- |
| Very much interested | 3940 | 0\.529 |
| Somewhat interested | 2569 | 0\.345 |
| Not much interested | 943 | 0\.127 |
| NA | 1 | 0\.000 |
| Total | 7453 | 1\.000 |
#### V201023
Description: PRE: Confirmation voted (early) in November 3 Election (2020\)
Question text: Just to be clear, I’m recording that you already voted in the election that is scheduled to take place on November 3\. Is that right?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201023 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 6961 | 0\.934 |
| 1 | 1. Yes, voted | 375 | 0\.050 |
| 2 | 2. No, have not voted | 115 | 0\.015 |
| Total | * | 7453 | 1\.000 |
#### EarlyVote2020
Description: PRE: Confirmation voted (early) in November 3 Election (2020\)
Question text: Just to be clear, I’m recording that you already voted in the election that is scheduled to take place on November 3\. Is that right?
Variable class: factor
| EarlyVote2020 | n | Unweighted Freq |
| --- | --- | --- |
| Yes | 375 | 0\.050 |
| No | 115 | 0\.015 |
| NA | 6963 | 0\.934 |
| Total | 7453 | 1\.000 |
#### V201024
Description: PRE: In what manner did R vote
Question text: Which one of the following best describes how you voted?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201024 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 7078 | 0\.950 |
| 1 | 1. Definitely voted in person at a polling place before election day | 101 | 0\.014 |
| 2 | 2. Definitely voted by mailing a ballot to elections officials before election day | 242 | 0\.032 |
| 3 | 3. Definitely voted in some other way | 28 | 0\.004 |
| 4 | 4. Not completely sure whether you voted or not | 3 | 0\.000 |
| Total | * | 7453 | 1\.000 |
#### V201025x
Description: PRE: SUMMARY: Registration and early vote status
Variable class: haven\_labelled, vctrs\_vctr, double
| V201025x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-4 | \-4\. Technical error | 1 | 0\.000 |
| 1 | 1. Not registered (or DK/RF), does not intend to register (or DK/RF intent) | 339 | 0\.045 |
| 2 | 2. Not registered (or DK/RF), intends to register | 290 | 0\.039 |
| 3 | 3. Registered but did not vote early (or DK/RF) | 6452 | 0\.866 |
| 4 | 4. Registered and voted early | 371 | 0\.050 |
| Total | * | 7453 | 1\.000 |
#### V201028
Description: PRE: DID R VOTE FOR PRESIDENT
Question text: How about the election for President? Did you vote for a candidate for President?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201028 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 7081 | 0\.950 |
| 1 | 1. Yes, voted for President | 361 | 0\.048 |
| 2 | 2. No, didn’t vote for President | 10 | 0\.001 |
| Total | * | 7453 | 1\.000 |
#### V201029
Description: PRE: For whom did R vote for President
Question text: Who did you vote for? \[Joe Biden, Donald Trump/Donald Trump, Joe Biden], Jo Jorgensen, Howie Hawkins, or someone else?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201029 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 10 | 0\.001 |
| \-1 | \-1\. Inapplicable | 7092 | 0\.952 |
| 1 | 1. Joe Biden | 239 | 0\.032 |
| 2 | 2. Donald Trump | 103 | 0\.014 |
| 3 | 3. Jo Jorgensen | 2 | 0\.000 |
| 4 | 4. Howie Hawkins | 1 | 0\.000 |
| 5 | 5. Other candidate {SPECIFY} | 4 | 0\.001 |
| 12 | 12. Specified as refused | 2 | 0\.000 |
| Total | * | 7453 | 1\.000 |
#### V201101
Description: PRE: Did R vote for President in 2016 \[revised]
Question text: Four years ago, in 2016, Hillary Clinton ran on the Democratic ticket against Donald Trump for the Republicans. We talk to many people who tell us they did not vote. And we talk to a few people who tell us they did vote, who really did not. We can tell they did not vote by checking with official government records. What about you? If we check the official government voter records, will they show that you voted in the 2016 presidential election, or that you did not vote in that election?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201101 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 13 | 0\.002 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 3780 | 0\.507 |
| 1 | 1. Yes, voted | 2780 | 0\.373 |
| 2 | 2. No, didn’t vote | 879 | 0\.118 |
| Total | * | 7453 | 1\.000 |
#### V201102
Description: PRE: Did R vote for President in 2016
Question text: Four years ago, in 2016, Hillary Clinton ran on the Democratic ticket against Donald Trump for the Republicans. Do you remember for sure whether or not you voted in that election?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201102 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 6 | 0\.001 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 3673 | 0\.493 |
| 1 | 1. Yes, voted | 3030 | 0\.407 |
| 2 | 2. No, didn’t vote | 743 | 0\.100 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2016
Description: PRE: Did R vote for President in 2016
Question text: Derived from V201102, V201101
Variable class: factor
| VotedPres2016 | n | Unweighted Freq |
| --- | --- | --- |
| Yes | 5810 | 0\.780 |
| No | 1622 | 0\.218 |
| NA | 21 | 0\.003 |
| Total | 7453 | 1\.000 |
#### V201103
Description: PRE: Recall of last (2016\) Presidential vote choice
Question text: Which one did you vote for?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201103 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 41 | 0\.006 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 1643 | 0\.220 |
| 1 | 1. Hillary Clinton | 2911 | 0\.391 |
| 2 | 2. Donald Trump | 2466 | 0\.331 |
| 5 | 5. Other {SPECIFY} | 390 | 0\.052 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2016\_selection
Description: PRE: Recall of last (2016\) Presidential vote choice
Question text: Which one did you vote for?
Variable class: factor
| VotedPres2016\_selection | n | Unweighted Freq |
| --- | --- | --- |
| Clinton | 2911 | 0\.391 |
| Trump | 2466 | 0\.331 |
| Other | 390 | 0\.052 |
| NA | 1686 | 0\.226 |
| Total | 7453 | 1\.000 |
#### V201228
Description: PRE: Party ID: Does R think of self as Democrat, Republican, or Independent
Question text: Generally speaking, do you usually think of yourself as \[a Democrat, a Republican / a Republican, a Democrat], an independent, or what?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201228 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 37 | 0\.005 |
| \-8 | \-8\. Don’t know | 4 | 0\.001 |
| \-4 | \-4\. Technical error | 1 | 0\.000 |
| 0 | 0. No preference {VOL \- video/phone only} | 6 | 0\.001 |
| 1 | 1. Democrat | 2589 | 0\.347 |
| 2 | 2. Republican | 2304 | 0\.309 |
| 3 | 3. Independent | 2277 | 0\.306 |
| 5 | 5. Other party {SPECIFY} | 235 | 0\.032 |
| Total | * | 7453 | 1\.000 |
#### V201229
Description: PRE: Party Identification strong \- Democrat Republican
Question text: Would you call yourself a strong \[Democrat / Republican] or a not very strong \[Democrat / Republican]?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201229 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 2560 | 0\.343 |
| 1 | 1. Strong | 3341 | 0\.448 |
| 2 | 2. Not very strong | 1548 | 0\.208 |
| Total | * | 7453 | 1\.000 |
#### V201230
Description: PRE: No Party Identification \- closer to Democratic Party or Republican Party
Question text: Do you think of yourself as closer to the Republican Party or to the Democratic Party?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201230 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 19 | 0\.003 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 4893 | 0\.657 |
| 1 | 1. Closer to Republican | 782 | 0\.105 |
| 2 | 2. Neither {VOL in video and phone} | 876 | 0\.118 |
| 3 | 3. Closer to Democratic | 881 | 0\.118 |
| Total | * | 7453 | 1\.000 |
#### V201231x
Description: PRE: SUMMARY: Party ID
Question text: Derived from V201228, V201229, and PTYID\_LEANPTY
Variable class: haven\_labelled, vctrs\_vctr, double
| V201231x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 23 | 0\.003 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| 1 | 1. Strong Democrat | 1796 | 0\.241 |
| 2 | 2. Not very strong Democrat | 790 | 0\.106 |
| 3 | 3. Independent\-Democrat | 881 | 0\.118 |
| 4 | 4. Independent | 876 | 0\.118 |
| 5 | 5. Independent\-Republican | 782 | 0\.105 |
| 6 | 6. Not very strong Republican | 758 | 0\.102 |
| 7 | 7. Strong Republican | 1545 | 0\.207 |
| Total | * | 7453 | 1\.000 |
#### PartyID
Description: PRE: SUMMARY: Party ID
Question text: Derived from V201228, V201229, and PTYID\_LEANPTY
Variable class: factor
| PartyID | n | Unweighted Freq |
| --- | --- | --- |
| Strong democrat | 1796 | 0\.241 |
| Not very strong democrat | 790 | 0\.106 |
| Independent\-democrat | 881 | 0\.118 |
| Independent | 876 | 0\.118 |
| Independent\-republican | 782 | 0\.105 |
| Not very strong republican | 758 | 0\.102 |
| Strong republican | 1545 | 0\.207 |
| NA | 25 | 0\.003 |
| Total | 7453 | 1\.000 |
#### V201233
Description: PRE: How often trust government in Washington to do what is right \[revised]
Question text: How often can you trust the federal government in Washington to do what is right?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201233 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 26 | 0\.003 |
| \-8 | \-8\. Don’t know | 3 | 0\.000 |
| 1 | 1. Always | 80 | 0\.011 |
| 2 | 2. Most of the time | 1016 | 0\.136 |
| 3 | 3. About half the time | 2313 | 0\.310 |
| 4 | 4. Some of the time | 3313 | 0\.445 |
| 5 | 5. Never | 702 | 0\.094 |
| Total | * | 7453 | 1\.000 |
#### TrustGovernment
Description: PRE: How often trust government in Washington to do what is right \[revised]
Question text: How often can you trust the federal government in Washington to do what is right?
Variable class: factor
| TrustGovernment | n | Unweighted Freq |
| --- | --- | --- |
| Always | 80 | 0\.011 |
| Most of the time | 1016 | 0\.136 |
| About half the time | 2313 | 0\.310 |
| Some of the time | 3313 | 0\.445 |
| Never | 702 | 0\.094 |
| NA | 29 | 0\.004 |
| Total | 7453 | 1\.000 |
#### V201237
Description: PRE: How often can people be trusted
Question text: Generally speaking, how often can you trust other people?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201237 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 12 | 0\.002 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| 1 | 1. Always | 48 | 0\.006 |
| 2 | 2. Most of the time | 3511 | 0\.471 |
| 3 | 3. About half the time | 2020 | 0\.271 |
| 4 | 4. Some of the time | 1597 | 0\.214 |
| 5 | 5. Never | 264 | 0\.035 |
| Total | * | 7453 | 1\.000 |
#### TrustPeople
Description: PRE: How often can people be trusted
Question text: Generally speaking, how often can you trust other people?
Variable class: factor
| TrustPeople | n | Unweighted Freq |
| --- | --- | --- |
| Always | 48 | 0\.006 |
| Most of the time | 3511 | 0\.471 |
| About half the time | 2020 | 0\.271 |
| Some of the time | 1597 | 0\.214 |
| Never | 264 | 0\.035 |
| NA | 13 | 0\.002 |
| Total | 7453 | 1\.000 |
#### V201507x
Description: PRE: SUMMARY: Respondent age
Question text: Derived from birth month, day and year
Variable class: haven\_labelled, vctrs\_vctr, double
| N Missing | N Refused (\-9\) | Minimum | Median | Maximum |
| --- | --- | --- | --- | --- |
| 0 | 294 | 18 | 53 | 80 |
#### Age
Description: PRE: SUMMARY: Respondent age
Question text: Derived from birth month, day and year
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 294 | 18 | 53 | 80 |
#### AgeGroup
Description: PRE: SUMMARY: Respondent age
Question text: Derived from birth month, day and year
Variable class: factor
| AgeGroup | n | Unweighted Freq |
| --- | --- | --- |
| 18\-29 | 871 | 0\.117 |
| 30\-39 | 1241 | 0\.167 |
| 40\-49 | 1081 | 0\.145 |
| 50\-59 | 1200 | 0\.161 |
| 60\-69 | 1436 | 0\.193 |
| 70 or older | 1330 | 0\.178 |
| NA | 294 | 0\.039 |
| Total | 7453 | 1\.000 |
#### V201510
Description: PRE: Highest level of Education
Question text: What is the highest level of school you have completed or the highest degree you have received?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201510 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 25 | 0\.003 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| 1 | 1. Less than high school credential | 312 | 0\.042 |
| 2 | 2. High school graduate \- High school diploma or equivalent (e.g. GED) | 1160 | 0\.156 |
| 3 | 3. Some college but no degree | 1519 | 0\.204 |
| 4 | 4. Associate degree in college \- occupational/vocational | 550 | 0\.074 |
| 5 | 5. Associate degree in college \- academic | 445 | 0\.060 |
| 6 | 6. Bachelor’s degree (e.g. BA, AB, BS) | 1877 | 0\.252 |
| 7 | 7. Master’s degree (e.g. MA, MS, MEng, MEd, MSW, MBA) | 1092 | 0\.147 |
| 8 | 8. Professional school degree (e.g. MD, DDS, DVM, LLB, JD)/Doctoral degree (e.g. PHD, EDD) | 382 | 0\.051 |
| 95 | 95. Other {SPECIFY} | 90 | 0\.012 |
| Total | * | 7453 | 1\.000 |
#### Education
Description: PRE: Highest level of Education
Question text: What is the highest level of school you have completed or the highest degree you have received?
Variable class: factor
| Education | n | Unweighted Freq |
| --- | --- | --- |
| Less than HS | 312 | 0\.042 |
| High school | 1160 | 0\.156 |
| Post HS | 2514 | 0\.337 |
| Bachelor’s | 1877 | 0\.252 |
| Graduate | 1474 | 0\.198 |
| NA | 116 | 0\.016 |
| Total | 7453 | 1\.000 |
#### V201546
Description: PRE: R: Are you Spanish, Hispanic, or Latino
Question text: Are you of Hispanic, Latino, or Spanish origin?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201546 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 45 | 0\.006 |
| \-8 | \-8\. Don’t know | 3 | 0\.000 |
| 1 | 1. Yes | 662 | 0\.089 |
| 2 | 2. No | 6743 | 0\.905 |
| Total | * | 7453 | 1\.000 |
#### V201547a
Description: RESTRICTED: PRE: Race of R: White \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you White?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547a | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547b
Description: RESTRICTED: PRE: Race of R: Black or African\-American \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you Black or African American?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547b | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547c
Description: RESTRICTED: PRE: Race of R: Asian \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you Asian?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547c | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547d
Description: RESTRICTED: PRE: Race of R: Native Hawaiian or Pacific Islander \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you White; Black or African American; American Indian or Alaska Native; Asian; or Native Hawaiian or Other Pacific Islander?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547d | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547e
Description: RESTRICTED: PRE: Race of R: Native American or Alaska Native \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you American Indian or Alaska Native?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547e | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547z
Description: RESTRICTED: PRE: Race of R: other specify
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Reported other
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547z | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201549x
Description: PRE: SUMMARY: R self\-identified race/ethnicity
Question text: Derived from V201546, V201547a\-V201547e, and V201547z
Variable class: haven\_labelled, vctrs\_vctr, double
| V201549x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 75 | 0\.010 |
| \-8 | \-8\. Don’t know | 6 | 0\.001 |
| 1 | 1. White, non\-Hispanic | 5420 | 0\.727 |
| 2 | 2. Black, non\-Hispanic | 650 | 0\.087 |
| 3 | 3. Hispanic | 662 | 0\.089 |
| 4 | 4. Asian or Native Hawaiian/other Pacific Islander, non\-Hispanic alone | 248 | 0\.033 |
| 5 | 5. Native American/Alaska Native or other race, non\-Hispanic alone | 155 | 0\.021 |
| 6 | 6. Multiple races, non\-Hispanic | 237 | 0\.032 |
| Total | * | 7453 | 1\.000 |
#### RaceEth
Description: PRE: SUMMARY: R self\-identified race/ethnicity
Question text: Derived from V201546, V201547a\-V201547e, and V201547z
Variable class: factor
| RaceEth | n | Unweighted Freq |
| --- | --- | --- |
| White | 5420 | 0\.727 |
| Black | 650 | 0\.087 |
| Hispanic | 662 | 0\.089 |
| Asian, NH/PI | 248 | 0\.033 |
| AI/AN | 155 | 0\.021 |
| Other/multiple race | 237 | 0\.032 |
| NA | 81 | 0\.011 |
| Total | 7453 | 1\.000 |
#### V201600
Description: PRE: What is your (R) sex? \[revised]
Question text: What is your sex?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201600 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 51 | 0\.007 |
| 1 | 1. Male | 3375 | 0\.453 |
| 2 | 2. Female | 4027 | 0\.540 |
| Total | * | 7453 | 1\.000 |
#### Gender
Description: PRE: What is your (R) sex? \[revised]
Question text: What is your sex?
Variable class: factor
| Gender | n | Unweighted Freq |
| --- | --- | --- |
| Male | 3375 | 0\.453 |
| Female | 4027 | 0\.540 |
| NA | 51 | 0\.007 |
| Total | 7453 | 1\.000 |
#### V201607
Description: RESTRICTED: PRE: Total income amount \- revised
Question text: The next question is about \[the total combined income of all members of your family / your total income] during the past 12 months. This includes money from jobs, net income from business, farm or rent, pensions, dividends, interest, Social Security payments, and any other money income received by members of your family who are 15 years of age or older. What was the total income of your family during the past 12 months? TYPE THE NUMBER. YOUR BEST GUESS IS FINE.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201607 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201610
Description: RESTRICTED: PRE: Income amt missing \- categories lt 20K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201610 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201611
Description: RESTRICTED: PRE: Income amt missing \- categories 20\-40K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201611 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201613
Description: RESTRICTED: PRE: Income amt missing \- categories 40\-70K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201613 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201615
Description: RESTRICTED: PRE: Income amt missing \- categories 70\-100K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201615 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201616
Description: RESTRICTED: PRE: Income amt missing \- categories 100\+K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201616 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201617x
Description: PRE: SUMMARY: Total (family) income
Question text: Derived from V201607, V201610, V201611, V201613, V201615, V201616
Variable class: haven\_labelled, vctrs\_vctr, double
| V201617x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 502 | 0\.067 |
| \-5 | \-5\. Interview breakoff (sufficient partial IW) | 15 | 0\.002 |
| 1 | 1. Under $9,999 | 647 | 0\.087 |
| 2 | 2. $10,000\-14,999 | 244 | 0\.033 |
| 3 | 3. $15,000\-19,999 | 185 | 0\.025 |
| 4 | 4. $20,000\-24,999 | 301 | 0\.040 |
| 5 | 5. $25,000\-29,999 | 228 | 0\.031 |
| 6 | 6. $30,000\-34,999 | 296 | 0\.040 |
| 7 | 7. $35,000\-39,999 | 226 | 0\.030 |
| 8 | 8. $40,000\-44,999 | 286 | 0\.038 |
| 9 | 9. $45,000\-49,999 | 213 | 0\.029 |
| 10 | 10. $50,000\-59,999 | 485 | 0\.065 |
| 11 | 11. $60,000\-64,999 | 294 | 0\.039 |
| 12 | 12. $65,000\-69,999 | 168 | 0\.023 |
| 13 | 13. $70,000\-74,999 | 243 | 0\.033 |
| 14 | 14. $75,000\-79,999 | 215 | 0\.029 |
| 15 | 15. $80,000\-89,999 | 383 | 0\.051 |
| 16 | 16. $90,000\-99,999 | 291 | 0\.039 |
| 17 | 17. $100,000\-109,999 | 451 | 0\.061 |
| 18 | 18. $110,000\-124,999 | 312 | 0\.042 |
| 19 | 19. $125,000\-149,999 | 323 | 0\.043 |
| 20 | 20. $150,000\-174,999 | 366 | 0\.049 |
| 21 | 21. $175,000\-249,999 | 374 | 0\.050 |
| 22 | 22. $250,000 or more | 405 | 0\.054 |
| Total | * | 7453 | 1\.000 |
#### Income
Description: PRE: SUMMARY: Total (family) income
Question text: Derived from V201607, V201610, V201611, V201613, V201615, V201616
Variable class: factor
| Income | n | Unweighted Freq |
| --- | --- | --- |
| Under $9,999 | 647 | 0\.087 |
| $10,000\-14,999 | 244 | 0\.033 |
| $15,000\-19,999 | 185 | 0\.025 |
| $20,000\-24,999 | 301 | 0\.040 |
| $25,000\-29,999 | 228 | 0\.031 |
| $30,000\-34,999 | 296 | 0\.040 |
| $35,000\-39,999 | 226 | 0\.030 |
| $40,000\-44,999 | 286 | 0\.038 |
| $45,000\-49,999 | 213 | 0\.029 |
| $50,000\-59,999 | 485 | 0\.065 |
| $60,000\-64,999 | 294 | 0\.039 |
| $65,000\-69,999 | 168 | 0\.023 |
| $70,000\-74,999 | 243 | 0\.033 |
| $75,000\-79,999 | 215 | 0\.029 |
| $80,000\-89,999 | 383 | 0\.051 |
| $90,000\-99,999 | 291 | 0\.039 |
| $100,000\-109,999 | 451 | 0\.061 |
| $110,000\-124,999 | 312 | 0\.042 |
| $125,000\-149,999 | 323 | 0\.043 |
| $150,000\-174,999 | 366 | 0\.049 |
| $175,000\-249,999 | 374 | 0\.050 |
| $250,000 or more | 405 | 0\.054 |
| NA | 517 | 0\.069 |
| Total | 7453 | 1\.000 |
#### Income7
Description: PRE: SUMMARY: Total (family) income
Question text: Derived from V201607, V201610, V201611, V201613, V201615, V201616
Variable class: factor
| Income7 | n | Unweighted Freq |
| --- | --- | --- |
| Under $20k | 1076 | 0\.144 |
| $20k to \< 40k | 1051 | 0\.141 |
| $40k to \< 60k | 984 | 0\.132 |
| $60k to \< 80k | 920 | 0\.123 |
| $80k to \< 100k | 674 | 0\.090 |
| $100k to \< 125k | 763 | 0\.102 |
| $125k or more | 1468 | 0\.197 |
| NA | 517 | 0\.069 |
| Total | 7453 | 1\.000 |
B.4 POST\-ELECTION SURVEY QUESTIONNAIRE
---------------------------------------
#### V202051
Description: POST: R registered to vote (post\-election)
Question text: Now on a different topic. Are you registered to vote at \[Respondent’s preloaded address], registered at a different address, or not currently registered?
Variable class: haven\_labelled, vctrs\_vctr, double
| V202051 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 4 | 0\.001 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 6820 | 0\.915 |
| 1 | 1. Registered at this address | 173 | 0\.023 |
| 2 | 2. Registered at a different address | 59 | 0\.008 |
| 3 | 3. Not currently registered | 393 | 0\.053 |
| Total | * | 7453 | 1\.000 |
#### V202066
Description: POST: Did R vote in November 2020 election
Question text: In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. Which of the following statements best describes you:
Variable class: haven\_labelled, vctrs\_vctr, double
| V202066 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 7 | 0\.001 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 372 | 0\.050 |
| 1 | 1. I did not vote (in the election this November) | 582 | 0\.078 |
| 2 | 2. I thought about voting this time, but didn’t | 265 | 0\.036 |
| 3 | 3. I usually vote, but didn’t this time | 192 | 0\.026 |
| 4 | 4. I am sure I voted | 6031 | 0\.809 |
| Total | * | 7453 | 1\.000 |
#### V202072
Description: POST: Did R vote for President
Question text: How about the election for President? Did you vote for a candidate for President?
Variable class: haven\_labelled, vctrs\_vctr, double
| V202072 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 2 | 0\.000 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 1418 | 0\.190 |
| 1 | 1. Yes, voted for President | 5952 | 0\.799 |
| 2 | 2. No, didn’t vote for President | 77 | 0\.010 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2020
Description: POST: Did R vote for President
Question text: How about the election for President? Did you vote for a candidate for President?
Variable class: factor
| VotedPres2020 | n | Unweighted Freq |
| --- | --- | --- |
| Yes | 6313 | 0\.847 |
| No | 87 | 0\.012 |
| NA | 1053 | 0\.141 |
| Total | 7453 | 1\.000 |
#### V202073
Description: POST: For whom did R vote for President
Question text: Who did you vote for? \[Joe Biden, Donald Trump/Donald Trump, Joe Biden], Jo Jorgensen, Howie Hawkins, or someone else?
Variable class: haven\_labelled, vctrs\_vctr, double
| V202073 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 53 | 0\.007 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 1497 | 0\.201 |
| 1 | 1. Joe Biden | 3267 | 0\.438 |
| 2 | 2. Donald Trump | 2462 | 0\.330 |
| 3 | 3. Jo Jorgensen | 69 | 0\.009 |
| 4 | 4. Howie Hawkins | 23 | 0\.003 |
| 5 | 5. Other candidate {SPECIFY} | 56 | 0\.008 |
| 7 | 7. Specified as Republican candidate | 1 | 0\.000 |
| 8 | 8. Specified as Libertarian candidate | 3 | 0\.000 |
| 11 | 11. Specified as don’t know | 2 | 0\.000 |
| 12 | 12. Specified as refused | 16 | 0\.002 |
| Total | * | 7453 | 1\.000 |
#### V202109x
Description: PRE\-POST: SUMMARY: Voter turnout in 2020
Question text: Derived from V201024, V202066, V202051
Variable class: haven\_labelled, vctrs\_vctr, double
| V202109x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-2 | \-2\. Not reported | 7 | 0\.001 |
| 0 | 0. Did not vote | 1039 | 0\.139 |
| 1 | 1. Voted | 6407 | 0\.860 |
| Total | * | 7453 | 1\.000 |
#### V202110x
Description: PRE\-POST: SUMMARY: 2020 Presidential vote
Question text: Derived from V201029, V202073
Variable class: haven\_labelled, vctrs\_vctr, double
| V202110x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 81 | 0\.011 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 1136 | 0\.152 |
| 1 | 1. Joe Biden | 3509 | 0\.471 |
| 2 | 2. Donald Trump | 2567 | 0\.344 |
| 3 | 3. Jo Jorgensen | 74 | 0\.010 |
| 4 | 4. Howie Hawkins | 24 | 0\.003 |
| 5 | 5. Other candidate {SPECIFY} | 60 | 0\.008 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2020\_selection
Description: PRE\-POST: SUMMARY: 2020 Presidential vote
Question text: Derived from V201029, V202073
Variable class: factor
| VotedPres2020\_selection | n | Unweighted Freq |
| --- | --- | --- |
| Biden | 3509 | 0\.471 |
| Trump | 2567 | 0\.344 |
| Other | 158 | 0\.021 |
| NA | 1219 | 0\.164 |
| Total | 7453 | 1\.000 |
B.1 ADMIN
---------
#### V200001
Description: 2020 Case ID
Variable class: numeric
#### CaseID
Description: 2020 Case ID
Variable class: numeric
#### V200002
Description: Mode of interview: pre\-election interview
Variable class: haven\_labelled, vctrs\_vctr, double
| V200002 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| 1 | 1. Video | 274 | 0\.037 |
| 2 | 2. Telephone | 115 | 0\.015 |
| 3 | 3. Web | 7064 | 0\.948 |
| Total | * | 7453 | 1\.000 |
#### InterviewMode
Description: Mode of interview: pre\-election interview
Variable class: factor
| InterviewMode | n | Unweighted Freq |
| --- | --- | --- |
| Video | 274 | 0\.037 |
| Telephone | 115 | 0\.015 |
| Web | 7064 | 0\.948 |
| Total | 7453 | 1\.000 |
#### V200001
Description: 2020 Case ID
Variable class: numeric
#### CaseID
Description: 2020 Case ID
Variable class: numeric
#### V200002
Description: Mode of interview: pre\-election interview
Variable class: haven\_labelled, vctrs\_vctr, double
| V200002 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| 1 | 1. Video | 274 | 0\.037 |
| 2 | 2. Telephone | 115 | 0\.015 |
| 3 | 3. Web | 7064 | 0\.948 |
| Total | * | 7453 | 1\.000 |
#### InterviewMode
Description: Mode of interview: pre\-election interview
Variable class: factor
| InterviewMode | n | Unweighted Freq |
| --- | --- | --- |
| Video | 274 | 0\.037 |
| Telephone | 115 | 0\.015 |
| Web | 7064 | 0\.948 |
| Total | 7453 | 1\.000 |
B.2 WEIGHTS
-----------
#### V200010b
Description: Full sample post\-election weight
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0\.0083 | 0\.6863 | 6\.651 |
#### Weight
Description: Full sample post\-election weight
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0\.0083 | 0\.6863 | 6\.651 |
#### V200010c
Description: Full sample variance unit
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1 | 2 | 3 |
#### VarUnit
Description: Full sample variance unit
Variable class: factor
| VarUnit | n | Unweighted Freq |
| --- | --- | --- |
| 1 | 3689 | 0\.495 |
| 2 | 3750 | 0\.503 |
| 3 | 14 | 0\.002 |
| Total | 7453 | 1\.000 |
#### V200010d
Description: Full sample variance stratum
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1 | 24 | 50 |
#### Stratum
Description: Full sample variance stratum
Variable class: factor
| Stratum | n | Unweighted Freq |
| --- | --- | --- |
| 1 | 167 | 0\.022 |
| 2 | 148 | 0\.020 |
| 3 | 158 | 0\.021 |
| 4 | 151 | 0\.020 |
| 5 | 147 | 0\.020 |
| 6 | 172 | 0\.023 |
| 7 | 163 | 0\.022 |
| 8 | 159 | 0\.021 |
| 9 | 160 | 0\.021 |
| 10 | 159 | 0\.021 |
| 11 | 137 | 0\.018 |
| 12 | 179 | 0\.024 |
| 13 | 148 | 0\.020 |
| 14 | 160 | 0\.021 |
| 15 | 159 | 0\.021 |
| 16 | 148 | 0\.020 |
| 17 | 158 | 0\.021 |
| 18 | 156 | 0\.021 |
| 19 | 154 | 0\.021 |
| 20 | 144 | 0\.019 |
| 21 | 170 | 0\.023 |
| 22 | 146 | 0\.020 |
| 23 | 165 | 0\.022 |
| 24 | 147 | 0\.020 |
| 25 | 169 | 0\.023 |
| 26 | 165 | 0\.022 |
| 27 | 172 | 0\.023 |
| 28 | 133 | 0\.018 |
| 29 | 157 | 0\.021 |
| 30 | 167 | 0\.022 |
| 31 | 154 | 0\.021 |
| 32 | 143 | 0\.019 |
| 33 | 143 | 0\.019 |
| 34 | 124 | 0\.017 |
| 35 | 138 | 0\.019 |
| 36 | 130 | 0\.017 |
| 37 | 136 | 0\.018 |
| 38 | 145 | 0\.019 |
| 39 | 140 | 0\.019 |
| 40 | 125 | 0\.017 |
| 41 | 158 | 0\.021 |
| 42 | 146 | 0\.020 |
| 43 | 130 | 0\.017 |
| 44 | 126 | 0\.017 |
| 45 | 126 | 0\.017 |
| 46 | 135 | 0\.018 |
| 47 | 133 | 0\.018 |
| 48 | 140 | 0\.019 |
| 49 | 133 | 0\.018 |
| 50 | 130 | 0\.017 |
| Total | 7453 | 1\.000 |
#### V200010b
Description: Full sample post\-election weight
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0\.0083 | 0\.6863 | 6\.651 |
#### Weight
Description: Full sample post\-election weight
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0\.0083 | 0\.6863 | 6\.651 |
#### V200010c
Description: Full sample variance unit
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1 | 2 | 3 |
#### VarUnit
Description: Full sample variance unit
Variable class: factor
| VarUnit | n | Unweighted Freq |
| --- | --- | --- |
| 1 | 3689 | 0\.495 |
| 2 | 3750 | 0\.503 |
| 3 | 14 | 0\.002 |
| Total | 7453 | 1\.000 |
#### V200010d
Description: Full sample variance stratum
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1 | 24 | 50 |
#### Stratum
Description: Full sample variance stratum
Variable class: factor
| Stratum | n | Unweighted Freq |
| --- | --- | --- |
| 1 | 167 | 0\.022 |
| 2 | 148 | 0\.020 |
| 3 | 158 | 0\.021 |
| 4 | 151 | 0\.020 |
| 5 | 147 | 0\.020 |
| 6 | 172 | 0\.023 |
| 7 | 163 | 0\.022 |
| 8 | 159 | 0\.021 |
| 9 | 160 | 0\.021 |
| 10 | 159 | 0\.021 |
| 11 | 137 | 0\.018 |
| 12 | 179 | 0\.024 |
| 13 | 148 | 0\.020 |
| 14 | 160 | 0\.021 |
| 15 | 159 | 0\.021 |
| 16 | 148 | 0\.020 |
| 17 | 158 | 0\.021 |
| 18 | 156 | 0\.021 |
| 19 | 154 | 0\.021 |
| 20 | 144 | 0\.019 |
| 21 | 170 | 0\.023 |
| 22 | 146 | 0\.020 |
| 23 | 165 | 0\.022 |
| 24 | 147 | 0\.020 |
| 25 | 169 | 0\.023 |
| 26 | 165 | 0\.022 |
| 27 | 172 | 0\.023 |
| 28 | 133 | 0\.018 |
| 29 | 157 | 0\.021 |
| 30 | 167 | 0\.022 |
| 31 | 154 | 0\.021 |
| 32 | 143 | 0\.019 |
| 33 | 143 | 0\.019 |
| 34 | 124 | 0\.017 |
| 35 | 138 | 0\.019 |
| 36 | 130 | 0\.017 |
| 37 | 136 | 0\.018 |
| 38 | 145 | 0\.019 |
| 39 | 140 | 0\.019 |
| 40 | 125 | 0\.017 |
| 41 | 158 | 0\.021 |
| 42 | 146 | 0\.020 |
| 43 | 130 | 0\.017 |
| 44 | 126 | 0\.017 |
| 45 | 126 | 0\.017 |
| 46 | 135 | 0\.018 |
| 47 | 133 | 0\.018 |
| 48 | 140 | 0\.019 |
| 49 | 133 | 0\.018 |
| 50 | 130 | 0\.017 |
| Total | 7453 | 1\.000 |
B.3 PRE\-ELECTION SURVEY QUESTIONNAIRE
--------------------------------------
#### V201006
Description: PRE: How interested in following campaigns
Question text: Some people don’t pay much attention to political campaigns. How about you? Would you say that you have been very much interested, somewhat interested or not much interested in the political campaigns so far this year?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201006 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 1 | 0\.000 |
| 1 | 1. Very much interested | 3940 | 0\.529 |
| 2 | 2. Somewhat interested | 2569 | 0\.345 |
| 3 | 3. Not much interested | 943 | 0\.127 |
| Total | * | 7453 | 1\.000 |
#### CampaignInterest
Description: PRE: How interested in following campaigns
Question text: Some people don’t pay much attention to political campaigns. How about you? Would you say that you have been very much interested, somewhat interested or not much interested in the political campaigns so far this year?
Variable class: factor
| CampaignInterest | n | Unweighted Freq |
| --- | --- | --- |
| Very much interested | 3940 | 0\.529 |
| Somewhat interested | 2569 | 0\.345 |
| Not much interested | 943 | 0\.127 |
| NA | 1 | 0\.000 |
| Total | 7453 | 1\.000 |
#### V201023
Description: PRE: Confirmation voted (early) in November 3 Election (2020\)
Question text: Just to be clear, I’m recording that you already voted in the election that is scheduled to take place on November 3\. Is that right?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201023 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 6961 | 0\.934 |
| 1 | 1. Yes, voted | 375 | 0\.050 |
| 2 | 2. No, have not voted | 115 | 0\.015 |
| Total | * | 7453 | 1\.000 |
#### EarlyVote2020
Description: PRE: Confirmation voted (early) in November 3 Election (2020\)
Question text: Just to be clear, I’m recording that you already voted in the election that is scheduled to take place on November 3\. Is that right?
Variable class: factor
| EarlyVote2020 | n | Unweighted Freq |
| --- | --- | --- |
| Yes | 375 | 0\.050 |
| No | 115 | 0\.015 |
| NA | 6963 | 0\.934 |
| Total | 7453 | 1\.000 |
#### V201024
Description: PRE: In what manner did R vote
Question text: Which one of the following best describes how you voted?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201024 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 7078 | 0\.950 |
| 1 | 1. Definitely voted in person at a polling place before election day | 101 | 0\.014 |
| 2 | 2. Definitely voted by mailing a ballot to elections officials before election day | 242 | 0\.032 |
| 3 | 3. Definitely voted in some other way | 28 | 0\.004 |
| 4 | 4. Not completely sure whether you voted or not | 3 | 0\.000 |
| Total | * | 7453 | 1\.000 |
#### V201025x
Description: PRE: SUMMARY: Registration and early vote status
Variable class: haven\_labelled, vctrs\_vctr, double
| V201025x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-4 | \-4\. Technical error | 1 | 0\.000 |
| 1 | 1. Not registered (or DK/RF), does not intend to register (or DK/RF intent) | 339 | 0\.045 |
| 2 | 2. Not registered (or DK/RF), intends to register | 290 | 0\.039 |
| 3 | 3. Registered but did not vote early (or DK/RF) | 6452 | 0\.866 |
| 4 | 4. Registered and voted early | 371 | 0\.050 |
| Total | * | 7453 | 1\.000 |
#### V201028
Description: PRE: DID R VOTE FOR PRESIDENT
Question text: How about the election for President? Did you vote for a candidate for President?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201028 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 7081 | 0\.950 |
| 1 | 1. Yes, voted for President | 361 | 0\.048 |
| 2 | 2. No, didn’t vote for President | 10 | 0\.001 |
| Total | * | 7453 | 1\.000 |
#### V201029
Description: PRE: For whom did R vote for President
Question text: Who did you vote for? \[Joe Biden, Donald Trump/Donald Trump, Joe Biden], Jo Jorgensen, Howie Hawkins, or someone else?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201029 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 10 | 0\.001 |
| \-1 | \-1\. Inapplicable | 7092 | 0\.952 |
| 1 | 1. Joe Biden | 239 | 0\.032 |
| 2 | 2. Donald Trump | 103 | 0\.014 |
| 3 | 3. Jo Jorgensen | 2 | 0\.000 |
| 4 | 4. Howie Hawkins | 1 | 0\.000 |
| 5 | 5. Other candidate {SPECIFY} | 4 | 0\.001 |
| 12 | 12. Specified as refused | 2 | 0\.000 |
| Total | * | 7453 | 1\.000 |
#### V201101
Description: PRE: Did R vote for President in 2016 \[revised]
Question text: Four years ago, in 2016, Hillary Clinton ran on the Democratic ticket against Donald Trump for the Republicans. We talk to many people who tell us they did not vote. And we talk to a few people who tell us they did vote, who really did not. We can tell they did not vote by checking with official government records. What about you? If we check the official government voter records, will they show that you voted in the 2016 presidential election, or that you did not vote in that election?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201101 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 13 | 0\.002 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 3780 | 0\.507 |
| 1 | 1. Yes, voted | 2780 | 0\.373 |
| 2 | 2. No, didn’t vote | 879 | 0\.118 |
| Total | * | 7453 | 1\.000 |
#### V201102
Description: PRE: Did R vote for President in 2016
Question text: Four years ago, in 2016, Hillary Clinton ran on the Democratic ticket against Donald Trump for the Republicans. Do you remember for sure whether or not you voted in that election?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201102 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 6 | 0\.001 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| \-1 | \-1\. Inapplicable | 3673 | 0\.493 |
| 1 | 1. Yes, voted | 3030 | 0\.407 |
| 2 | 2. No, didn’t vote | 743 | 0\.100 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2016
Description: PRE: Did R vote for President in 2016
Question text: Derived from V201102, V201101
Variable class: factor
| VotedPres2016 | n | Unweighted Freq |
| --- | --- | --- |
| Yes | 5810 | 0\.780 |
| No | 1622 | 0\.218 |
| NA | 21 | 0\.003 |
| Total | 7453 | 1\.000 |
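Many of the derived variables in this codebook, like `VotedPres2016` above, collapse one or more raw ANES items into an analysis-ready factor. As a purely illustrative sketch (not the book's actual recoding code), the following shows one way such a factor could be built from `V201101` and `V201102`; it is written to be consistent with the unweighted counts in the table above, and the name `VotedPres2016_check` is hypothetical.

```r
library(dplyr)
library(srvyrexploR) # assumed source of the anes_2020 data documented here

data(anes_2020)

# Hypothetical recode: combine the two 2016 turnout items into a Yes/No factor.
# Each respondent was asked only one of V201101 or V201102, so at most one applies.
anes_check <- anes_2020 %>%
  mutate(
    VotedPres2016_check = factor(
      case_when(
        V201101 == 1 | V201102 == 1 ~ "Yes",
        V201101 == 2 | V201102 == 2 ~ "No",
        TRUE ~ NA_character_
      ),
      levels = c("Yes", "No")
    )
  )

# Should match the unweighted counts shown above (5810 Yes, 1622 No, 21 NA)
count(anes_check, VotedPres2016_check)
```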
#### V201103
Description: PRE: Recall of last (2016\) Presidential vote choice
Question text: Which one did you vote for?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201103 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 41 | 0\.006 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 1643 | 0\.220 |
| 1 | 1. Hillary Clinton | 2911 | 0\.391 |
| 2 | 2. Donald Trump | 2466 | 0\.331 |
| 5 | 5. Other {SPECIFY} | 390 | 0\.052 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2016\_selection
Description: PRE: Recall of last (2016\) Presidential vote choice
Question text: Which one did you vote for?
Variable class: factor
| VotedPres2016\_selection | n | Unweighted Freq |
| --- | --- | --- |
| Clinton | 2911 | 0\.391 |
| Trump | 2466 | 0\.331 |
| Other | 390 | 0\.052 |
| NA | 1686 | 0\.226 |
| Total | 7453 | 1\.000 |
#### V201228
Description: PRE: Party ID: Does R think of self as Democrat, Republican, or Independent
Question text: Generally speaking, do you usually think of yourself as \[a Democrat, a Republican / a Republican, a Democrat], an independent, or what?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201228 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 37 | 0\.005 |
| \-8 | \-8\. Don’t know | 4 | 0\.001 |
| \-4 | \-4\. Technical error | 1 | 0\.000 |
| 0 | 0. No preference {VOL \- video/phone only} | 6 | 0\.001 |
| 1 | 1. Democrat | 2589 | 0\.347 |
| 2 | 2. Republican | 2304 | 0\.309 |
| 3 | 3. Independent | 2277 | 0\.306 |
| 5 | 5. Other party {SPECIFY} | 235 | 0\.032 |
| Total | * | 7453 | 1\.000 |
#### V201229
Description: PRE: Party Identification strong \- Democrat Republican
Question text: Would you call yourself a strong \[Democrat / Republican] or a not very strong \[Democrat / Republican]?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201229 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 2560 | 0\.343 |
| 1 | 1. Strong | 3341 | 0\.448 |
| 2 | 2. Not very strong | 1548 | 0\.208 |
| Total | * | 7453 | 1\.000 |
#### V201230
Description: PRE: No Party Identification \- closer to Democratic Party or Republican Party
Question text: Do you think of yourself as closer to the Republican Party or to the Democratic Party?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201230 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 19 | 0\.003 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 4893 | 0\.657 |
| 1 | 1. Closer to Republican | 782 | 0\.105 |
| 2 | 2. Neither {VOL in video and phone} | 876 | 0\.118 |
| 3 | 3. Closer to Democratic | 881 | 0\.118 |
| Total | * | 7453 | 1\.000 |
#### V201231x
Description: PRE: SUMMARY: Party ID
Question text: Derived from V201228, V201229, and PTYID\_LEANPTY
Variable class: haven\_labelled, vctrs\_vctr, double
| V201231x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 23 | 0\.003 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| 1 | 1. Strong Democrat | 1796 | 0\.241 |
| 2 | 2. Not very strong Democrat | 790 | 0\.106 |
| 3 | 3. Independent\-Democrat | 881 | 0\.118 |
| 4 | 4. Independent | 876 | 0\.118 |
| 5 | 5. Independent\-Republican | 782 | 0\.105 |
| 6 | 6. Not very strong Republican | 758 | 0\.102 |
| 7 | 7. Strong Republican | 1545 | 0\.207 |
| Total | * | 7453 | 1\.000 |
#### PartyID
Description: PRE: SUMMARY: Party ID
Question text: Derived from V201228, V201229, and PTYID\_LEANPTY
Variable class: factor
| PartyID | n | Unweighted Freq |
| --- | --- | --- |
| Strong democrat | 1796 | 0\.241 |
| Not very strong democrat | 790 | 0\.106 |
| Independent\-democrat | 881 | 0\.118 |
| Independent | 876 | 0\.118 |
| Independent\-republican | 782 | 0\.105 |
| Not very strong republican | 758 | 0\.102 |
| Strong republican | 1545 | 0\.207 |
| NA | 25 | 0\.003 |
| Total | 7453 | 1\.000 |
#### V201233
Description: PRE: How often trust government in Washington to do what is right \[revised]
Question text: How often can you trust the federal government in Washington to do what is right?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201233 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 26 | 0\.003 |
| \-8 | \-8\. Don’t know | 3 | 0\.000 |
| 1 | 1. Always | 80 | 0\.011 |
| 2 | 2. Most of the time | 1016 | 0\.136 |
| 3 | 3. About half the time | 2313 | 0\.310 |
| 4 | 4. Some of the time | 3313 | 0\.445 |
| 5 | 5. Never | 702 | 0\.094 |
| Total | * | 7453 | 1\.000 |
#### TrustGovernment
Description: PRE: How often trust government in Washington to do what is right \[revised]
Question text: How often can you trust the federal government in Washington to do what is right?
Variable class: factor
| TrustGovernment | n | Unweighted Freq |
| --- | --- | --- |
| Always | 80 | 0\.011 |
| Most of the time | 1016 | 0\.136 |
| About half the time | 2313 | 0\.310 |
| Some of the time | 3313 | 0\.445 |
| Never | 702 | 0\.094 |
| NA | 29 | 0\.004 |
| Total | 7453 | 1\.000 |
#### V201237
Description: PRE: How often can people be trusted
Question text: Generally speaking, how often can you trust other people?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201237 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 12 | 0\.002 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| 1 | 1. Always | 48 | 0\.006 |
| 2 | 2. Most of the time | 3511 | 0\.471 |
| 3 | 3. About half the time | 2020 | 0\.271 |
| 4 | 4. Some of the time | 1597 | 0\.214 |
| 5 | 5. Never | 264 | 0\.035 |
| Total | * | 7453 | 1\.000 |
#### TrustPeople
Description: PRE: How often can people be trusted
Question text: Generally speaking, how often can you trust other people?
Variable class: factor
| TrustPeople | n | Unweighted Freq |
| --- | --- | --- |
| Always | 48 | 0\.006 |
| Most of the time | 3511 | 0\.471 |
| About half the time | 2020 | 0\.271 |
| Some of the time | 1597 | 0\.214 |
| Never | 264 | 0\.035 |
| NA | 13 | 0\.002 |
| Total | 7453 | 1\.000 |
#### V201507x
Description: PRE: SUMMARY: Respondent age
Question text: Derived from birth month, day and year
Variable class: haven\_labelled, vctrs\_vctr, double
| N Missing | N Refused (\-9\) | Minimum | Median | Maximum |
| --- | --- | --- | --- | --- |
| 0 | 294 | 18 | 53 | 80 |
#### Age
Description: PRE: SUMMARY: Respondent age
Question text: Derived from birth month, day and year
Variable class: numeric
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 294 | 18 | 53 | 80 |
#### AgeGroup
Description: PRE: SUMMARY: Respondent age
Question text: Derived from birth month, day and year
Variable class: factor
| AgeGroup | n | Unweighted Freq |
| --- | --- | --- |
| 18\-29 | 871 | 0\.117 |
| 30\-39 | 1241 | 0\.167 |
| 40\-49 | 1081 | 0\.145 |
| 50\-59 | 1200 | 0\.161 |
| 60\-69 | 1436 | 0\.193 |
| 70 or older | 1330 | 0\.178 |
| NA | 294 | 0\.039 |
| Total | 7453 | 1\.000 |
#### V201510
Description: PRE: Highest level of Education
Question text: What is the highest level of school you have completed or the highest degree you have received?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201510 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 25 | 0\.003 |
| \-8 | \-8\. Don’t know | 1 | 0\.000 |
| 1 | 1. Less than high school credential | 312 | 0\.042 |
| 2 | 2. High school graduate \- High school diploma or equivalent (e.g. GED) | 1160 | 0\.156 |
| 3 | 3. Some college but no degree | 1519 | 0\.204 |
| 4 | 4. Associate degree in college \- occupational/vocational | 550 | 0\.074 |
| 5 | 5. Associate degree in college \- academic | 445 | 0\.060 |
| 6 | 6. Bachelor’s degree (e.g. BA, AB, BS) | 1877 | 0\.252 |
| 7 | 7. Master’s degree (e.g. MA, MS, MEng, MEd, MSW, MBA) | 1092 | 0\.147 |
| 8 | 8. Professional school degree (e.g. MD, DDS, DVM, LLB, JD)/Doctoral degree (e.g. PHD, EDD) | 382 | 0\.051 |
| 95 | 95. Other {SPECIFY} | 90 | 0\.012 |
| Total | * | 7453 | 1\.000 |
#### Education
Description: PRE: Highest level of Education
Question text: What is the highest level of school you have completed or the highest degree you have received?
Variable class: factor
| Education | n | Unweighted Freq |
| --- | --- | --- |
| Less than HS | 312 | 0\.042 |
| High school | 1160 | 0\.156 |
| Post HS | 2514 | 0\.337 |
| Bachelor’s | 1877 | 0\.252 |
| Graduate | 1474 | 0\.198 |
| NA | 116 | 0\.016 |
| Total | 7453 | 1\.000 |
#### V201546
Description: PRE: R: Are you Spanish, Hispanic, or Latino
Question text: Are you of Hispanic, Latino, or Spanish origin?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201546 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 45 | 0\.006 |
| \-8 | \-8\. Don’t know | 3 | 0\.000 |
| 1 | 1. Yes | 662 | 0\.089 |
| 2 | 2. No | 6743 | 0\.905 |
| Total | * | 7453 | 1\.000 |
#### V201547a
Description: RESTRICTED: PRE: Race of R: White \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you White?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547a | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547b
Description: RESTRICTED: PRE: Race of R: Black or African\-American \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you Black or African American?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547b | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547c
Description: RESTRICTED: PRE: Race of R: Asian \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you Asian?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547c | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547d
Description: RESTRICTED: PRE: Race of R: Native Hawaiian or Pacific Islander \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you White; Black or African American; American Indian or Alaska Native; Asian; or Native Hawaiian or Other Pacific Islander?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547d | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547e
Description: RESTRICTED: PRE: Race of R: Native American or Alaska Native \[mention]
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Are you American Indian or Alaska Native?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547e | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201547z
Description: RESTRICTED: PRE: Race of R: other specify
Question text: I am going to read you a list of five race categories. You may choose one or more races. For this survey, Hispanic origin is not a race. Reported other
Variable class: haven\_labelled, vctrs\_vctr, double
| V201547z | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201549x
Description: PRE: SUMMARY: R self\-identified race/ethnicity
Question text: Derived from V201546, V201547a\-V201547e, and V201547z
Variable class: haven\_labelled, vctrs\_vctr, double
| V201549x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 75 | 0\.010 |
| \-8 | \-8\. Don’t know | 6 | 0\.001 |
| 1 | 1. White, non\-Hispanic | 5420 | 0\.727 |
| 2 | 2. Black, non\-Hispanic | 650 | 0\.087 |
| 3 | 3. Hispanic | 662 | 0\.089 |
| 4 | 4. Asian or Native Hawaiian/other Pacific Islander, non\-Hispanic alone | 248 | 0\.033 |
| 5 | 5. Native American/Alaska Native or other race, non\-Hispanic alone | 155 | 0\.021 |
| 6 | 6. Multiple races, non\-Hispanic | 237 | 0\.032 |
| Total | * | 7453 | 1\.000 |
#### RaceEth
Description: PRE: SUMMARY: R self\-identified race/ethnicity
Question text: Derived from V201546, V201547a\-V201547e, and V201547z
Variable class: factor
| RaceEth | n | Unweighted Freq |
| --- | --- | --- |
| White | 5420 | 0\.727 |
| Black | 650 | 0\.087 |
| Hispanic | 662 | 0\.089 |
| Asian, NH/PI | 248 | 0\.033 |
| AI/AN | 155 | 0\.021 |
| Other/multiple race | 237 | 0\.032 |
| NA | 81 | 0\.011 |
| Total | 7453 | 1\.000 |
#### V201600
Description: PRE: What is your (R) sex? \[revised]
Question text: What is your sex?
Variable class: haven\_labelled, vctrs\_vctr, double
| V201600 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 51 | 0\.007 |
| 1 | 1. Male | 3375 | 0\.453 |
| 2 | 2. Female | 4027 | 0\.540 |
| Total | * | 7453 | 1\.000 |
#### Gender
Description: PRE: What is your (R) sex? \[revised]
Question text: What is your sex?
Variable class: factor
| Gender | n | Unweighted Freq |
| --- | --- | --- |
| Male | 3375 | 0\.453 |
| Female | 4027 | 0\.540 |
| NA | 51 | 0\.007 |
| Total | 7453 | 1\.000 |
#### V201607
Description: RESTRICTED: PRE: Total income amount \- revised
Question text: The next question is about \[the total combined income of all members of your family / your total income] during the past 12 months. This includes money from jobs, net income from business, farm or rent, pensions, dividends, interest, Social Security payments, and any other money income received by members of your family who are 15 years of age or older. What was the total income of your family during the past 12 months? TYPE THE NUMBER. YOUR BEST GUESS IS FINE.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201607 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201610
Description: RESTRICTED: PRE: Income amt missing \- categories lt 20K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201610 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201611
Description: RESTRICTED: PRE: Income amt missing \- categories 20\-40K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201611 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201613
Description: RESTRICTED: PRE: Income amt missing \- categories 40\-70K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201613 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201615
Description: RESTRICTED: PRE: Income amt missing \- categories 70\-100K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201615 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201616
Description: RESTRICTED: PRE: Income amt missing \- categories 100\+K
Question text: Please choose the answer that includes the income of all members of your family during the past 12 months before taxes.
Variable class: haven\_labelled, vctrs\_vctr, double
| V201616 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-3 | \-3\. Restricted | 7453 | 1 |
| Total | * | 7453 | 1 |
#### V201617x
Description: PRE: SUMMARY: Total (family) income
Question text: Derived from V201607, V201610, V201611, V201613, V201615, V201616
Variable class: haven\_labelled, vctrs\_vctr, double
| V201617x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 502 | 0\.067 |
| \-5 | \-5\. Interview breakoff (sufficient partial IW) | 15 | 0\.002 |
| 1 | 1. Under $9,999 | 647 | 0\.087 |
| 2 | 2. $10,000\-14,999 | 244 | 0\.033 |
| 3 | 3. $15,000\-19,999 | 185 | 0\.025 |
| 4 | 4. $20,000\-24,999 | 301 | 0\.040 |
| 5 | 5. $25,000\-29,999 | 228 | 0\.031 |
| 6 | 6. $30,000\-34,999 | 296 | 0\.040 |
| 7 | 7. $35,000\-39,999 | 226 | 0\.030 |
| 8 | 8. $40,000\-44,999 | 286 | 0\.038 |
| 9 | 9. $45,000\-49,999 | 213 | 0\.029 |
| 10 | 10. $50,000\-59,999 | 485 | 0\.065 |
| 11 | 11. $60,000\-64,999 | 294 | 0\.039 |
| 12 | 12. $65,000\-69,999 | 168 | 0\.023 |
| 13 | 13. $70,000\-74,999 | 243 | 0\.033 |
| 14 | 14. $75,000\-79,999 | 215 | 0\.029 |
| 15 | 15. $80,000\-89,999 | 383 | 0\.051 |
| 16 | 16. $90,000\-99,999 | 291 | 0\.039 |
| 17 | 17. $100,000\-109,999 | 451 | 0\.061 |
| 18 | 18. $110,000\-124,999 | 312 | 0\.042 |
| 19 | 19. $125,000\-149,999 | 323 | 0\.043 |
| 20 | 20. $150,000\-174,999 | 366 | 0\.049 |
| 21 | 21. $175,000\-249,999 | 374 | 0\.050 |
| 22 | 22. $250,000 or more | 405 | 0\.054 |
| Total | * | 7453 | 1\.000 |
#### Income
Description: PRE: SUMMARY: Total (family) income
Question text: Derived from V201607, V201610, V201611, V201613, V201615, V201616
Variable class: factor
| Income | n | Unweighted Freq |
| --- | --- | --- |
| Under $9,999 | 647 | 0\.087 |
| $10,000\-14,999 | 244 | 0\.033 |
| $15,000\-19,999 | 185 | 0\.025 |
| $20,000\-24,999 | 301 | 0\.040 |
| $25,000\-29,999 | 228 | 0\.031 |
| $30,000\-34,999 | 296 | 0\.040 |
| $35,000\-39,999 | 226 | 0\.030 |
| $40,000\-44,999 | 286 | 0\.038 |
| $45,000\-49,999 | 213 | 0\.029 |
| $50,000\-59,999 | 485 | 0\.065 |
| $60,000\-64,999 | 294 | 0\.039 |
| $65,000\-69,999 | 168 | 0\.023 |
| $70,000\-74,999 | 243 | 0\.033 |
| $75,000\-79,999 | 215 | 0\.029 |
| $80,000\-89,999 | 383 | 0\.051 |
| $90,000\-99,999 | 291 | 0\.039 |
| $100,000\-109,999 | 451 | 0\.061 |
| $110,000\-124,999 | 312 | 0\.042 |
| $125,000\-149,999 | 323 | 0\.043 |
| $150,000\-174,999 | 366 | 0\.049 |
| $175,000\-249,999 | 374 | 0\.050 |
| $250,000 or more | 405 | 0\.054 |
| NA | 517 | 0\.069 |
| Total | 7453 | 1\.000 |
#### Income7
Description: PRE: SUMMARY: Total (family) income
Question text: Derived from V201607, V201610, V201611, V201613, V201615, V201616
Variable class: factor
| Income7 | n | Unweighted Freq |
| --- | --- | --- |
| Under $20k | 1076 | 0\.144 |
| $20k to \< 40k | 1051 | 0\.141 |
| $40k to \< 60k | 984 | 0\.132 |
| $60k to \< 80k | 920 | 0\.123 |
| $80k to \< 100k | 674 | 0\.090 |
| $100k to \< 125k | 763 | 0\.102 |
| $125k or more | 1468 | 0\.197 |
| NA | 517 | 0\.069 |
| Total | 7453 | 1\.000 |
B.4 POST\-ELECTION SURVEY QUESTIONNAIRE
---------------------------------------
#### V202051
Description: POST: R registered to vote (post\-election)
Question text: Now on a different topic. Are you registered to vote at \[Respondent’s preloaded address], registered at a different address, or not currently registered?
Variable class: haven\_labelled, vctrs\_vctr, double
| V202051 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 4 | 0\.001 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 6820 | 0\.915 |
| 1 | 1. Registered at this address | 173 | 0\.023 |
| 2 | 2. Registered at a different address | 59 | 0\.008 |
| 3 | 3. Not currently registered | 393 | 0\.053 |
| Total | * | 7453 | 1\.000 |
#### V202066
Description: POST: Did R vote in November 2020 election
Question text: In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. Which of the following statements best describes you:
Variable class: haven\_labelled, vctrs\_vctr, double
| V202066 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 7 | 0\.001 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 372 | 0\.050 |
| 1 | 1. I did not vote (in the election this November) | 582 | 0\.078 |
| 2 | 2. I thought about voting this time, but didn’t | 265 | 0\.036 |
| 3 | 3. I usually vote, but didn’t this time | 192 | 0\.026 |
| 4 | 4. I am sure I voted | 6031 | 0\.809 |
| Total | * | 7453 | 1\.000 |
#### V202072
Description: POST: Did R vote for President
Question text: How about the election for President? Did you vote for a candidate for President?
Variable class: haven\_labelled, vctrs\_vctr, double
| V202072 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 2 | 0\.000 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 1418 | 0\.190 |
| 1 | 1. Yes, voted for President | 5952 | 0\.799 |
| 2 | 2. No, didn’t vote for President | 77 | 0\.010 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2020
Description: POST: Did R vote for President
Question text: How about the election for President? Did you vote for a candidate for President?
Variable class: factor
| VotedPres2020 | n | Unweighted Freq |
| --- | --- | --- |
| Yes | 6313 | 0\.847 |
| No | 87 | 0\.012 |
| NA | 1053 | 0\.141 |
| Total | 7453 | 1\.000 |
#### V202073
Description: POST: For whom did R vote for President
Question text: Who did you vote for? \[Joe Biden, Donald Trump/Donald Trump, Joe Biden], Jo Jorgensen, Howie Hawkins, or someone else?
Variable class: haven\_labelled, vctrs\_vctr, double
| V202073 | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 53 | 0\.007 |
| \-6 | \-6\. No post\-election interview | 4 | 0\.001 |
| \-1 | \-1\. Inapplicable | 1497 | 0\.201 |
| 1 | 1. Joe Biden | 3267 | 0\.438 |
| 2 | 2. Donald Trump | 2462 | 0\.330 |
| 3 | 3. Jo Jorgensen | 69 | 0\.009 |
| 4 | 4. Howie Hawkins | 23 | 0\.003 |
| 5 | 5. Other candidate {SPECIFY} | 56 | 0\.008 |
| 7 | 7. Specified as Republican candidate | 1 | 0\.000 |
| 8 | 8. Specified as Libertarian candidate | 3 | 0\.000 |
| 11 | 11. Specified as don’t know | 2 | 0\.000 |
| 12 | 12. Specified as refused | 16 | 0\.002 |
| Total | * | 7453 | 1\.000 |
#### V202109x
Description: PRE\-POST: SUMMARY: Voter turnout in 2020
Question text: Derived from V201024, V202066, V202051
Variable class: haven\_labelled, vctrs\_vctr, double
| V202109x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-2 | \-2\. Not reported | 7 | 0\.001 |
| 0 | 0. Did not vote | 1039 | 0\.139 |
| 1 | 1. Voted | 6407 | 0\.860 |
| Total | * | 7453 | 1\.000 |
#### V202110x
Description: PRE\-POST: SUMMARY: 2020 Presidential vote
Question text: Derived from V201029, V202073
Variable class: haven\_labelled, vctrs\_vctr, double
| V202110x | Label | n | Unweighted Freq |
| --- | --- | --- | --- |
| \-9 | \-9\. Refused | 81 | 0\.011 |
| \-8 | \-8\. Don’t know | 2 | 0\.000 |
| \-1 | \-1\. Inapplicable | 1136 | 0\.152 |
| 1 | 1. Joe Biden | 3509 | 0\.471 |
| 2 | 2. Donald Trump | 2567 | 0\.344 |
| 3 | 3. Jo Jorgensen | 74 | 0\.010 |
| 4 | 4. Howie Hawkins | 24 | 0\.003 |
| 5 | 5. Other candidate {SPECIFY} | 60 | 0\.008 |
| Total | * | 7453 | 1\.000 |
#### VotedPres2020\_selection
Description: PRE\-POST: SUMMARY: 2020 Presidential vote
Question text: Derived from V201029, V202073
Variable class: factor
| VotedPres2020\_selection | n | Unweighted Freq |
| --- | --- | --- |
| Biden | 3509 | 0\.471 |
| Trump | 2567 | 0\.344 |
| Other | 158 | 0\.021 |
| NA | 1219 | 0\.164 |
| Total | 7453 | 1\.000 |
C RECS derived variable codebook
================================
The full codebook with the original variables is available at [https://www.eia.gov/consumption/residential/data/2020/index.php?view\=microdata](https://www.eia.gov/consumption/residential/data/2020/index.php?view=microdata) \- “Variable and Response Codebook.”
This is a codebook for the RECS data used in this book (`recs_2020`) from the {srvyrexploR} package.
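To follow along with this codebook interactively, here is a minimal sketch (assuming the {srvyrexploR} package is installed) for loading and inspecting the data:

```r
library(dplyr)
library(srvyrexploR) # provides the recs_2020 data documented below

data(recs_2020)

# Quick overview of the derived variables described in this appendix
glimpse(recs_2020)

# Example cross-tabulation of two of the geography/admin variables
count(recs_2020, Region, Urbanicity)
```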
C.1 ADMIN
---------
#### DOEID
Description: Unique identifier for each respondent
#### ClimateRegion\_BA
Description: Building America Climate Zone
| ClimateRegion\_BA | n | Unweighted Freq |
| --- | --- | --- |
| Mixed\-Dry | 142 | 0\.008 |
| Mixed\-Humid | 5579 | 0\.302 |
| Hot\-Humid | 2545 | 0\.138 |
| Hot\-Dry | 1577 | 0\.085 |
| Very\-Cold | 572 | 0\.031 |
| Cold | 7116 | 0\.385 |
| Marine | 911 | 0\.049 |
| Subarctic | 54 | 0\.003 |
| Total | 18496 | 1\.000 |
#### Urbanicity
Description: 2010 Census Urban Type Code
| Urbanicity | n | Unweighted Freq |
| --- | --- | --- |
| Urban Area | 12395 | 0\.670 |
| Urban Cluster | 2020 | 0\.109 |
| Rural | 4081 | 0\.221 |
| Total | 18496 | 1\.000 |
C.2 GEOGRAPHY
-------------
#### Region
Description: Census Region
| Region | n | Unweighted Freq |
| --- | --- | --- |
| Northeast | 3657 | 0\.198 |
| Midwest | 3832 | 0\.207 |
| South | 6426 | 0\.347 |
| West | 4581 | 0\.248 |
| Total | 18496 | 1\.000 |
#### REGIONC
Description: Census Region
| REGIONC | n | Unweighted Freq |
| --- | --- | --- |
| MIDWEST | 3832 | 0\.207 |
| NORTHEAST | 3657 | 0\.198 |
| SOUTH | 6426 | 0\.347 |
| WEST | 4581 | 0\.248 |
| Total | 18496 | 1\.000 |
#### Division
Description: Census Division, Mountain Division is divided into North and South for RECS purposes
| Division | n | Unweighted Freq |
| --- | --- | --- |
| New England | 1680 | 0\.091 |
| Middle Atlantic | 1977 | 0\.107 |
| East North Central | 2014 | 0\.109 |
| West North Central | 1818 | 0\.098 |
| South Atlantic | 3256 | 0\.176 |
| East South Central | 1343 | 0\.073 |
| West South Central | 1827 | 0\.099 |
| Mountain North | 1180 | 0\.064 |
| Mountain South | 904 | 0\.049 |
| Pacific | 2497 | 0\.135 |
| Total | 18496 | 1\.000 |
#### STATE\_FIPS
Description: State Federal Information Processing System Code
| STATE\_FIPS | n | Unweighted Freq |
| --- | --- | --- |
| 01 | 242 | 0\.013 |
| 02 | 311 | 0\.017 |
| 04 | 495 | 0\.027 |
| 05 | 268 | 0\.014 |
| 06 | 1152 | 0\.062 |
| 08 | 360 | 0\.019 |
| 09 | 294 | 0\.016 |
| 10 | 143 | 0\.008 |
| 11 | 221 | 0\.012 |
| 12 | 655 | 0\.035 |
| 13 | 417 | 0\.023 |
| 15 | 282 | 0\.015 |
| 16 | 270 | 0\.015 |
| 17 | 530 | 0\.029 |
| 18 | 400 | 0\.022 |
| 19 | 286 | 0\.015 |
| 20 | 208 | 0\.011 |
| 21 | 428 | 0\.023 |
| 22 | 311 | 0\.017 |
| 23 | 223 | 0\.012 |
| 24 | 359 | 0\.019 |
| 25 | 552 | 0\.030 |
| 26 | 388 | 0\.021 |
| 27 | 325 | 0\.018 |
| 28 | 168 | 0\.009 |
| 29 | 296 | 0\.016 |
| 30 | 172 | 0\.009 |
| 31 | 189 | 0\.010 |
| 32 | 231 | 0\.012 |
| 33 | 175 | 0\.009 |
| 34 | 456 | 0\.025 |
| 35 | 178 | 0\.010 |
| 36 | 904 | 0\.049 |
| 37 | 479 | 0\.026 |
| 38 | 331 | 0\.018 |
| 39 | 339 | 0\.018 |
| 40 | 232 | 0\.013 |
| 41 | 313 | 0\.017 |
| 42 | 617 | 0\.033 |
| 44 | 191 | 0\.010 |
| 45 | 334 | 0\.018 |
| 46 | 183 | 0\.010 |
| 47 | 505 | 0\.027 |
| 48 | 1016 | 0\.055 |
| 49 | 188 | 0\.010 |
| 50 | 245 | 0\.013 |
| 51 | 451 | 0\.024 |
| 53 | 439 | 0\.024 |
| 54 | 197 | 0\.011 |
| 55 | 357 | 0\.019 |
| 56 | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### state\_postal
Description: State Postal Code
| state\_postal | n | Unweighted Freq |
| --- | --- | --- |
| AL | 242 | 0\.013 |
| AK | 311 | 0\.017 |
| AZ | 495 | 0\.027 |
| AR | 268 | 0\.014 |
| CA | 1152 | 0\.062 |
| CO | 360 | 0\.019 |
| CT | 294 | 0\.016 |
| DE | 143 | 0\.008 |
| DC | 221 | 0\.012 |
| FL | 655 | 0\.035 |
| GA | 417 | 0\.023 |
| HI | 282 | 0\.015 |
| ID | 270 | 0\.015 |
| IL | 530 | 0\.029 |
| IN | 400 | 0\.022 |
| IA | 286 | 0\.015 |
| KS | 208 | 0\.011 |
| KY | 428 | 0\.023 |
| LA | 311 | 0\.017 |
| ME | 223 | 0\.012 |
| MD | 359 | 0\.019 |
| MA | 552 | 0\.030 |
| MI | 388 | 0\.021 |
| MN | 325 | 0\.018 |
| MS | 168 | 0\.009 |
| MO | 296 | 0\.016 |
| MT | 172 | 0\.009 |
| NE | 189 | 0\.010 |
| NV | 231 | 0\.012 |
| NH | 175 | 0\.009 |
| NJ | 456 | 0\.025 |
| NM | 178 | 0\.010 |
| NY | 904 | 0\.049 |
| NC | 479 | 0\.026 |
| ND | 331 | 0\.018 |
| OH | 339 | 0\.018 |
| OK | 232 | 0\.013 |
| OR | 313 | 0\.017 |
| PA | 617 | 0\.033 |
| RI | 191 | 0\.010 |
| SC | 334 | 0\.018 |
| SD | 183 | 0\.010 |
| TN | 505 | 0\.027 |
| TX | 1016 | 0\.055 |
| UT | 188 | 0\.010 |
| VT | 245 | 0\.013 |
| VA | 451 | 0\.024 |
| WA | 439 | 0\.024 |
| WV | 197 | 0\.011 |
| WI | 357 | 0\.019 |
| WY | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### state\_name
Description: State Name
| state\_name | n | Unweighted Freq |
| --- | --- | --- |
| Alabama | 242 | 0\.013 |
| Alaska | 311 | 0\.017 |
| Arizona | 495 | 0\.027 |
| Arkansas | 268 | 0\.014 |
| California | 1152 | 0\.062 |
| Colorado | 360 | 0\.019 |
| Connecticut | 294 | 0\.016 |
| Delaware | 143 | 0\.008 |
| District of Columbia | 221 | 0\.012 |
| Florida | 655 | 0\.035 |
| Georgia | 417 | 0\.023 |
| Hawaii | 282 | 0\.015 |
| Idaho | 270 | 0\.015 |
| Illinois | 530 | 0\.029 |
| Indiana | 400 | 0\.022 |
| Iowa | 286 | 0\.015 |
| Kansas | 208 | 0\.011 |
| Kentucky | 428 | 0\.023 |
| Louisiana | 311 | 0\.017 |
| Maine | 223 | 0\.012 |
| Maryland | 359 | 0\.019 |
| Massachusetts | 552 | 0\.030 |
| Michigan | 388 | 0\.021 |
| Minnesota | 325 | 0\.018 |
| Mississippi | 168 | 0\.009 |
| Missouri | 296 | 0\.016 |
| Montana | 172 | 0\.009 |
| Nebraska | 189 | 0\.010 |
| Nevada | 231 | 0\.012 |
| New Hampshire | 175 | 0\.009 |
| New Jersey | 456 | 0\.025 |
| New Mexico | 178 | 0\.010 |
| New York | 904 | 0\.049 |
| North Carolina | 479 | 0\.026 |
| North Dakota | 331 | 0\.018 |
| Ohio | 339 | 0\.018 |
| Oklahoma | 232 | 0\.013 |
| Oregon | 313 | 0\.017 |
| Pennsylvania | 617 | 0\.033 |
| Rhode Island | 191 | 0\.010 |
| South Carolina | 334 | 0\.018 |
| South Dakota | 183 | 0\.010 |
| Tennessee | 505 | 0\.027 |
| Texas | 1016 | 0\.055 |
| Utah | 188 | 0\.010 |
| Vermont | 245 | 0\.013 |
| Virginia | 451 | 0\.024 |
| Washington | 439 | 0\.024 |
| West Virginia | 197 | 0\.011 |
| Wisconsin | 357 | 0\.019 |
| Wyoming | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
C.3 WEATHER
-----------
#### HDD65
Description: Heating degree days in 2020, base temperature 65F; Derived from the weighted temperatures of nearby weather stations
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 4396 | 17383 |
#### CDD65
Description: Cooling degree days in 2020, base temperature 65F; Derived from the weighted temperatures of nearby weather stations
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1179 | 5534 |
#### HDD30YR
Description: Heating degree days, 30\-year average 1981\-2010, base temperature 65F; Taken from nearest weather station, inoculated with random errors
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 4825 | 16071 |
#### CDD30YR
Description: Cooling degree days, 30\-year average 1981\-2010, base temperature 65F; Taken from nearest weather station, inoculated with random errors
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1020 | 4905 |
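For continuous variables, the tables report the number of missing values plus the minimum, median, and maximum. A sketch of the same summary for `HDD65`:

```r
library(dplyr)

# Sketch of the numeric summary reported for continuous variables.
recs_2020 %>%
  summarize(
    `N Missing` = sum(is.na(HDD65)),
    Minimum     = min(HDD65, na.rm = TRUE),
    Median      = median(HDD65, na.rm = TRUE),
    Maximum     = max(HDD65, na.rm = TRUE)
  )
```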
C.4 YOUR HOME
-------------
#### HousingUnitType
Description: Type of housing unit
Question text: Which best describes your home?
| HousingUnitType | n | Unweighted Freq |
| --- | --- | --- |
| Mobile home | 974 | 0\.053 |
| Single\-family detached | 12319 | 0\.666 |
| Single\-family attached | 1751 | 0\.095 |
| Apartment: 2\-4 Units | 1013 | 0\.055 |
| Apartment: 5 or more units | 2439 | 0\.132 |
| Total | 18496 | 1\.000 |
#### YearMade
Description: Range when housing unit was built
Question text: Derived from: In what year was your home built? AND Although you do not know the exact year your home was built, it is helpful to have an estimate. About when was your home built?
| YearMade | n | Unweighted Freq |
| --- | --- | --- |
| Before 1950 | 2721 | 0\.147 |
| 1950\-1959 | 1685 | 0\.091 |
| 1960\-1969 | 1867 | 0\.101 |
| 1970\-1979 | 2817 | 0\.152 |
| 1980\-1989 | 2435 | 0\.132 |
| 1990\-1999 | 2451 | 0\.133 |
| 2000\-2009 | 2748 | 0\.149 |
| 2010\-2015 | 989 | 0\.053 |
| 2016\-2020 | 783 | 0\.042 |
| Total | 18496 | 1\.000 |
#### TOTSQFT\_EN
Description: Total energy\-consuming area (square footage) of the housing unit. Includes all main living areas; all basements; heated, cooled, or finished attics; and heated or cooled garages. For single\-family housing units this is derived using the respondent\-reported square footage (SQFTEST) and adjusted using the “include” variables (e.g., SQFTINCB), where applicable. For apartments and mobile homes this is the respondent\-reported square footage. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 200 | 1700 | 15000 |
#### TOTHSQFT
Description: Square footage of the housing unit that is heated by space heating equipment. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1520 | 15000 |
#### TOTCSQFT
Description: Square footage of the housing unit that is cooled by air\-conditioning equipment or evaporative cooler. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1200 | 14600 |
C.5 SPACE HEATING
-----------------
#### SpaceHeatingUsed
Description: Space heating equipment used
Question text: Is your home heated during the winter?
| SpaceHeatingUsed | n | Unweighted Freq |
| --- | --- | --- |
| FALSE | 751 | 0\.041 |
| TRUE | 17745 | 0\.959 |
| Total | 18496 | 1\.000 |
C.6 AIR CONDITIONING
--------------------
#### ACUsed
Description: Air conditioning equipment used
Question text: Is any air conditioning equipment used in your home?
| ACUsed | n | Unweighted Freq |
| --- | --- | --- |
| FALSE | 2325 | 0\.126 |
| TRUE | 16171 | 0\.874 |
| Total | 18496 | 1\.000 |
C.7 THERMOSTAT
--------------
#### HeatingBehavior
Description: Winter temperature control method
Question text: Which of the following best describes how your household controls the indoor temperature during the winter?
| HeatingBehavior | n | Unweighted Freq |
| --- | --- | --- |
| Set one temp and leave it | 7806 | 0\.422 |
| Manually adjust at night/no one home | 4654 | 0\.252 |
| Programmable or smart thermostat automatically adjusts the temperature | 3310 | 0\.179 |
| Turn on or off as needed | 1491 | 0\.081 |
| No control | 438 | 0\.024 |
| Other | 46 | 0\.002 |
| NA | 751 | 0\.041 |
| Total | 18496 | 1\.000 |
#### WinterTempDay
Description: Winter thermostat setting or temperature in home when someone is home during the day
Question text: During the winter, what is your home’s typical indoor temperature when someone is home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 70 | 90 |
#### WinterTempAway
Description: Winter thermostat setting or temperature in home when no one is home during the day
Question text: During the winter, what is your home’s typical indoor temperature when no one is inside your home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 68 | 90 |
#### WinterTempNight
Description: Winter thermostat setting or temperature in home at night
Question text: During the winter, what is your home’s typical indoor temperature inside your home at night?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 68 | 90 |
#### ACBehavior
Description: Summer temperature control method
Question text: Which of the following best describes how your household controls the indoor temperature during the summer?
| ACBehavior | n | Unweighted Freq |
| --- | --- | --- |
| Set one temp and leave it | 6738 | 0\.364 |
| Manually adjust at night/no one home | 3637 | 0\.197 |
| Programmable or smart thermostat automatically adjusts the temperature | 2638 | 0\.143 |
| Turn on or off as needed | 2746 | 0\.148 |
| No control | 409 | 0\.022 |
| Other | 3 | 0\.000 |
| NA | 2325 | 0\.126 |
| Total | 18496 | 1\.000 |
#### SummerTempDay
Description: Summer thermostat setting or temperature in home when someone is home during the day
Question text: During the summer, what is your home’s typical indoor temperature when someone is home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 72 | 90 |
#### SummerTempAway
Description: Summer thermostat setting or temperature in home when no one is home during the day
Question text: During the summer, what is your home’s typical indoor temperature when no one is inside your home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 74 | 90 |
#### SummerTempNight
Description: Summer thermostat setting or temperature in home at night
Question text: During the summer, what is your home’s typical indoor temperature inside your home at night?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 72 | 90 |
C.8 WEIGHTS
-----------
#### NWEIGHT
Description: Final Analysis Weight
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 437\.9 | 6119 | 29279 |
#### NWEIGHT1
Description: Final Analysis Weight for replicate 1
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6136 | 30015 |
#### NWEIGHT2
Description: Final Analysis Weight for replicate 2
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29422 |
#### NWEIGHT3
Description: Final Analysis Weight for replicate 3
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29431 |
#### NWEIGHT4
Description: Final Analysis Weight for replicate 4
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29494 |
#### NWEIGHT5
Description: Final Analysis Weight for replicate 5
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6134 | 30039 |
#### NWEIGHT6
Description: Final Analysis Weight for replicate 6
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6147 | 29419 |
#### NWEIGHT7
Description: Final Analysis Weight for replicate 7
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6135 | 29586 |
#### NWEIGHT8
Description: Final Analysis Weight for replicate 8
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29499 |
#### NWEIGHT9
Description: Final Analysis Weight for replicate 9
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 29845 |
#### NWEIGHT10
Description: Final Analysis Weight for replicate 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6163 | 29635 |
#### NWEIGHT11
Description: Final Analysis Weight for replicate 11
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6140 | 29681 |
#### NWEIGHT12
Description: Final Analysis Weight for replicate 12
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6160 | 29849 |
#### NWEIGHT13
Description: Final Analysis Weight for replicate 13
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6142 | 29843 |
#### NWEIGHT14
Description: Final Analysis Weight for replicate 14
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6154 | 30184 |
#### NWEIGHT15
Description: Final Analysis Weight for replicate 15
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6145 | 29970 |
#### NWEIGHT16
Description: Final Analysis Weight for replicate 16
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6133 | 29825 |
#### NWEIGHT17
Description: Final Analysis Weight for replicate 17
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6126 | 30606 |
#### NWEIGHT18
Description: Final Analysis Weight for replicate 18
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6155 | 29689 |
#### NWEIGHT19
Description: Final Analysis Weight for replicate 19
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29336 |
#### NWEIGHT20
Description: Final Analysis Weight for replicate 20
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 30274 |
#### NWEIGHT21
Description: Final Analysis Weight for replicate 21
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6135 | 29766 |
#### NWEIGHT22
Description: Final Analysis Weight for replicate 22
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29791 |
#### NWEIGHT23
Description: Final Analysis Weight for replicate 23
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6148 | 30126 |
#### NWEIGHT24
Description: Final Analysis Weight for replicate 24
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6136 | 29946 |
#### NWEIGHT25
Description: Final Analysis Weight for replicate 25
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6150 | 30445 |
#### NWEIGHT26
Description: Final Analysis Weight for replicate 26
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6136 | 29893 |
#### NWEIGHT27
Description: Final Analysis Weight for replicate 27
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6125 | 30030 |
#### NWEIGHT28
Description: Final Analysis Weight for replicate 28
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29599 |
#### NWEIGHT29
Description: Final Analysis Weight for replicate 29
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6146 | 30136 |
#### NWEIGHT30
Description: Final Analysis Weight for replicate 30
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29895 |
#### NWEIGHT31
Description: Final Analysis Weight for replicate 31
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6144 | 29604 |
#### NWEIGHT32
Description: Final Analysis Weight for replicate 32
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6159 | 29310 |
#### NWEIGHT33
Description: Final Analysis Weight for replicate 33
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6148 | 29408 |
#### NWEIGHT34
Description: Final Analysis Weight for replicate 34
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 29564 |
#### NWEIGHT35
Description: Final Analysis Weight for replicate 35
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6141 | 30437 |
#### NWEIGHT36
Description: Final Analysis Weight for replicate 36
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 27896 |
#### NWEIGHT37
Description: Final Analysis Weight for replicate 37
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6133 | 30596 |
#### NWEIGHT38
Description: Final Analysis Weight for replicate 38
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 30130 |
#### NWEIGHT39
Description: Final Analysis Weight for replicate 39
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6147 | 29262 |
#### NWEIGHT40
Description: Final Analysis Weight for replicate 40
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6144 | 30344 |
#### NWEIGHT41
Description: Final Analysis Weight for replicate 41
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29594 |
#### NWEIGHT42
Description: Final Analysis Weight for replicate 42
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6137 | 29938 |
#### NWEIGHT43
Description: Final Analysis Weight for replicate 43
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6157 | 29878 |
#### NWEIGHT44
Description: Final Analysis Weight for replicate 44
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6148 | 29896 |
#### NWEIGHT45
Description: Final Analysis Weight for replicate 45
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29729 |
#### NWEIGHT46
Description: Final Analysis Weight for replicate 46
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6152 | 29103 |
#### NWEIGHT47
Description: Final Analysis Weight for replicate 47
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6150 | 30070 |
#### NWEIGHT48
Description: Final Analysis Weight for replicate 48
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 29343 |
#### NWEIGHT49
Description: Final Analysis Weight for replicate 49
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6146 | 29590 |
#### NWEIGHT50
Description: Final Analysis Weight for replicate 50
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6159 | 30027 |
#### NWEIGHT51
Description: Final Analysis Weight for replicate 51
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6150 | 29247 |
#### NWEIGHT52
Description: Final Analysis Weight for replicate 52
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6154 | 29445 |
#### NWEIGHT53
Description: Final Analysis Weight for replicate 53
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6156 | 30131 |
#### NWEIGHT54
Description: Final Analysis Weight for replicate 54
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29439 |
#### NWEIGHT55
Description: Final Analysis Weight for replicate 55
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6143 | 29216 |
#### NWEIGHT56
Description: Final Analysis Weight for replicate 56
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29203 |
#### NWEIGHT57
Description: Final Analysis Weight for replicate 57
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6138 | 29819 |
#### NWEIGHT58
Description: Final Analysis Weight for replicate 58
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6137 | 29818 |
#### NWEIGHT59
Description: Final Analysis Weight for replicate 59
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6144 | 29606 |
#### NWEIGHT60
Description: Final Analysis Weight for replicate 60
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6140 | 29818 |
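NWEIGHT is the final analysis weight and NWEIGHT1–NWEIGHT60 are its replicate weights; a survey design object might be constructed with {srvyr} roughly as below. The `type`, `scale`, and `mse` settings are assumptions based on the jackknife replication documented for RECS 2020, so confirm them against the main text and the EIA documentation before relying on them.

```r
library(srvyr)

# Sketch of a replicate-weight design using NWEIGHT and NWEIGHT1-NWEIGHT60.
recs_des <- recs_2020 %>%
  as_survey_rep(
    weights    = NWEIGHT,
    repweights = NWEIGHT1:NWEIGHT60,
    type       = "JK1",     # assumed: jackknife (JK1) replication
    scale      = 59 / 60,   # assumed JK1 scale for 60 replicates
    mse        = TRUE
  )
```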
C.9 CONSUMPTION AND EXPENDITURE
-------------------------------
#### BTUEL
Description: Total electricity use, in thousand Btu, 2020, including self\-generation of solar power
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 143\.3 | 31890 | 628155 |
#### DOLLAREL
Description: Total electricity cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | \-889\.5 | 1258 | 15680 |
#### BTUNG
Description: Total natural gas use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 22012 | 1134709 |
#### DOLLARNG
Description: Total natural gas cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 313\.9 | 8155 |
#### BTULP
Description: Total propane use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 364215 |
#### DOLLARLP
Description: Total propane cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 6621 |
#### BTUFO
Description: Total fuel oil/kerosene use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 426268 |
#### DOLLARFO
Description: Total fuel oil/kerosene cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 7004 |
#### BTUWOOD
Description: Total wood use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 500000 |
#### TOTALBTU
Description: Total usage including electricity, natural gas, propane, and fuel oil, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1182 | 74180 | 1367548 |
#### TOTALDOL
Description: Total cost including electricity, natural gas, propane, and fuel oil, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | \-150\.5 | 1793 | 20043 |
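The two totals are described as sums over electricity, natural gas, propane, and fuel oil (wood is excluded). A quick, hedged consistency check of that description:

```r
library(dplyr)

# Check how closely TOTALBTU matches the sum of the four fuel components
# named in its description (wood excluded). Purely a sanity check.
recs_2020 %>%
  mutate(btu_sum = BTUEL + BTUNG + BTULP + BTUFO) %>%
  summarize(max_abs_diff = max(abs(TOTALBTU - btu_sum)))
```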
C.1 ADMIN
---------
#### DOEID
Description: Unique identifier for each respondent
#### ClimateRegion\_BA
Description: Building America Climate Zone
| ClimateRegion\_BA | n | Unweighted Freq |
| --- | --- | --- |
| Mixed\-Dry | 142 | 0\.008 |
| Mixed\-Humid | 5579 | 0\.302 |
| Hot\-Humid | 2545 | 0\.138 |
| Hot\-Dry | 1577 | 0\.085 |
| Very\-Cold | 572 | 0\.031 |
| Cold | 7116 | 0\.385 |
| Marine | 911 | 0\.049 |
| Subarctic | 54 | 0\.003 |
| Total | 18496 | 1\.000 |
#### Urbanicity
Description: 2010 Census Urban Type Code
| Urbanicity | n | Unweighted Freq |
| --- | --- | --- |
| Urban Area | 12395 | 0\.670 |
| Urban Cluster | 2020 | 0\.109 |
| Rural | 4081 | 0\.221 |
| Total | 18496 | 1\.000 |
#### DOEID
Description: Unique identifier for each respondent
#### ClimateRegion\_BA
Description: Building America Climate Zone
| ClimateRegion\_BA | n | Unweighted Freq |
| --- | --- | --- |
| Mixed\-Dry | 142 | 0\.008 |
| Mixed\-Humid | 5579 | 0\.302 |
| Hot\-Humid | 2545 | 0\.138 |
| Hot\-Dry | 1577 | 0\.085 |
| Very\-Cold | 572 | 0\.031 |
| Cold | 7116 | 0\.385 |
| Marine | 911 | 0\.049 |
| Subarctic | 54 | 0\.003 |
| Total | 18496 | 1\.000 |
#### Urbanicity
Description: 2010 Census Urban Type Code
| Urbanicity | n | Unweighted Freq |
| --- | --- | --- |
| Urban Area | 12395 | 0\.670 |
| Urban Cluster | 2020 | 0\.109 |
| Rural | 4081 | 0\.221 |
| Total | 18496 | 1\.000 |
C.2 GEOGRAPHY
-------------
#### Region
Description: Census Region
| Region | n | Unweighted Freq |
| --- | --- | --- |
| Northeast | 3657 | 0\.198 |
| Midwest | 3832 | 0\.207 |
| South | 6426 | 0\.347 |
| West | 4581 | 0\.248 |
| Total | 18496 | 1\.000 |
#### REGIONC
Description: Census Region
| REGIONC | n | Unweighted Freq |
| --- | --- | --- |
| MIDWEST | 3832 | 0\.207 |
| NORTHEAST | 3657 | 0\.198 |
| SOUTH | 6426 | 0\.347 |
| WEST | 4581 | 0\.248 |
| Total | 18496 | 1\.000 |
#### Division
Description: Census Division, Mountain Division is divided into North and South for RECS purposes
| Division | n | Unweighted Freq |
| --- | --- | --- |
| New England | 1680 | 0\.091 |
| Middle Atlantic | 1977 | 0\.107 |
| East North Central | 2014 | 0\.109 |
| West North Central | 1818 | 0\.098 |
| South Atlantic | 3256 | 0\.176 |
| East South Central | 1343 | 0\.073 |
| West South Central | 1827 | 0\.099 |
| Mountain North | 1180 | 0\.064 |
| Mountain South | 904 | 0\.049 |
| Pacific | 2497 | 0\.135 |
| Total | 18496 | 1\.000 |
#### STATE\_FIPS
Description: State Federal Information Processing System Code
| STATE\_FIPS | n | Unweighted Freq |
| --- | --- | --- |
| 01 | 242 | 0\.013 |
| 02 | 311 | 0\.017 |
| 04 | 495 | 0\.027 |
| 05 | 268 | 0\.014 |
| 06 | 1152 | 0\.062 |
| 08 | 360 | 0\.019 |
| 09 | 294 | 0\.016 |
| 10 | 143 | 0\.008 |
| 11 | 221 | 0\.012 |
| 12 | 655 | 0\.035 |
| 13 | 417 | 0\.023 |
| 15 | 282 | 0\.015 |
| 16 | 270 | 0\.015 |
| 17 | 530 | 0\.029 |
| 18 | 400 | 0\.022 |
| 19 | 286 | 0\.015 |
| 20 | 208 | 0\.011 |
| 21 | 428 | 0\.023 |
| 22 | 311 | 0\.017 |
| 23 | 223 | 0\.012 |
| 24 | 359 | 0\.019 |
| 25 | 552 | 0\.030 |
| 26 | 388 | 0\.021 |
| 27 | 325 | 0\.018 |
| 28 | 168 | 0\.009 |
| 29 | 296 | 0\.016 |
| 30 | 172 | 0\.009 |
| 31 | 189 | 0\.010 |
| 32 | 231 | 0\.012 |
| 33 | 175 | 0\.009 |
| 34 | 456 | 0\.025 |
| 35 | 178 | 0\.010 |
| 36 | 904 | 0\.049 |
| 37 | 479 | 0\.026 |
| 38 | 331 | 0\.018 |
| 39 | 339 | 0\.018 |
| 40 | 232 | 0\.013 |
| 41 | 313 | 0\.017 |
| 42 | 617 | 0\.033 |
| 44 | 191 | 0\.010 |
| 45 | 334 | 0\.018 |
| 46 | 183 | 0\.010 |
| 47 | 505 | 0\.027 |
| 48 | 1016 | 0\.055 |
| 49 | 188 | 0\.010 |
| 50 | 245 | 0\.013 |
| 51 | 451 | 0\.024 |
| 53 | 439 | 0\.024 |
| 54 | 197 | 0\.011 |
| 55 | 357 | 0\.019 |
| 56 | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### state\_postal
Description: State Postal Code
| state\_postal | n | Unweighted Freq |
| --- | --- | --- |
| AL | 242 | 0\.013 |
| AK | 311 | 0\.017 |
| AZ | 495 | 0\.027 |
| AR | 268 | 0\.014 |
| CA | 1152 | 0\.062 |
| CO | 360 | 0\.019 |
| CT | 294 | 0\.016 |
| DE | 143 | 0\.008 |
| DC | 221 | 0\.012 |
| FL | 655 | 0\.035 |
| GA | 417 | 0\.023 |
| HI | 282 | 0\.015 |
| ID | 270 | 0\.015 |
| IL | 530 | 0\.029 |
| IN | 400 | 0\.022 |
| IA | 286 | 0\.015 |
| KS | 208 | 0\.011 |
| KY | 428 | 0\.023 |
| LA | 311 | 0\.017 |
| ME | 223 | 0\.012 |
| MD | 359 | 0\.019 |
| MA | 552 | 0\.030 |
| MI | 388 | 0\.021 |
| MN | 325 | 0\.018 |
| MS | 168 | 0\.009 |
| MO | 296 | 0\.016 |
| MT | 172 | 0\.009 |
| NE | 189 | 0\.010 |
| NV | 231 | 0\.012 |
| NH | 175 | 0\.009 |
| NJ | 456 | 0\.025 |
| NM | 178 | 0\.010 |
| NY | 904 | 0\.049 |
| NC | 479 | 0\.026 |
| ND | 331 | 0\.018 |
| OH | 339 | 0\.018 |
| OK | 232 | 0\.013 |
| OR | 313 | 0\.017 |
| PA | 617 | 0\.033 |
| RI | 191 | 0\.010 |
| SC | 334 | 0\.018 |
| SD | 183 | 0\.010 |
| TN | 505 | 0\.027 |
| TX | 1016 | 0\.055 |
| UT | 188 | 0\.010 |
| VT | 245 | 0\.013 |
| VA | 451 | 0\.024 |
| WA | 439 | 0\.024 |
| WV | 197 | 0\.011 |
| WI | 357 | 0\.019 |
| WY | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### state\_name
Description: State Name
| state\_name | n | Unweighted Freq |
| --- | --- | --- |
| Alabama | 242 | 0\.013 |
| Alaska | 311 | 0\.017 |
| Arizona | 495 | 0\.027 |
| Arkansas | 268 | 0\.014 |
| California | 1152 | 0\.062 |
| Colorado | 360 | 0\.019 |
| Connecticut | 294 | 0\.016 |
| Delaware | 143 | 0\.008 |
| District of Columbia | 221 | 0\.012 |
| Florida | 655 | 0\.035 |
| Georgia | 417 | 0\.023 |
| Hawaii | 282 | 0\.015 |
| Idaho | 270 | 0\.015 |
| Illinois | 530 | 0\.029 |
| Indiana | 400 | 0\.022 |
| Iowa | 286 | 0\.015 |
| Kansas | 208 | 0\.011 |
| Kentucky | 428 | 0\.023 |
| Louisiana | 311 | 0\.017 |
| Maine | 223 | 0\.012 |
| Maryland | 359 | 0\.019 |
| Massachusetts | 552 | 0\.030 |
| Michigan | 388 | 0\.021 |
| Minnesota | 325 | 0\.018 |
| Mississippi | 168 | 0\.009 |
| Missouri | 296 | 0\.016 |
| Montana | 172 | 0\.009 |
| Nebraska | 189 | 0\.010 |
| Nevada | 231 | 0\.012 |
| New Hampshire | 175 | 0\.009 |
| New Jersey | 456 | 0\.025 |
| New Mexico | 178 | 0\.010 |
| New York | 904 | 0\.049 |
| North Carolina | 479 | 0\.026 |
| North Dakota | 331 | 0\.018 |
| Ohio | 339 | 0\.018 |
| Oklahoma | 232 | 0\.013 |
| Oregon | 313 | 0\.017 |
| Pennsylvania | 617 | 0\.033 |
| Rhode Island | 191 | 0\.010 |
| South Carolina | 334 | 0\.018 |
| South Dakota | 183 | 0\.010 |
| Tennessee | 505 | 0\.027 |
| Texas | 1016 | 0\.055 |
| Utah | 188 | 0\.010 |
| Vermont | 245 | 0\.013 |
| Virginia | 451 | 0\.024 |
| Washington | 439 | 0\.024 |
| West Virginia | 197 | 0\.011 |
| Wisconsin | 357 | 0\.019 |
| Wyoming | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### Region
Description: Census Region
| Region | n | Unweighted Freq |
| --- | --- | --- |
| Northeast | 3657 | 0\.198 |
| Midwest | 3832 | 0\.207 |
| South | 6426 | 0\.347 |
| West | 4581 | 0\.248 |
| Total | 18496 | 1\.000 |
#### REGIONC
Description: Census Region
| REGIONC | n | Unweighted Freq |
| --- | --- | --- |
| MIDWEST | 3832 | 0\.207 |
| NORTHEAST | 3657 | 0\.198 |
| SOUTH | 6426 | 0\.347 |
| WEST | 4581 | 0\.248 |
| Total | 18496 | 1\.000 |
#### Division
Description: Census Division, Mountain Division is divided into North and South for RECS purposes
| Division | n | Unweighted Freq |
| --- | --- | --- |
| New England | 1680 | 0\.091 |
| Middle Atlantic | 1977 | 0\.107 |
| East North Central | 2014 | 0\.109 |
| West North Central | 1818 | 0\.098 |
| South Atlantic | 3256 | 0\.176 |
| East South Central | 1343 | 0\.073 |
| West South Central | 1827 | 0\.099 |
| Mountain North | 1180 | 0\.064 |
| Mountain South | 904 | 0\.049 |
| Pacific | 2497 | 0\.135 |
| Total | 18496 | 1\.000 |
#### STATE\_FIPS
Description: State Federal Information Processing System Code
| STATE\_FIPS | n | Unweighted Freq |
| --- | --- | --- |
| 01 | 242 | 0\.013 |
| 02 | 311 | 0\.017 |
| 04 | 495 | 0\.027 |
| 05 | 268 | 0\.014 |
| 06 | 1152 | 0\.062 |
| 08 | 360 | 0\.019 |
| 09 | 294 | 0\.016 |
| 10 | 143 | 0\.008 |
| 11 | 221 | 0\.012 |
| 12 | 655 | 0\.035 |
| 13 | 417 | 0\.023 |
| 15 | 282 | 0\.015 |
| 16 | 270 | 0\.015 |
| 17 | 530 | 0\.029 |
| 18 | 400 | 0\.022 |
| 19 | 286 | 0\.015 |
| 20 | 208 | 0\.011 |
| 21 | 428 | 0\.023 |
| 22 | 311 | 0\.017 |
| 23 | 223 | 0\.012 |
| 24 | 359 | 0\.019 |
| 25 | 552 | 0\.030 |
| 26 | 388 | 0\.021 |
| 27 | 325 | 0\.018 |
| 28 | 168 | 0\.009 |
| 29 | 296 | 0\.016 |
| 30 | 172 | 0\.009 |
| 31 | 189 | 0\.010 |
| 32 | 231 | 0\.012 |
| 33 | 175 | 0\.009 |
| 34 | 456 | 0\.025 |
| 35 | 178 | 0\.010 |
| 36 | 904 | 0\.049 |
| 37 | 479 | 0\.026 |
| 38 | 331 | 0\.018 |
| 39 | 339 | 0\.018 |
| 40 | 232 | 0\.013 |
| 41 | 313 | 0\.017 |
| 42 | 617 | 0\.033 |
| 44 | 191 | 0\.010 |
| 45 | 334 | 0\.018 |
| 46 | 183 | 0\.010 |
| 47 | 505 | 0\.027 |
| 48 | 1016 | 0\.055 |
| 49 | 188 | 0\.010 |
| 50 | 245 | 0\.013 |
| 51 | 451 | 0\.024 |
| 53 | 439 | 0\.024 |
| 54 | 197 | 0\.011 |
| 55 | 357 | 0\.019 |
| 56 | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### state\_postal
Description: State Postal Code
| state\_postal | n | Unweighted Freq |
| --- | --- | --- |
| AL | 242 | 0\.013 |
| AK | 311 | 0\.017 |
| AZ | 495 | 0\.027 |
| AR | 268 | 0\.014 |
| CA | 1152 | 0\.062 |
| CO | 360 | 0\.019 |
| CT | 294 | 0\.016 |
| DE | 143 | 0\.008 |
| DC | 221 | 0\.012 |
| FL | 655 | 0\.035 |
| GA | 417 | 0\.023 |
| HI | 282 | 0\.015 |
| ID | 270 | 0\.015 |
| IL | 530 | 0\.029 |
| IN | 400 | 0\.022 |
| IA | 286 | 0\.015 |
| KS | 208 | 0\.011 |
| KY | 428 | 0\.023 |
| LA | 311 | 0\.017 |
| ME | 223 | 0\.012 |
| MD | 359 | 0\.019 |
| MA | 552 | 0\.030 |
| MI | 388 | 0\.021 |
| MN | 325 | 0\.018 |
| MS | 168 | 0\.009 |
| MO | 296 | 0\.016 |
| MT | 172 | 0\.009 |
| NE | 189 | 0\.010 |
| NV | 231 | 0\.012 |
| NH | 175 | 0\.009 |
| NJ | 456 | 0\.025 |
| NM | 178 | 0\.010 |
| NY | 904 | 0\.049 |
| NC | 479 | 0\.026 |
| ND | 331 | 0\.018 |
| OH | 339 | 0\.018 |
| OK | 232 | 0\.013 |
| OR | 313 | 0\.017 |
| PA | 617 | 0\.033 |
| RI | 191 | 0\.010 |
| SC | 334 | 0\.018 |
| SD | 183 | 0\.010 |
| TN | 505 | 0\.027 |
| TX | 1016 | 0\.055 |
| UT | 188 | 0\.010 |
| VT | 245 | 0\.013 |
| VA | 451 | 0\.024 |
| WA | 439 | 0\.024 |
| WV | 197 | 0\.011 |
| WI | 357 | 0\.019 |
| WY | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
#### state\_name
Description: State Name
| state\_name | n | Unweighted Freq |
| --- | --- | --- |
| Alabama | 242 | 0\.013 |
| Alaska | 311 | 0\.017 |
| Arizona | 495 | 0\.027 |
| Arkansas | 268 | 0\.014 |
| California | 1152 | 0\.062 |
| Colorado | 360 | 0\.019 |
| Connecticut | 294 | 0\.016 |
| Delaware | 143 | 0\.008 |
| District of Columbia | 221 | 0\.012 |
| Florida | 655 | 0\.035 |
| Georgia | 417 | 0\.023 |
| Hawaii | 282 | 0\.015 |
| Idaho | 270 | 0\.015 |
| Illinois | 530 | 0\.029 |
| Indiana | 400 | 0\.022 |
| Iowa | 286 | 0\.015 |
| Kansas | 208 | 0\.011 |
| Kentucky | 428 | 0\.023 |
| Louisiana | 311 | 0\.017 |
| Maine | 223 | 0\.012 |
| Maryland | 359 | 0\.019 |
| Massachusetts | 552 | 0\.030 |
| Michigan | 388 | 0\.021 |
| Minnesota | 325 | 0\.018 |
| Mississippi | 168 | 0\.009 |
| Missouri | 296 | 0\.016 |
| Montana | 172 | 0\.009 |
| Nebraska | 189 | 0\.010 |
| Nevada | 231 | 0\.012 |
| New Hampshire | 175 | 0\.009 |
| New Jersey | 456 | 0\.025 |
| New Mexico | 178 | 0\.010 |
| New York | 904 | 0\.049 |
| North Carolina | 479 | 0\.026 |
| North Dakota | 331 | 0\.018 |
| Ohio | 339 | 0\.018 |
| Oklahoma | 232 | 0\.013 |
| Oregon | 313 | 0\.017 |
| Pennsylvania | 617 | 0\.033 |
| Rhode Island | 191 | 0\.010 |
| South Carolina | 334 | 0\.018 |
| South Dakota | 183 | 0\.010 |
| Tennessee | 505 | 0\.027 |
| Texas | 1016 | 0\.055 |
| Utah | 188 | 0\.010 |
| Vermont | 245 | 0\.013 |
| Virginia | 451 | 0\.024 |
| Washington | 439 | 0\.024 |
| West Virginia | 197 | 0\.011 |
| Wisconsin | 357 | 0\.019 |
| Wyoming | 190 | 0\.010 |
| Total | 18496 | 1\.000 |
C.3 WEATHER
-----------
#### HDD65
Description: Heating degree days in 2020, base temperature 65F; Derived from the weighted temperatures of nearby weather stations
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 4396 | 17383 |
#### CDD65
Description: Cooling degree days in 2020, base temperature 65F; Derived from the weighted temperatures of nearby weather stations
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1179 | 5534 |
#### HDD30YR
Description: Heating degree days, 30\-year average 1981\-2010, base temperature 65F; Taken from nearest weather station, inoculated with random errors
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 4825 | 16071 |
#### CDD30YR
Description: Cooling degree days, 30\-year average 1981\-2010, base temperature 65F; Taken from nearest weather station, inoculated with random errors
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1020 | 4905 |
#### HDD65
Description: Heating degree days in 2020, base temperature 65F; Derived from the weighted temperatures of nearby weather stations
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 4396 | 17383 |
#### CDD65
Description: Cooling degree days in 2020, base temperature 65F; Derived from the weighted temperatures of nearby weather stations
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1179 | 5534 |
#### HDD30YR
Description: Heating degree days, 30\-year average 1981\-2010, base temperature 65F; Taken from nearest weather station, inoculated with random errors
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 4825 | 16071 |
#### CDD30YR
Description: Cooling degree days, 30\-year average 1981\-2010, base temperature 65F; Taken from nearest weather station, inoculated with random errors
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1020 | 4905 |
C.4 YOUR HOME
-------------
#### HousingUnitType
Description: Type of housing unit
Question text: Which best describes your home?
| HousingUnitType | n | Unweighted Freq |
| --- | --- | --- |
| Mobile home | 974 | 0\.053 |
| Single\-family detached | 12319 | 0\.666 |
| Single\-family attached | 1751 | 0\.095 |
| Apartment: 2\-4 Units | 1013 | 0\.055 |
| Apartment: 5 or more units | 2439 | 0\.132 |
| Total | 18496 | 1\.000 |
#### YearMade
Description: Range when housing unit was built
Question text: Derived from: In what year was your home built? AND Although you do not know the exact year your home was built, it is helpful to have an estimate. About when was your home built?
| YearMade | n | Unweighted Freq |
| --- | --- | --- |
| Before 1950 | 2721 | 0\.147 |
| 1950\-1959 | 1685 | 0\.091 |
| 1960\-1969 | 1867 | 0\.101 |
| 1970\-1979 | 2817 | 0\.152 |
| 1980\-1989 | 2435 | 0\.132 |
| 1990\-1999 | 2451 | 0\.133 |
| 2000\-2009 | 2748 | 0\.149 |
| 2010\-2015 | 989 | 0\.053 |
| 2016\-2020 | 783 | 0\.042 |
| Total | 18496 | 1\.000 |
#### TOTSQFT\_EN
Description: Total energy\-consuming area (square footage) of the housing unit. Includes all main living areas; all basements; heated, cooled, or finished attics; and heating or cooled garages. For single\-family housing units this is derived using the respondent\-reported square footage (SQFTEST) and adjusted using the “include” variables (e.g., SQFTINCB), where applicable. For apartments and mobile homes this is the respondent\-reported square footage. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 200 | 1700 | 15000 |
#### TOTHSQFT
Description: Square footage of the housing unit that is heated by space heating equipment. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1520 | 15000 |
#### TOTCSQFT
Description: Square footage of the housing unit that is cooled by air\-conditioning equipment or evaporative cooler, a derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1200 | 14600 |
#### HousingUnitType
Description: Type of housing unit
Question text: Which best describes your home?
| HousingUnitType | n | Unweighted Freq |
| --- | --- | --- |
| Mobile home | 974 | 0\.053 |
| Single\-family detached | 12319 | 0\.666 |
| Single\-family attached | 1751 | 0\.095 |
| Apartment: 2\-4 Units | 1013 | 0\.055 |
| Apartment: 5 or more units | 2439 | 0\.132 |
| Total | 18496 | 1\.000 |
#### YearMade
Description: Range when housing unit was built
Question text: Derived from: In what year was your home built? AND Although you do not know the exact year your home was built, it is helpful to have an estimate. About when was your home built?
| YearMade | n | Unweighted Freq |
| --- | --- | --- |
| Before 1950 | 2721 | 0\.147 |
| 1950\-1959 | 1685 | 0\.091 |
| 1960\-1969 | 1867 | 0\.101 |
| 1970\-1979 | 2817 | 0\.152 |
| 1980\-1989 | 2435 | 0\.132 |
| 1990\-1999 | 2451 | 0\.133 |
| 2000\-2009 | 2748 | 0\.149 |
| 2010\-2015 | 989 | 0\.053 |
| 2016\-2020 | 783 | 0\.042 |
| Total | 18496 | 1\.000 |
#### TOTSQFT\_EN
Description: Total energy\-consuming area (square footage) of the housing unit. Includes all main living areas; all basements; heated, cooled, or finished attics; and heating or cooled garages. For single\-family housing units this is derived using the respondent\-reported square footage (SQFTEST) and adjusted using the “include” variables (e.g., SQFTINCB), where applicable. For apartments and mobile homes this is the respondent\-reported square footage. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 200 | 1700 | 15000 |
#### TOTHSQFT
Description: Square footage of the housing unit that is heated by space heating equipment. A derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1520 | 15000 |
#### TOTCSQFT
Description: Square footage of the housing unit that is cooled by air\-conditioning equipment or evaporative cooler, a derived variable rounded to the nearest 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 1200 | 14600 |
C.5 SPACE HEATING
-----------------
#### SpaceHeatingUsed
Description: Space heating equipment used
Question text: Is your home heated during the winter?
| SpaceHeatingUsed | n | Unweighted Freq |
| --- | --- | --- |
| FALSE | 751 | 0\.041 |
| TRUE | 17745 | 0\.959 |
| Total | 18496 | 1\.000 |
#### SpaceHeatingUsed
Description: Space heating equipment used
Question text: Is your home heated during the winter?
| SpaceHeatingUsed | n | Unweighted Freq |
| --- | --- | --- |
| FALSE | 751 | 0\.041 |
| TRUE | 17745 | 0\.959 |
| Total | 18496 | 1\.000 |
C.6 AIR CONDITIONING
--------------------
#### ACUsed
Description: Air conditioning equipment used
Question text: Is any air conditioning equipment used in your home?
| ACUsed | n | Unweighted Freq |
| --- | --- | --- |
| FALSE | 2325 | 0\.126 |
| TRUE | 16171 | 0\.874 |
| Total | 18496 | 1\.000 |
#### ACUsed
Description: Air conditioning equipment used
Question text: Is any air conditioning equipment used in your home?
| ACUsed | n | Unweighted Freq |
| --- | --- | --- |
| FALSE | 2325 | 0\.126 |
| TRUE | 16171 | 0\.874 |
| Total | 18496 | 1\.000 |
C.7 THERMOSTAT
--------------
#### HeatingBehavior
Description: Winter temperature control method
Question text: Which of the following best describes how your household controls the indoor temperature during the winter?
| HeatingBehavior | n | Unweighted Freq |
| --- | --- | --- |
| Set one temp and leave it | 7806 | 0\.422 |
| Manually adjust at night/no one home | 4654 | 0\.252 |
| Programmable or smart thermostat automatically adjusts the temperature | 3310 | 0\.179 |
| Turn on or off as needed | 1491 | 0\.081 |
| No control | 438 | 0\.024 |
| Other | 46 | 0\.002 |
| NA | 751 | 0\.041 |
| Total | 18496 | 1\.000 |
#### WinterTempDay
Description: Winter thermostat setting or temperature in home when someone is home during the day
Question text: During the winter, what is your home’s typical indoor temperature when someone is home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 70 | 90 |
#### WinterTempAway
Description: Winter thermostat setting or temperature in home when no one is home during the day
Question text: During the winter, what is your home’s typical indoor temperature when no one is inside your home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 68 | 90 |
#### WinterTempNight
Description: Winter thermostat setting or temperature in home at night
Question text: During the winter, what is your home’s typical indoor temperature inside your home at night?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 68 | 90 |
#### ACBehavior
Description: Summer temperature control method
Question text: Which of the following best describes how your household controls the indoor temperature during the summer?
| ACBehavior | n | Unweighted Freq |
| --- | --- | --- |
| Set one temp and leave it | 6738 | 0\.364 |
| Manually adjust at night/no one home | 3637 | 0\.197 |
| Programmable or smart thermostat automatically adjusts the temperature | 2638 | 0\.143 |
| Turn on or off as needed | 2746 | 0\.148 |
| No control | 409 | 0\.022 |
| Other | 3 | 0\.000 |
| NA | 2325 | 0\.126 |
| Total | 18496 | 1\.000 |
#### SummerTempDay
Description: Summer thermostat setting or temperature in home when someone is home during the day
Question text: During the summer, what is your home’s typical indoor temperature when someone is home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 72 | 90 |
#### SummerTempAway
Description: Summer thermostat setting or temperature in home when no one is home during the day
Question text: During the summer, what is your home’s typical indoor temperature when no one is inside your home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 74 | 90 |
#### SummerTempNight
Description: Summer thermostat setting or temperature in home at night
Question text: During the summer, what is your home’s typical indoor temperature inside your home at night?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 72 | 90 |
#### HeatingBehavior
Description: Winter temperature control method
Question text: Which of the following best describes how your household controls the indoor temperature during the winter?
| HeatingBehavior | n | Unweighted Freq |
| --- | --- | --- |
| Set one temp and leave it | 7806 | 0\.422 |
| Manually adjust at night/no one home | 4654 | 0\.252 |
| Programmable or smart thermostat automatically adjusts the temperature | 3310 | 0\.179 |
| Turn on or off as needed | 1491 | 0\.081 |
| No control | 438 | 0\.024 |
| Other | 46 | 0\.002 |
| NA | 751 | 0\.041 |
| Total | 18496 | 1\.000 |
#### WinterTempDay
Description: Winter thermostat setting or temperature in home when someone is home during the day
Question text: During the winter, what is your home’s typical indoor temperature when someone is home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 70 | 90 |
#### WinterTempAway
Description: Winter thermostat setting or temperature in home when no one is home during the day
Question text: During the winter, what is your home’s typical indoor temperature when no one is inside your home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 68 | 90 |
#### WinterTempNight
Description: Winter thermostat setting or temperature in home at night
Question text: During the winter, what is your home’s typical indoor temperature inside your home at night?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 751 | 50 | 68 | 90 |
#### ACBehavior
Description: Summer temperature control method
Question text: Which of the following best describes how your household controls the indoor temperature during the summer?
| ACBehavior | n | Unweighted Freq |
| --- | --- | --- |
| Set one temp and leave it | 6738 | 0\.364 |
| Manually adjust at night/no one home | 3637 | 0\.197 |
| Programmable or smart thermostat automatically adjusts the temperature | 2638 | 0\.143 |
| Turn on or off as needed | 2746 | 0\.148 |
| No control | 409 | 0\.022 |
| Other | 3 | 0\.000 |
| NA | 2325 | 0\.126 |
| Total | 18496 | 1\.000 |
#### SummerTempDay
Description: Summer thermostat setting or temperature in home when someone is home during the day
Question text: During the summer, what is your home’s typical indoor temperature when someone is home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 72 | 90 |
#### SummerTempAway
Description: Summer thermostat setting or temperature in home when no one is home during the day
Question text: During the summer, what is your home’s typical indoor temperature when no one is inside your home during the day?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 74 | 90 |
#### SummerTempNight
Description: Summer thermostat setting or temperature in home at night
Question text: During the summer, what is your home’s typical indoor temperature inside your home at night?
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 2325 | 50 | 72 | 90 |
C.8 WEIGHTS
-----------
#### NWEIGHT
Description: Final Analysis Weight
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 437\.9 | 6119 | 29279 |
#### NWEIGHT1
Description: Final Analysis Weight for replicate 1
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6136 | 30015 |
#### NWEIGHT2
Description: Final Analysis Weight for replicate 2
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29422 |
#### NWEIGHT3
Description: Final Analysis Weight for replicate 3
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29431 |
#### NWEIGHT4
Description: Final Analysis Weight for replicate 4
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29494 |
#### NWEIGHT5
Description: Final Analysis Weight for replicate 5
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6134 | 30039 |
#### NWEIGHT6
Description: Final Analysis Weight for replicate 6
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6147 | 29419 |
#### NWEIGHT7
Description: Final Analysis Weight for replicate 7
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6135 | 29586 |
#### NWEIGHT8
Description: Final Analysis Weight for replicate 8
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29499 |
#### NWEIGHT9
Description: Final Analysis Weight for replicate 9
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 29845 |
#### NWEIGHT10
Description: Final Analysis Weight for replicate 10
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6163 | 29635 |
#### NWEIGHT11
Description: Final Analysis Weight for replicate 11
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6140 | 29681 |
#### NWEIGHT12
Description: Final Analysis Weight for replicate 12
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6160 | 29849 |
#### NWEIGHT13
Description: Final Analysis Weight for replicate 13
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6142 | 29843 |
#### NWEIGHT14
Description: Final Analysis Weight for replicate 14
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6154 | 30184 |
#### NWEIGHT15
Description: Final Analysis Weight for replicate 15
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6145 | 29970 |
#### NWEIGHT16
Description: Final Analysis Weight for replicate 16
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6133 | 29825 |
#### NWEIGHT17
Description: Final Analysis Weight for replicate 17
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6126 | 30606 |
#### NWEIGHT18
Description: Final Analysis Weight for replicate 18
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6155 | 29689 |
#### NWEIGHT19
Description: Final Analysis Weight for replicate 19
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29336 |
#### NWEIGHT20
Description: Final Analysis Weight for replicate 20
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 30274 |
#### NWEIGHT21
Description: Final Analysis Weight for replicate 21
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6135 | 29766 |
#### NWEIGHT22
Description: Final Analysis Weight for replicate 22
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29791 |
#### NWEIGHT23
Description: Final Analysis Weight for replicate 23
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6148 | 30126 |
#### NWEIGHT24
Description: Final Analysis Weight for replicate 24
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6136 | 29946 |
#### NWEIGHT25
Description: Final Analysis Weight for replicate 25
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6150 | 30445 |
#### NWEIGHT26
Description: Final Analysis Weight for replicate 26
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6136 | 29893 |
#### NWEIGHT27
Description: Final Analysis Weight for replicate 27
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6125 | 30030 |
#### NWEIGHT28
Description: Final Analysis Weight for replicate 28
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29599 |
#### NWEIGHT29
Description: Final Analysis Weight for replicate 29
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6146 | 30136 |
#### NWEIGHT30
Description: Final Analysis Weight for replicate 30
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29895 |
#### NWEIGHT31
Description: Final Analysis Weight for replicate 31
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6144 | 29604 |
#### NWEIGHT32
Description: Final Analysis Weight for replicate 32
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6159 | 29310 |
#### NWEIGHT33
Description: Final Analysis Weight for replicate 33
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6148 | 29408 |
#### NWEIGHT34
Description: Final Analysis Weight for replicate 34
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 29564 |
#### NWEIGHT35
Description: Final Analysis Weight for replicate 35
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6141 | 30437 |
#### NWEIGHT36
Description: Final Analysis Weight for replicate 36
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 27896 |
#### NWEIGHT37
Description: Final Analysis Weight for replicate 37
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6133 | 30596 |
#### NWEIGHT38
Description: Final Analysis Weight for replicate 38
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 30130 |
#### NWEIGHT39
Description: Final Analysis Weight for replicate 39
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6147 | 29262 |
#### NWEIGHT40
Description: Final Analysis Weight for replicate 40
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6144 | 30344 |
#### NWEIGHT41
Description: Final Analysis Weight for replicate 41
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29594 |
#### NWEIGHT42
Description: Final Analysis Weight for replicate 42
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6137 | 29938 |
#### NWEIGHT43
Description: Final Analysis Weight for replicate 43
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6157 | 29878 |
#### NWEIGHT44
Description: Final Analysis Weight for replicate 44
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6148 | 29896 |
#### NWEIGHT45
Description: Final Analysis Weight for replicate 45
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6149 | 29729 |
#### NWEIGHT46
Description: Final Analysis Weight for replicate 46
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6152 | 29103 |
#### NWEIGHT47
Description: Final Analysis Weight for replicate 47
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6150 | 30070 |
#### NWEIGHT48
Description: Final Analysis Weight for replicate 48
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6139 | 29343 |
#### NWEIGHT49
Description: Final Analysis Weight for replicate 49
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6146 | 29590 |
#### NWEIGHT50
Description: Final Analysis Weight for replicate 50
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6159 | 30027 |
#### NWEIGHT51
Description: Final Analysis Weight for replicate 51
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6150 | 29247 |
#### NWEIGHT52
Description: Final Analysis Weight for replicate 52
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6154 | 29445 |
#### NWEIGHT53
Description: Final Analysis Weight for replicate 53
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6156 | 30131 |
#### NWEIGHT54
Description: Final Analysis Weight for replicate 54
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6151 | 29439 |
#### NWEIGHT55
Description: Final Analysis Weight for replicate 55
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6143 | 29216 |
#### NWEIGHT56
Description: Final Analysis Weight for replicate 56
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6153 | 29203 |
#### NWEIGHT57
Description: Final Analysis Weight for replicate 57
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6138 | 29819 |
#### NWEIGHT58
Description: Final Analysis Weight for replicate 58
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6137 | 29818 |
#### NWEIGHT59
Description: Final Analysis Weight for replicate 59
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6144 | 29606 |
#### NWEIGHT60
Description: Final Analysis Weight for replicate 60
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 6140 | 29818 |
C.9 CONSUMPTION AND EXPENDITURE
-------------------------------
#### BTUEL
Description: Total electricity use, in thousand Btu, 2020, including self\-generation of solar power
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 143\.3 | 31890 | 628155 |
#### DOLLAREL
Description: Total electricity cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | \-889\.5 | 1258 | 15680 |
#### BTUNG
Description: Total natural gas use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 22012 | 1134709 |
#### DOLLARNG
Description: Total natural gas cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 313\.9 | 8155 |
#### BTULP
Description: Total propane use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 364215 |
#### DOLLARLP
Description: Total propane cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 6621 |
#### BTUFO
Description: Total fuel oil/kerosene use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 426268 |
#### DOLLARFO
Description: Total fuel oil/kerosene cost, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 7004 |
#### BTUWOOD
Description: Total wood use, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 500000 |
#### TOTALBTU
Description: Total usage including electricity, natural gas, propane, and fuel oil, in thousand Btu, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | 1182 | 74180 | 1367548 |
#### TOTALDOL
Description: Total cost including electricity, natural gas, propane, and fuel oil, in dollars, 2020
| N Missing | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| 0 | \-150\.5 | 1793 | 20043 |
D Exercise solutions
====================
The chapter exercises use the survey design objects and packages provided in the Prerequisites box in the beginning of the chapter. Please ensure they are loaded in the environment before running the exercise solutions. Code chunks to load these are also included below.
```
library(tidyverse)
library(survey)
library(srvyr)
library(srvyrexploR)
library(broom)
library(prettyunits)
library(gt)
```
```
targetpop <- 231592693 # target population total used to rescale the weights
anes_adjwgt <- anes_2020 %>%
  mutate(Weight = Weight / sum(Weight) * targetpop) # weights now sum to the target population
anes_des <- anes_adjwgt %>%
as_survey_design(
weights = Weight,
strata = Stratum,
ids = VarUnit,
nest = TRUE
)
```
```
recs_des <- recs_2020 %>%
  as_survey_rep(
    weights = NWEIGHT, # final analysis weight
    repweights = NWEIGHT1:NWEIGHT60, # 60 jackknife replicate weights
    type = "JK1",
    scale = 59 / 60, # JK1 variance scale factor: (R - 1) / R with R = 60 replicates
    mse = TRUE
  )
```
```
inc_series <- ncvs_2021_incident %>%
mutate(
series = case_when(V4017 %in% c(1, 8) ~ 1,
V4018 %in% c(2, 8) ~ 1,
V4019 %in% c(1, 8) ~ 1,
TRUE ~ 2
),
n10v4016 = case_when(V4016 %in% c(997, 998) ~ NA_real_,
V4016 > 10 ~ 10,
TRUE ~ V4016),
serieswgt = case_when(series == 2 & is.na(n10v4016) ~ 6,
series == 2 ~ n10v4016,
TRUE ~ 1),
NEWWGT = WGTVICCY * serieswgt
)
inc_ind <- inc_series %>%
filter(V4022 != 1) %>%
mutate(
WeapCat = case_when(
is.na(V4049) ~ NA_character_,
V4049 == 2 ~ "NoWeap",
V4049 == 3 ~ "UnkWeapUse",
V4050 == 3 ~ "Other",
V4051 == 1 | V4052 == 1 | V4050 == 7 ~ "Firearm",
V4053 == 1 | V4054 == 1 ~ "Knife",
TRUE ~ "Other"
),
V4529_num = parse_number(as.character(V4529)),
ReportPolice = V4399 == 1,
Property = V4529_num >= 31,
Violent = V4529_num <= 20,
Property_ReportPolice = Property & ReportPolice,
Violent_ReportPolice = Violent & ReportPolice,
AAST = V4529_num %in% 11:13,
AAST_NoWeap = AAST & WeapCat == "NoWeap",
AAST_Firearm = AAST & WeapCat == "Firearm",
AAST_Knife = AAST & WeapCat == "Knife",
AAST_Other = AAST & WeapCat == "Other"
)
inc_hh_sums <-
inc_ind %>%
filter(V4529_num > 23) %>% # restrict to household crimes
group_by(YEARQ, IDHH) %>%
summarize(WGTVICCY = WGTVICCY[1],
across(starts_with("Property"),
~ sum(. * serieswgt),
.names = "{.col}"),
.groups = "drop")
inc_pers_sums <-
inc_ind %>%
filter(V4529_num <= 23) %>% # restrict to person crimes
group_by(YEARQ, IDHH, IDPER) %>%
summarize(WGTVICCY = WGTVICCY[1],
across(c(starts_with("Violent"), starts_with("AAST")),
~ sum(. * serieswgt),
.names = "{.col}"),
.groups = "drop")
hh_z_list <- rep(0, ncol(inc_hh_sums) - 3) %>% as.list() %>%
setNames(names(inc_hh_sums)[-(1:3)])
pers_z_list <- rep(0, ncol(inc_pers_sums) - 4) %>% as.list() %>%
setNames(names(inc_pers_sums)[-(1:4)])
hh_vsum <- ncvs_2021_household %>%
full_join(inc_hh_sums, by = c("YEARQ", "IDHH")) %>%
replace_na(hh_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTHHCY))
pers_vsum <- ncvs_2021_person %>%
full_join(inc_pers_sums, by = c("YEARQ", "IDHH", "IDPER")) %>%
replace_na(pers_z_list) %>%
mutate(ADJINC_WT = if_else(is.na(WGTVICCY), 0, WGTVICCY / WGTPERCY))
hh_vsum_der <- hh_vsum %>%
mutate(
Tenure = factor(case_when(V2015 == 1 ~ "Owned",
!is.na(V2015) ~ "Rented"),
levels = c("Owned", "Rented")),
Urbanicity = factor(case_when(V2143 == 1 ~ "Urban",
V2143 == 2 ~ "Suburban",
V2143 == 3 ~ "Rural"),
levels = c("Urban", "Suburban", "Rural")),
SC214A_num = as.numeric(as.character(SC214A)),
Income = case_when(SC214A_num <= 8 ~ "Less than $25,000",
SC214A_num <= 12 ~ "$25,000--49,999",
SC214A_num <= 15 ~ "$50,000--99,999",
SC214A_num <= 17 ~ "$100,000--199,999",
SC214A_num <= 18 ~ "$200,000 or more"),
Income = fct_reorder(Income, SC214A_num, .na_rm = FALSE),
PlaceSize = case_match(as.numeric(as.character(V2126B)),
0 ~ "Not in a place",
13 ~ "Population under 10,000",
16 ~ "10,000--49,999",
17 ~ "50,000--99,999",
18 ~ "100,000--249,999",
19 ~ "250,000--499,999",
20 ~ "500,000--999,999",
c(21, 22, 23) ~ "1,000,000 or more"),
PlaceSize = fct_reorder(PlaceSize, as.numeric(V2126B)),
Region = case_match(as.numeric(V2127B),
1 ~ "Northeast",
2 ~ "Midwest",
3 ~ "South",
4 ~ "West"),
Region = fct_reorder(Region, as.numeric(V2127B))
)
NHOPI <- "Native Hawaiian or Other Pacific Islander"
pers_vsum_der <- pers_vsum %>%
mutate(
Sex = factor(case_when(V3018 == 1 ~ "Male",
V3018 == 2 ~ "Female")),
RaceHispOrigin = factor(case_when(V3024 == 1 ~ "Hispanic",
V3023A == 1 ~ "White",
V3023A == 2 ~ "Black",
V3023A == 4 ~ "Asian",
V3023A == 5 ~ NHOPI,
TRUE ~ "Other"),
levels = c("White", "Black", "Hispanic",
"Asian", NHOPI, "Other")),
V3014_num = as.numeric(as.character(V3014)),
AgeGroup = case_when(V3014_num <= 17 ~ "12--17",
V3014_num <= 24 ~ "18--24",
V3014_num <= 34 ~ "25--34",
V3014_num <= 49 ~ "35--49",
V3014_num <= 64 ~ "50--64",
V3014_num <= 90 ~ "65 or older"),
AgeGroup = fct_reorder(AgeGroup, V3014_num),
MaritalStatus = factor(case_when(V3015 == 1 ~ "Married",
V3015 == 2 ~ "Widowed",
V3015 == 3 ~ "Divorced",
V3015 == 4 ~ "Separated",
V3015 == 5 ~ "Never married"),
levels = c("Never married", "Married",
"Widowed","Divorced",
"Separated"))
) %>%
left_join(hh_vsum_der %>% select(YEARQ, IDHH,
V2117, V2118, Tenure:Region),
by = c("YEARQ", "IDHH"))
hh_vsum_slim <- hh_vsum_der %>%
select(YEARQ:V2118,
WGTVICCY:ADJINC_WT,
Tenure,
Urbanicity,
Income,
PlaceSize,
Region)
pers_vsum_slim <- pers_vsum_der %>%
select(YEARQ:WGTPERCY, WGTVICCY:ADJINC_WT, Sex:Region)
dummy_records <- hh_vsum_slim %>%
distinct(V2117, V2118) %>%
mutate(Dummy = 1,
WGTVICCY = 1,
NEWWGT = 1)
inc_analysis <- inc_ind %>%
mutate(Dummy = 0) %>%
left_join(select(pers_vsum_slim, YEARQ, IDHH, IDPER, Sex:Region),
by = c("YEARQ", "IDHH", "IDPER")) %>%
bind_rows(dummy_records) %>%
select(YEARQ:IDPER,
WGTVICCY,
NEWWGT,
V4529,
WeapCat,
ReportPolice,
Property:Region)
inc_des <- inc_analysis %>%
as_survey_design(
weight = NEWWGT,
strata = V2117,
ids = V2118,
nest = TRUE
)
hh_des <- hh_vsum_slim %>%
as_survey_design(
weight = WGTHHCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
pers_des <- pers_vsum_slim %>%
as_survey_design(
weight = WGTPERCY,
strata = V2117,
ids = V2118,
nest = TRUE
)
```
5 \- Descriptive analysis
-------------------------
1. How many females have a graduate degree? Hint: The variables `Gender` and `Education` will be useful.
```
# Option 1:
femgd_option1 <- anes_des %>%
filter(Gender == "Female", Education == "Graduate") %>%
survey_count(name = "n")
femgd_option1
```
```
## # A tibble: 1 × 2
## n n_se
## <dbl> <dbl>
## 1 15072196. 837872.
```
```
# Option 2:
femgd_option2 <- anes_des %>%
filter(Gender == "Female", Education == "Graduate") %>%
summarize(N = survey_total(), .groups = "drop")
femgd_option2
```
```
## # A tibble: 1 × 2
## N N_se
## <dbl> <dbl>
## 1 15072196. 837872.
```
Answer: 15,072,196
2. What percentage of people identify as “Strong Democrat”? Hint: The variable `PartyID` indicates someone’s party affiliation.
```
psd <- anes_des %>%
group_by(PartyID) %>%
summarize(p = survey_mean()) %>%
filter(PartyID == "Strong democrat")
psd
```
```
## # A tibble: 1 × 3
## PartyID p p_se
## <fct> <dbl> <dbl>
## 1 Strong democrat 0.219 0.00646
```
Answer: 21\.9%
3. What percentage of people who voted in the 2020 election identify as “Strong Republican”? Hint: The variable `VotedPres2020` indicates whether someone voted in 2020\.
```
psr <- anes_des %>%
filter(VotedPres2020 == "Yes") %>%
group_by(PartyID) %>%
summarize(p = survey_mean()) %>%
filter(PartyID == "Strong republican")
psr
```
```
## # A tibble: 1 × 3
## PartyID p p_se
## <fct> <dbl> <dbl>
## 1 Strong republican 0.228 0.00824
```
Answer: 22\.8%
4. What percentage of people voted in both the 2016 election and the 2020 election? Include the logit confidence interval. Hint: The variable `VotedPres2016` indicates whether someone voted in 2016\.
```
pvb <- anes_des %>%
filter(!is.na(VotedPres2016), !is.na(VotedPres2020)) %>%
group_by(interact(VotedPres2016, VotedPres2020)) %>%
summarize(p = survey_prop(var = "ci", method = "logit"), ) %>%
filter(VotedPres2016 == "Yes", VotedPres2020 == "Yes")
pvb
```
```
## # A tibble: 1 × 5
## VotedPres2016 VotedPres2020 p p_low p_upp
## <fct> <fct> <dbl> <dbl> <dbl>
## 1 Yes Yes 0.794 0.777 0.810
```
Answer: 79.4%, with a confidence interval of (77.7%, 81.0%)
5. What is the design effect for the proportion of people who voted early? Hint: The variable `EarlyVote2020` indicates whether someone voted early in 2020\.
```
pdeff <- anes_des %>%
filter(!is.na(EarlyVote2020)) %>%
group_by(EarlyVote2020) %>%
summarize(p = survey_mean(deff = TRUE)) %>%
filter(EarlyVote2020 == "Yes")
pdeff
```
```
## # A tibble: 1 × 4
## EarlyVote2020 p p_se p_deff
## <fct> <dbl> <dbl> <dbl>
## 1 Yes 0.726 0.0247 1.50
```
Answer: 1.5, meaning the variance of this estimated proportion is about 1.5 times as large as it would be under simple random sampling of the same size.
6. What is the median temperature people set their thermostats to at night during the winter? Hint: The variable `WinterTempNight` indicates the temperature that people set their thermostat to in the winter at night.
```
med_wintertempnight <- recs_des %>%
summarize(wtn_med = survey_median(
x = WinterTempNight,
na.rm = TRUE
))
med_wintertempnight
```
```
## # A tibble: 1 × 2
## wtn_med wtn_med_se
## <dbl> <dbl>
## 1 68 0.250
```
Answer: 68
7. People sometimes set their temperature differently over different seasons and during the day. What median temperatures do people set their thermostat to in the summer and winter, both during the day and at night? Include confidence intervals. Hint: Use the variables `WinterTempDay`, `WinterTempNight`, `SummerTempDay`, and `SummerTempNight`.
```
# Option 1
med_temps <- recs_des %>%
summarize(
across(c(WinterTempDay, WinterTempNight, SummerTempDay, SummerTempNight), ~ survey_median(.x, na.rm = TRUE))
)
med_temps
```
```
## # A tibble: 1 × 8
## WinterTempDay WinterTempDay_se WinterTempNight WinterTempNight_se
## <dbl> <dbl> <dbl> <dbl>
## 1 70 0.250 68 0.250
## # ℹ 4 more variables: SummerTempDay <dbl>, SummerTempDay_se <dbl>,
## # SummerTempNight <dbl>, SummerTempNight_se <dbl>
```
```
# Alternatively, could use `survey_quantile()` as shown below for WinterTempNight:
quant_temps <- recs_des %>%
summarize(
across(c(WinterTempDay, WinterTempNight, SummerTempDay, SummerTempNight), ~ survey_quantile(.x, quantiles = 0.5, na.rm = TRUE))
)
quant_temps
```
```
## # A tibble: 1 × 8
## WinterTempDay_q50 WinterTempDay_q50_se WinterTempNight_q50
## <dbl> <dbl> <dbl>
## 1 70 0.250 68
## # ℹ 5 more variables: WinterTempNight_q50_se <dbl>,
## # SummerTempDay_q50 <dbl>, SummerTempDay_q50_se <dbl>,
## # SummerTempNight_q50 <dbl>, SummerTempNight_q50_se <dbl>
```
Answer:
\- Winter during the day: 70
\- Winter during the night: 68
\- Summer during the day: 72
\- Summer during the night: 72
8. What is the correlation between the temperature that people set their temperature at during the night and during the day in the summer?
```
corr_summer_temp <- recs_des %>%
summarize(summer_corr = survey_corr(SummerTempNight, SummerTempDay,
na.rm = TRUE
))
corr_summer_temp
```
```
## # A tibble: 1 × 2
## summer_corr summer_corr_se
## <dbl> <dbl>
## 1 0.806 0.00806
```
Answer: 0\.806
9. What is the 1st, 2nd, and 3rd quartile of the amount of money spent on energy by Building America (BA) climate zone? Hint: `TOTALDOL` indicates the total amount spent on all fuel, and `ClimateRegion_BA` indicates the BA climate zones.
```
quant_baenergyexp <- recs_des %>%
group_by(ClimateRegion_BA) %>%
summarize(dol_quant = survey_quantile(
TOTALDOL,
quantiles = c(0.25, 0.5, 0.75),
vartype = "se",
na.rm = TRUE
))
quant_baenergyexp
```
```
## # A tibble: 8 × 7
## ClimateRegion_BA dol_quant_q25 dol_quant_q50 dol_quant_q75
## <fct> <dbl> <dbl> <dbl>
## 1 Mixed-Dry 1091. 1541. 2139.
## 2 Mixed-Humid 1317. 1840. 2462.
## 3 Hot-Humid 1094. 1622. 2233.
## 4 Hot-Dry 926. 1513. 2223.
## 5 Very-Cold 1195. 1986. 2955.
## 6 Cold 1213. 1756. 2422.
## 7 Marine 938. 1380. 1987.
## 8 Subarctic 2404. 3535. 5219.
## # ℹ 3 more variables: dol_quant_q25_se <dbl>, dol_quant_q50_se <dbl>,
## # dol_quant_q75_se <dbl>
```
Answer:
| Quartile summary of energy expenditure by BA Climate Zone | | | |
| --- | --- | --- | --- |
| | Q1 | Q2 | Q3 |
| Mixed\-Dry | $1,091 | $1,541 | $2,139 |
| Mixed\-Humid | $1,317 | $1,840 | $2,462 |
| Hot\-Humid | $1,094 | $1,622 | $2,233 |
| Hot\-Dry | $926 | $1,513 | $2,223 |
| Very\-Cold | $1,195 | $1,986 | $2,955 |
| Cold | $1,213 | $1,756 | $2,422 |
| Marine | $938 | $1,380 | $1,987 |
| Subarctic | $2,404 | $3,535 | $5,219 |
6 \- Statistical testing
------------------------
1. Using the RECS data, do more than 50% of U.S. households use A/C (`ACUsed`)?
```
ttest_solution1 <- recs_des %>%
svyttest(
design = .,
formula = ((ACUsed == TRUE) - 0.5) ~ 0,
na.rm = TRUE,
alternative = "greater"
) %>%
tidy()
ttest_solution1
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 0.387 126. 1.73e-72 58 0.380 0.393 Design-based…
## # ℹ 1 more variable: alternative <chr>
```
Answer: The t-test estimates the mean of (ACUsed - 0.5) as 0.387, so an estimated 0.387 + 0.5 = 88.7% of households use air conditioning. This is significantly greater than 50% (p < 0.0001), so there is strong evidence that more than half of U.S. households use air conditioning.
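As a cross-check (a minimal sketch, not part of the original solution), the proportion itself can be estimated directly with `survey_mean()` on the `recs_des` object defined above; it should reproduce the 88.7% implied by the t-test estimate.
```
# Direct design-based estimate of the proportion of households using A/C
recs_des %>%
  summarize(p_ac = survey_mean(ACUsed == TRUE, na.rm = TRUE))
```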
2. Using the RECS data, does the average temperature that U.S. households set their thermostats to differ between the day and night in the winter (`WinterTempDay` and `WinterTempNight`)?
```
ttest_solution2 <- recs_des %>%
svyttest(
design = .,
formula = WinterTempDay - WinterTempNight ~ 0,
na.rm = TRUE
) %>%
tidy()
ttest_solution2
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 1.67 45.9 2.82e-47 58 1.59 1.74 Design-based…
## # ℹ 1 more variable: alternative <chr>
```
Answer: The average difference between daytime and nighttime thermostat settings in the winter is estimated at 1.67°F, which is significantly different from 0 (p < 0.0001). There is strong evidence that households set their thermostats to different temperatures during the day and at night in the winter.
3. Using the ANES data, does the average age (`Age`) of those who voted for Joseph Biden in 2020 (`VotedPres2020_selection`) differ from those who voted for another candidate?
```
ttest_solution3 <- anes_des %>%
filter(!is.na(VotedPres2020_selection)) %>%
svyttest(
design = .,
formula = Age ~ VotedPres2020_selection == "Biden",
na.rm = TRUE
) %>%
tidy()
ttest_solution3
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 -3.60 -5.97 0.000000244 50 -4.81 -2.39 Design-ba…
## # ℹ 1 more variable: alternative <chr>
```
Answer: On average, those who voted for Joseph Biden in 2020 were 3.6 years younger than those who voted for another candidate, and this difference is statistically significant (p < 0.0001).
4. If we wanted to determine if the political party affiliation differed for males and females, what test would we use?
1. Goodness\-of\-fit test (`svygofchisq()`)
2. Test of independence (`svychisq()`)
3. Test of homogeneity (`svychisq()`)
Answer: c. Test of homogeneity (`svychisq()`)
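For reference, here is a minimal sketch (not part of the original exercise) of how this test of homogeneity could be run with the `anes_des` object defined above, using the `PartyID` and `Gender` variables from these solutions:
```
# Sketch: does the distribution of party affiliation differ between males and females?
party_gender_test <- anes_des %>%
  svychisq(
    formula = ~ PartyID + Gender,
    design = .,
    statistic = "F",
    na.rm = TRUE
  ) %>%
  tidy()
party_gender_test
```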
5. In the RECS data, is there a relationship between the type of housing unit (`HousingUnitType`) and the year the house was built (`YearMade`)?
```
chisq_solution2 <- recs_des %>%
svychisq(
formula = ~ HousingUnitType + YearMade,
design = .,
statistic = "Wald",
na.rm = TRUE
)
chisq_solution2 %>% tidy()
```
```
## Multiple parameters; naming those columns ndf, ddf
```
```
## # A tibble: 1 × 5
## ndf ddf statistic p.value method
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 32 59 67.9 5.54e-36 Design-based Wald test of association
```
Answer: There is strong evidence (p\<0\.0001\) that there is a relationship between type of housing unit and the year the house was built.
6. In the ANES data, is there a difference in the distribution of gender (`Gender`) across early voting status in 2020 (`EarlyVote2020`)?
```
chisq_solution3 <- anes_des %>%
svychisq(
formula = ~ Gender + EarlyVote2020,
design = .,
statistic = "F",
na.rm = TRUE
) %>%
tidy()
```
```
## Multiple parameters; naming those columns ndf, ddf
```
```
chisq_solution3
```
```
## # A tibble: 1 × 5
## ndf ddf statistic p.value method
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 1 51 4.53 0.0381 Pearson's X^2: Rao & Scott adjustment
```
Answer: There is evidence of a difference in the distribution of gender across early voting status (p = 0.0381).
7 \- Modeling
-------------
1. The type of housing unit may have an impact on energy expenses. Is there any relationship between housing unit type (`HousingUnitType`) and total energy expenditure (`TOTALDOL`)? First, find the average energy expenditure by housing unit type as a descriptive analysis and then do the test. The reference level in the comparison should be the housing unit type that is most common.
```
expense_by_hut <- recs_des %>%
group_by(HousingUnitType) %>%
summarize(
Expense = survey_mean(TOTALDOL, na.rm = TRUE),
HUs = survey_total()
) %>%
arrange(desc(HUs))
expense_by_hut
```
```
## # A tibble: 5 × 5
## HousingUnitType Expense Expense_se HUs HUs_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Single-family detached 2205. 9.36 77067692. 0.00000277
## 2 Apartment: 5 or more units 1108. 13.7 22835862. 0.000000226
## 3 Apartment: 2-4 Units 1407. 24.2 9341795. 0.119
## 4 Single-family attached 1653. 22.3 7451177. 0.114
## 5 Mobile home 1773. 26.2 6832499. 0.0000000927
```
```
exp_unit_out <- recs_des %>%
mutate(HousingUnitType = fct_infreq(HousingUnitType, NWEIGHT)) %>%
svyglm(
design = .,
formula = TOTALDOL ~ HousingUnitType,
na.action = na.omit
)
tidy(exp_unit_out)
```
```
## # A tibble: 5 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 2205. 9.36 236. 2.53e-84
## 2 HousingUnitTypeApartment: 5 or … -1097. 16.5 -66.3 3.52e-54
## 3 HousingUnitTypeApartment: 2-4 U… -798. 28.0 -28.5 1.37e-34
## 4 HousingUnitTypeSingle-family at… -551. 25.0 -22.1 5.28e-29
## 5 HousingUnitTypeMobile home -431. 27.4 -15.7 5.36e-22
```
Answer: The reference level should be Single-family detached, the most common housing unit type. All p-values are very small, indicating a significant relationship between housing unit type and total energy expenditure.
2. Does temperature play a role in electricity expenditure? Cooling degree days are a measure of how hot a place is. CDD65 for a given day indicates the number of degrees Fahrenheit warmer than 65°F (18.3°C) it is in a location: on a day that averages 65°F or below, CDD65 = 0, while a day that averages 85°F (29.4°C) has CDD65 = 20 because it is 20 degrees Fahrenheit warmer ([U.S. Energy Information Administration 2023d](#ref-eia-cdd)). These daily values are summed over the year to give an indicator of how hot a place is throughout the year. Similarly, HDD65 measures how much colder than 65°F the days are. Can energy expenditure be predicted using these temperature indicators along with square footage? Is there a significant relationship? Include main effects and two-way interactions.
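To make the degree-day arithmetic above concrete, here is a minimal sketch using a hypothetical vector of daily average temperatures (`avg_temp_f` is illustrative only, not a RECS variable); the solution below then uses the annual CDD65 and HDD65 totals provided in the data.
```
# Hypothetical daily average temperatures in degrees Fahrenheit
avg_temp_f <- c(60, 65, 72, 85)
# Cooling degree days: degrees above 65 F, floored at 0
cdd65 <- pmax(avg_temp_f - 65, 0) # 0 0 7 20
# Heating degree days: degrees below 65 F, floored at 0
hdd65 <- pmax(65 - avg_temp_f, 0) # 5 0 0 0
# Annual CDD65/HDD65 are the sums of these daily values over the year
sum(cdd65)
sum(hdd65)
```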
```
temps_sqft_exp <- recs_des %>%
svyglm(
design = .,
formula = DOLLAREL ~ (TOTSQFT_EN + CDD65 + HDD65)^2,
na.action = na.omit
)
tidy(temps_sqft_exp) %>%
mutate(p.value = pretty_p_value(p.value) %>% str_pad(7))
```
```
## # A tibble: 7 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <chr>
## 1 (Intercept) 741. 70.5 10.5 "<0.0001"
## 2 TOTSQFT_EN 0.272 0.0471 5.77 "<0.0001"
## 3 CDD65 0.0293 0.0227 1.29 " 0.2024"
## 4 HDD65 -0.00111 0.0104 -0.107 " 0.9149"
## 5 TOTSQFT_EN:CDD65 0.0000459 0.0000154 2.97 " 0.0044"
## 6 TOTSQFT_EN:HDD65 -0.00000840 0.00000633 -1.33 " 0.1902"
## 7 CDD65:HDD65 0.00000533 0.00000355 1.50 " 0.1390"
```
Answer: There is a significant interaction between square footage and cooling degree days, and square footage is a significant predictor of electricity expenditure.
3. Continuing with our results from Exercise 2, create a plot between the actual and predicted expenditures and a residual plot for the predicted expenditures.
Answer:
```
temps_sqft_exp_fit <- temps_sqft_exp %>%
augment() %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
# extract the variance of the fitted value
.fitted = as.numeric(.fitted)
)
```
```
temps_sqft_exp_fit %>%
ggplot(aes(x = DOLLAREL, y = .fitted)) +
geom_point() +
geom_abline(
intercept = 0,
slope = 1,
color = "red"
) +
xlab("Actual expenditures") +
ylab("Predicted expenditures") +
theme_minimal()
```
FIGURE D.1: Actual and predicted electricity expenditures
```
temps_sqft_exp_fit %>%
ggplot(aes(x = .fitted, y = .resid)) +
geom_point() +
geom_hline(yintercept = 0, color = "red") +
xlab("Predicted expenditure") +
ylab("Residual value of expenditure") +
theme_minimal()
```
FIGURE D.2: Residual plot of electric cost model with covariates TOTSQFT\_EN, CDD65, and HDD65
4. Early voting expanded in 2020 ([Sprunt 2020](#ref-npr-voting-trend)). Build a logistic model predicting early voting in 2020 (`EarlyVote2020`) using age (`Age`), education (`Education`), and party identification (`PartyID`). Include two\-way interactions.
Answer:
```
earlyvote_mod <- anes_des %>%
filter(!is.na(EarlyVote2020)) %>%
svyglm(
design = .,
formula = EarlyVote2020 ~ (Age + Education + PartyID)^2,
family = quasibinomial
)
tidy(earlyvote_mod) %>% print(n = 50)
```
```
## # A tibble: 46 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3.28e-1 3.86 0.0848 0.940
## 2 Age -2.20e-2 0.0579 -0.379 0.741
## 3 EducationHigh school -2.56e+0 3.89 -0.658 0.578
## 4 EducationPost HS -3.27e+0 3.97 -0.823 0.497
## 5 EducationBachelor's -3.29e+0 3.91 -0.842 0.489
## 6 EducationGraduate -1.36e+0 3.91 -0.349 0.761
## 7 PartyIDNot very strong democrat 2.00e+0 3.30 0.605 0.607
## 8 PartyIDIndependent-democrat 3.38e+0 2.60 1.30 0.323
## 9 PartyIDIndependent 5.22e+0 2.25 2.32 0.146
## 10 PartyIDIndependent-republican -1.95e+1 2.42 -8.09 0.0149
## 11 PartyIDNot very strong republic… -1.33e+1 3.24 -4.10 0.0546
## 12 PartyIDStrong republican 3.13e+0 2.18 1.44 0.287
## 13 Age:EducationHigh school 4.72e-2 0.0592 0.796 0.509
## 14 Age:EducationPost HS 5.25e-2 0.0588 0.892 0.467
## 15 Age:EducationBachelor's 4.76e-2 0.0600 0.793 0.511
## 16 Age:EducationGraduate 8.65e-3 0.0578 0.150 0.895
## 17 Age:PartyIDNot very strong demo… -2.28e-2 0.0497 -0.459 0.691
## 18 Age:PartyIDIndependent-democrat -7.03e-2 0.0285 -2.46 0.133
## 19 Age:PartyIDIndependent -8.00e-2 0.0302 -2.65 0.118
## 20 Age:PartyIDIndependent-republic… 6.72e-2 0.0378 1.78 0.217
## 21 Age:PartyIDNot very strong repu… -3.07e-2 0.0420 -0.732 0.540
## 22 Age:PartyIDStrong republican -3.84e-2 0.0180 -2.14 0.166
## 23 EducationHigh school:PartyIDNot… -1.24e+0 2.22 -0.557 0.633
## 24 EducationPost HS:PartyIDNot ver… -8.95e-1 2.16 -0.413 0.719
## 25 EducationBachelor's:PartyIDNot … -1.21e+0 2.29 -0.528 0.650
## 26 EducationGraduate:PartyIDNot ve… -1.90e+0 2.25 -0.844 0.487
## 27 EducationHigh school:PartyIDInd… 7.84e-1 2.50 0.314 0.783
## 28 EducationPost HS:PartyIDIndepen… 4.04e-1 2.31 0.175 0.877
## 29 EducationBachelor's:PartyIDInde… 5.00e-1 2.60 0.193 0.865
## 30 EducationGraduate:PartyIDIndepe… -1.48e+1 2.47 -5.99 0.0268
## 31 EducationHigh school:PartyIDInd… -6.32e-1 1.72 -0.368 0.748
## 32 EducationPost HS:PartyIDIndepen… -9.27e-2 1.63 -0.0568 0.960
## 33 EducationBachelor's:PartyIDInde… -2.62e-1 2.13 -0.123 0.913
## 34 EducationGraduate:PartyIDIndepe… -1.42e+1 1.75 -8.12 0.0148
## 35 EducationHigh school:PartyIDInd… 1.55e+1 2.56 6.05 0.0262
## 36 EducationPost HS:PartyIDIndepen… 1.48e+1 2.77 5.34 0.0333
## 37 EducationBachelor's:PartyIDInde… 1.77e+1 2.32 7.64 0.0167
## 38 EducationGraduate:PartyIDIndepe… 1.65e+1 2.33 7.10 0.0193
## 39 EducationHigh school:PartyIDNot… 1.59e+1 2.02 7.88 0.0157
## 40 EducationPost HS:PartyIDNot ver… 1.62e+1 1.69 9.54 0.0108
## 41 EducationBachelor's:PartyIDNot … 1.58e+1 1.93 8.18 0.0146
## 42 EducationGraduate:PartyIDNot ve… 1.54e+1 1.72 8.95 0.0123
## 43 EducationHigh school:PartyIDStr… -2.06e+0 1.88 -1.10 0.387
## 44 EducationPost HS:PartyIDStrong … 9.17e-2 2.01 0.0456 0.968
## 45 EducationBachelor's:PartyIDStro… 6.87e-2 2.06 0.0333 0.976
## 46 EducationGraduate:PartyIDStrong… -8.53e-1 1.81 -0.471 0.684
```
5. Continuing from Exercise 4, predict the probability of early voting for two people. Both are 28 years old and have a graduate degree; however, one person is a strong Democrat, and the other is a strong Republican.
```
add_vote_dat <- anes_2020 %>%
select(EarlyVote2020, Age, Education, PartyID) %>%
rbind(tibble(
EarlyVote2020 = NA,
Age = 28,
Education = "Graduate",
PartyID = c("Strong democrat", "Strong republican")
)) %>%
tail(2)
log_ex_2_out <- earlyvote_mod %>%
augment(newdata = add_vote_dat, type.predict = "response") %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
# extract the variance of the fitted value
.fitted = as.numeric(.fitted)
)
log_ex_2_out
```
```
## # A tibble: 2 × 6
## EarlyVote2020 Age Education PartyID .fitted .se.fit
## <fct> <dbl> <fct> <fct> <dbl> <dbl>
## 1 <NA> 28 Graduate Strong democrat 0.197 0.150
## 2 <NA> 28 Graduate Strong republican 0.450 0.244
```
Answer: We predict that the 28-year-old with a graduate degree who identifies as a strong Democrat will vote early 19.7% of the time, while an otherwise similar person who identifies as a strong Republican will vote early 45.0% of the time.
10 \- Specifying sample designs and replicate weights in {srvyr}
----------------------------------------------------------------
1. The National Health Interview Survey (NHIS) is an annual household survey conducted by the National Center for Health Statistics (NCHS). The NHIS includes a wide variety of health topics for adults including health status and conditions, functioning and disability, health care access and health service utilization, health\-related behaviors, health promotion, mental health, barriers to receiving care, and community engagement. Like many national in\-person surveys, the sampling design is a stratified clustered design with details included in the Survey Description ([National Center for Health Statistics 2023](#ref-nhis-svy-des)). The Survey Description provides information on setting up syntax in SUDAAN, Stata, SPSS, SAS, and R ({survey} package implementation). We have imported the data and the variable containing the data as: `nhis_adult_data`. How would we specify the design using either `as_survey_design()` or `as_survey_rep()`?
Answer:
```
nhis_adult_des <- nhis_adult_data %>%
as_survey_design(
ids = PPSU,
strata = PSTRAT,
nest = TRUE,
weights = WTFA_A
)
```
2. The General Social Survey (GSS) is a survey that has been administered since 1972 on social, behavioral, and attitudinal topics. The 2016\-2020 GSS Panel codebook provides examples of setting up syntax in SAS and Stata but not R ([Davern et al. 2021](#ref-gss-codebook)). We have imported the data and the variable containing the data as: `gss_data`. How would we specify the design in R using either `as_survey_design()` or `as_survey_rep()`?
Answer:
```
gss_des <- gss_data %>%
as_survey_design(
ids = VPSU_2,
strata = VSTRAT_2,
weights = WTSSNR_2
)
```
13 \- National Crime Victimization Survey Vignette
--------------------------------------------------
1. What proportion of completed motor vehicle thefts are not reported to the police? Hint: Use the codebook to look at the definition of Type of Crime (V4529\).
```
ans1 <- inc_des %>%
filter(str_detect(V4529, "40|41")) %>%
summarize(Pct = survey_mean(!ReportPolice, na.rm = TRUE) * 100)
```
Answer: It is estimated that 23\.1% of motor vehicle thefts are not reported to the police.
2. How many violent crimes occur in each region?
Answer:
```
inc_des %>%
filter(Violent) %>%
survey_count(Region) %>%
select(-n_se) %>%
gt(rowname_col = "Region") %>%
fmt_integer() %>%
cols_label(
n = "Violent victimizations",
) %>%
tab_header("Estimated number of violent crimes by region")
```
| Estimated number of violent crimes by region | |
| --- | --- |
| | Violent victimizations |
| Northeast | 698,406 |
| Midwest | 1,144,407 |
| South | 1,394,214 |
| West | 1,361,278 |
3. What is the property victimization rate among each income level?
Answer:
```
hh_des %>%
filter(!is.na(Income)) %>%
group_by(Income) %>%
summarize(Property_Rate = survey_mean(Property * ADJINC_WT * 1000,
na.rm = TRUE
)) %>%
gt(rowname_col = "Income") %>%
cols_label(
Property_Rate = "Rate",
Property_Rate_se = "Standard Error"
) %>%
fmt_number(decimals = 1) %>%
tab_header("Estimated property victimization rate by income level")
```
| Estimated property victimization rate by income level | | |
| --- | --- | --- |
| | Rate | Standard Error |
| Less than $25,000 | 110\.6 | 5\.0 |
| $25,000\-\-49,999 | 89\.5 | 3\.4 |
| $50,000\-\-99,999 | 87\.8 | 3\.3 |
| $100,000\-\-199,999 | 76\.5 | 3\.5 |
| $200,000 or more | 91\.8 | 5\.7 |
4. What is the difference between the violent victimization rate between males and females? Is it statistically different?
```
vr_gender <- pers_des %>%
group_by(Sex) %>%
summarize(
Violent_rate = survey_mean(Violent * ADJINC_WT * 1000, na.rm = TRUE)
)
vr_gender_test <- pers_des %>%
mutate(
Violent_Adj = Violent * ADJINC_WT * 1000
) %>%
svyttest(
formula = Violent_Adj ~ Sex,
design = .,
na.rm = TRUE
) %>%
broom::tidy()
```
```
## Warning in summary.glm(g): observations with zero weight not used for
## calculating dispersion
```
```
## Warning in summary.glm(glm.object): observations with zero weight not
## used for calculating dispersion
```
Answer: The difference between the male and female violent victimization rates is estimated at 1.9 victimizations per 1,000 people and is not statistically significant (p = 0.1560).
14 \- AmericasBarometer Vignette
--------------------------------
1. Calculate the percentage of households with broadband internet and those with any internet at home, including from a phone or tablet in Latin America and the Caribbean. Hint: if there are countries with 0% internet usage, try filtering by something first.
Answer:
```
int_ests <-
ambarom_des %>%
filter(!is.na(Internet) | !is.na(BroadbandInternet)) %>%
group_by(Country) %>%
summarize(
p_broadband = survey_mean(BroadbandInternet, na.rm = TRUE) * 100,
p_internet = survey_mean(Internet, na.rm = TRUE) * 100
)
int_ests %>%
gt(rowname_col = "Country") %>%
fmt_number(decimals = 1) %>%
tab_spanner(
label = "Broadband at home",
columns = c(p_broadband, p_broadband_se)
) %>%
tab_spanner(
label = "Internet at home",
columns = c(p_internet, p_internet_se)
) %>%
cols_label(
p_broadband = "Percent",
p_internet = "Percent",
p_broadband_se = "S.E.",
p_internet_se = "S.E.",
)
```
| | Broadband at home | | Internet at home | |
| --- | --- | --- | --- | --- |
| | Percent | S.E. | Percent | S.E. |
| Argentina | 62\.3 | 1\.1 | 86\.2 | 0\.9 |
| Bolivia | 41\.4 | 1\.0 | 77\.2 | 1\.0 |
| Brazil | 68\.3 | 1\.2 | 88\.9 | 0\.9 |
| Chile | 63\.1 | 1\.1 | 93\.5 | 0\.5 |
| Colombia | 45\.7 | 1\.2 | 68\.7 | 1\.1 |
| Costa Rica | 49\.6 | 1\.1 | 84\.4 | 0\.8 |
| Dominican Republic | 37\.1 | 1\.0 | 73\.7 | 1\.0 |
| Ecuador | 59\.7 | 1\.1 | 79\.9 | 0\.9 |
| El Salvador | 30\.2 | 0\.9 | 63\.9 | 1\.0 |
| Guatemala | 33\.4 | 1\.0 | 61\.5 | 1\.1 |
| Guyana | 63\.7 | 1\.1 | 86\.8 | 0\.8 |
| Haiti | 11\.8 | 0\.8 | 58\.5 | 1\.2 |
| Honduras | 28\.2 | 1\.0 | 60\.7 | 1\.1 |
| Jamaica | 64\.2 | 1\.0 | 91\.5 | 0\.6 |
| Mexico | 44\.9 | 1\.1 | 70\.9 | 1\.0 |
| Nicaragua | 39\.1 | 1\.1 | 76\.3 | 1\.1 |
| Panama | 43\.4 | 1\.0 | 73\.1 | 1\.0 |
| Paraguay | 33\.3 | 1\.0 | 72\.9 | 1\.0 |
| Peru | 42\.4 | 1\.1 | 71\.1 | 1\.1 |
| Uruguay | 62\.7 | 1\.1 | 90\.6 | 0\.7 |
2. Create a faceted map showing both broadband internet and any internet usage.
Answer:
```
library(sf)
library(rnaturalearth)
library(ggpattern)
internet_sf <- country_shape_upd %>%
full_join(select(int_ests, p = p_internet, geounit = Country), by = "geounit") %>%
mutate(Type = "Internet")
broadband_sf <- country_shape_upd %>%
full_join(select(int_ests, p = p_broadband, geounit = Country), by = "geounit") %>%
mutate(Type = "Broadband")
b_int_sf <- internet_sf %>%
bind_rows(broadband_sf) %>%
filter(region_wb == "Latin America & Caribbean")
b_int_sf %>%
ggplot(aes(fill = p),
color = "darkgray"
) +
geom_sf() +
facet_wrap(~Type) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087E8B", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(b_int_sf, is.na(p)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE D.3: Percent of broadband internet and any internet usage, Central and South America
5 \- Descriptive analysis
-------------------------
1. How many females have a graduate degree? Hint: The variables `Gender` and `Education` will be useful.
```
# Option 1:
femgd_option1 <- anes_des %>%
filter(Gender == "Female", Education == "Graduate") %>%
survey_count(name = "n")
femgd_option1
```
```
## # A tibble: 1 × 2
## n n_se
## <dbl> <dbl>
## 1 15072196. 837872.
```
```
# Option 2:
femgd_option2 <- anes_des %>%
filter(Gender == "Female", Education == "Graduate") %>%
summarize(N = survey_total(), .groups = "drop")
femgd_option2
```
```
## # A tibble: 1 × 2
## N N_se
## <dbl> <dbl>
## 1 15072196. 837872.
```
Answer: 15,072,196
2. What percentage of people identify as “Strong Democrat”? Hint: The variable `PartyID` indicates someone’s party affiliation.
```
psd <- anes_des %>%
group_by(PartyID) %>%
summarize(p = survey_mean()) %>%
filter(PartyID == "Strong democrat")
psd
```
```
## # A tibble: 1 × 3
## PartyID p p_se
## <fct> <dbl> <dbl>
## 1 Strong democrat 0.219 0.00646
```
Answer: 21\.9%
3. What percentage of people who voted in the 2020 election identify as “Strong Republican”? Hint: The variable `VotedPres2020` indicates whether someone voted in 2020\.
```
psr <- anes_des %>%
filter(VotedPres2020 == "Yes") %>%
group_by(PartyID) %>%
summarize(p = survey_mean()) %>%
filter(PartyID == "Strong republican")
psr
```
```
## # A tibble: 1 × 3
## PartyID p p_se
## <fct> <dbl> <dbl>
## 1 Strong republican 0.228 0.00824
```
Answer: 22\.8%
4. What percentage of people voted in both the 2016 election and the 2020 election? Include the logit confidence interval. Hint: The variable `VotedPres2016` indicates whether someone voted in 2016\.
```
pvb <- anes_des %>%
filter(!is.na(VotedPres2016), !is.na(VotedPres2020)) %>%
group_by(interact(VotedPres2016, VotedPres2020)) %>%
summarize(p = survey_prop(var = "ci", method = "logit"), ) %>%
filter(VotedPres2016 == "Yes", VotedPres2020 == "Yes")
pvb
```
```
## # A tibble: 1 × 5
## VotedPres2016 VotedPres2020 p p_low p_upp
## <fct> <fct> <dbl> <dbl> <dbl>
## 1 Yes Yes 0.794 0.777 0.810
```
Answer: 79\.4 with confidence interval: (77\.7, 81\)
5. What is the design effect for the proportion of people who voted early? Hint: The variable `EarlyVote2020` indicates whether someone voted early in 2020\.
```
pdeff <- anes_des %>%
filter(!is.na(EarlyVote2020)) %>%
group_by(EarlyVote2020) %>%
summarize(p = survey_mean(deff = TRUE)) %>%
filter(EarlyVote2020 == "Yes")
pdeff
```
```
## # A tibble: 1 × 4
## EarlyVote2020 p p_se p_deff
## <fct> <dbl> <dbl> <dbl>
## 1 Yes 0.726 0.0247 1.50
```
Answer: 1\.5
6. What is the median temperature people set their thermostats to at night during the winter? Hint: The variable `WinterTempNight` indicates the temperature that people set their thermostat to in the winter at night.
```
med_wintertempnight <- recs_des %>%
summarize(wtn_med = survey_median(
x = WinterTempNight,
na.rm = TRUE
))
med_wintertempnight
```
```
## # A tibble: 1 × 2
## wtn_med wtn_med_se
## <dbl> <dbl>
## 1 68 0.250
```
Answer: 68
7. People sometimes set their temperature differently over different seasons and during the day. What median temperatures do people set their thermostat to in the summer and winter, both during the day and at night? Include confidence intervals. Hint: Use the variables `WinterTempDay`, `WinterTempNight`, `SummerTempDay`, and `SummerTempNight`.
```
# Option 1
med_temps <- recs_des %>%
summarize(
across(c(WinterTempDay, WinterTempNight, SummerTempDay, SummerTempNight), ~ survey_median(.x, na.rm = TRUE))
)
med_temps
```
```
## # A tibble: 1 × 8
## WinterTempDay WinterTempDay_se WinterTempNight WinterTempNight_se
## <dbl> <dbl> <dbl> <dbl>
## 1 70 0.250 68 0.250
## # ℹ 4 more variables: SummerTempDay <dbl>, SummerTempDay_se <dbl>,
## # SummerTempNight <dbl>, SummerTempNight_se <dbl>
```
```
# Alternatively, could use `survey_quantile()` as shown below for WinterTempNight:
quant_temps <- recs_des %>%
summarize(
across(c(WinterTempDay, WinterTempNight, SummerTempDay, SummerTempNight), ~ survey_quantile(.x, quantiles = 0.5, na.rm = TRUE))
)
quant_temps
```
```
## # A tibble: 1 × 8
## WinterTempDay_q50 WinterTempDay_q50_se WinterTempNight_q50
## <dbl> <dbl> <dbl>
## 1 70 0.250 68
## # ℹ 5 more variables: WinterTempNight_q50_se <dbl>,
## # SummerTempDay_q50 <dbl>, SummerTempDay_q50_se <dbl>,
## # SummerTempNight_q50 <dbl>, SummerTempNight_q50_se <dbl>
```
Answer:
\- Winter during the day: 70
\- Winter during the night: 68
\- Summer during the day: 72
\- Summer during the night: 72
8. What is the correlation between the temperature that people set their temperature at during the night and during the day in the summer?
```
corr_summer_temp <- recs_des %>%
summarize(summer_corr = survey_corr(SummerTempNight, SummerTempDay,
na.rm = TRUE
))
corr_summer_temp
```
```
## # A tibble: 1 × 2
## summer_corr summer_corr_se
## <dbl> <dbl>
## 1 0.806 0.00806
```
Answer: 0\.806
9. What is the 1st, 2nd, and 3rd quartile of the amount of money spent on energy by Building America (BA) climate zone? Hint: `TOTALDOL` indicates the total amount spent on all fuel, and `ClimateRegion_BA` indicates the BA climate zones.
```
quant_baenergyexp <- recs_des %>%
group_by(ClimateRegion_BA) %>%
summarize(dol_quant = survey_quantile(
TOTALDOL,
quantiles = c(0.25, 0.5, 0.75),
vartype = "se",
na.rm = TRUE
))
quant_baenergyexp
```
```
## # A tibble: 8 × 7
## ClimateRegion_BA dol_quant_q25 dol_quant_q50 dol_quant_q75
## <fct> <dbl> <dbl> <dbl>
## 1 Mixed-Dry 1091. 1541. 2139.
## 2 Mixed-Humid 1317. 1840. 2462.
## 3 Hot-Humid 1094. 1622. 2233.
## 4 Hot-Dry 926. 1513. 2223.
## 5 Very-Cold 1195. 1986. 2955.
## 6 Cold 1213. 1756. 2422.
## 7 Marine 938. 1380. 1987.
## 8 Subarctic 2404. 3535. 5219.
## # ℹ 3 more variables: dol_quant_q25_se <dbl>, dol_quant_q50_se <dbl>,
## # dol_quant_q75_se <dbl>
```
Answer:
| Quartile summary of energy expenditure by BA Climate Zone | | | |
| --- | --- | --- | --- |
| | Q1 | Q2 | Q3 |
| Mixed\-Dry | $1,091 | $1,541 | $2,139 |
| Mixed\-Humid | $1,317 | $1,840 | $2,462 |
| Hot\-Humid | $1,094 | $1,622 | $2,233 |
| Hot\-Dry | $926 | $1,513 | $2,223 |
| Very\-Cold | $1,195 | $1,986 | $2,955 |
| Cold | $1,213 | $1,756 | $2,422 |
| Marine | $938 | $1,380 | $1,987 |
| Subarctic | $2,404 | $3,535 | $5,219 |
6 \- Statistical testing
------------------------
1. Using the RECS data, do more than 50% of U.S. households use A/C (`ACUsed`)?
```
ttest_solution1 <- recs_des %>%
svyttest(
design = .,
formula = ((ACUsed == TRUE) - 0.5) ~ 0,
na.rm = TRUE,
alternative = "greater"
) %>%
tidy()
ttest_solution1
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 0.387 126. 1.73e-72 58 0.380 0.393 Design-based…
## # ℹ 1 more variable: alternative <chr>
```
Answer: 88\.7% of households use air conditioning which is significantly different from 50% (p\<0\.0001\) so there is strong evidence that more than 50% of households use air\-conditioning.
2. Using the RECS data, does the average temperature that U.S. households set their thermostats to differ between the day and night in the winter (`WinterTempDay` and `WinterTempNight`)?
```
ttest_solution2 <- recs_des %>%
svyttest(
design = .,
formula = WinterTempDay - WinterTempNight ~ 0,
na.rm = TRUE
) %>%
tidy()
ttest_solution2
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 1.67 45.9 2.82e-47 58 1.59 1.74 Design-based…
## # ℹ 1 more variable: alternative <chr>
```
Answer: The average temperature difference between night and day during the winter for thermostat settings is 1\.67 which is significantly different from 0 (p\<0\.0001\) so there is strong evidence that the temperature setting is different between night and daytime during the winter.
3. Using the ANES data, does the average age (`Age`) of those who voted for Joseph Biden in 2020 (`VotedPres2020_selection`) differ from those who voted for another candidate?
```
ttest_solution3 <- anes_des %>%
filter(!is.na(VotedPres2020_selection)) %>%
svyttest(
design = .,
formula = Age ~ VotedPres2020_selection == "Biden",
na.rm = TRUE
) %>%
tidy()
ttest_solution3
```
```
## # A tibble: 1 × 8
## estimate statistic p.value parameter conf.low conf.high method
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 -3.60 -5.97 0.000000244 50 -4.81 -2.39 Design-ba…
## # ℹ 1 more variable: alternative <chr>
```
On average, those who voted for Joseph Biden in 2020 were \-3\.6 years younger than voters for other candidates and this is significantly different (p \<0\.0001\).
4. If we wanted to determine if the political party affiliation differed for males and females, what test would we use?
1. Goodness\-of\-fit test (`svygofchisq()`)
2. Test of independence (`svychisq()`)
3. Test of homogeneity (`svychisq()`)
Answer: c. Test of homogeneity (`svychisq()`)
5. In the RECS data, is there a relationship between the type of housing unit (`HousingUnitType`) and the year the house was built (`YearMade`)?
```
chisq_solution2 <- recs_des %>%
svychisq(
formula = ~ HousingUnitType + YearMade,
design = .,
statistic = "Wald",
na.rm = TRUE
)
chisq_solution2 %>% tidy()
```
```
## Multiple parameters; naming those columns ndf, ddf
```
```
## # A tibble: 1 × 5
## ndf ddf statistic p.value method
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 32 59 67.9 5.54e-36 Design-based Wald test of association
```
Answer: There is strong evidence (p\<0\.0001\) that there is a relationship between type of housing unit and the year the house was built.
6. In the ANES data, is there a difference in the distribution of gender (`Gender`) across early voting status in 2020 (`EarlyVote2020`)?
```
chisq_solution3 <- anes_des %>%
svychisq(
formula = ~ Gender + EarlyVote2020,
design = .,
statistic = "F",
na.rm = TRUE
) %>%
tidy()
```
```
## Multiple parameters; naming those columns ndf, ddf
```
```
chisq_solution3
```
```
## # A tibble: 1 × 5
## ndf ddf statistic p.value method
## <dbl> <dbl> <dbl> <dbl> <chr>
## 1 1 51 4.53 0.0381 Pearson's X^2: Rao & Scott adjustment
```
Answer: There is evidence of a difference in the distribution of gender across early voting status in 2020 (p = 0.0381).
7 \- Modeling
-------------
1. The type of housing unit may have an impact on energy expenses. Is there any relationship between housing unit type (`HousingUnitType`) and total energy expenditure (`TOTALDOL`)? First, find the average energy expenditure by housing unit type as a descriptive analysis and then do the test. The reference level in the comparison should be the housing unit type that is most common.
```
expense_by_hut <- recs_des %>%
group_by(HousingUnitType) %>%
summarize(
Expense = survey_mean(TOTALDOL, na.rm = TRUE),
HUs = survey_total()
) %>%
arrange(desc(HUs))
expense_by_hut
```
```
## # A tibble: 5 × 5
## HousingUnitType Expense Expense_se HUs HUs_se
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Single-family detached 2205. 9.36 77067692. 0.00000277
## 2 Apartment: 5 or more units 1108. 13.7 22835862. 0.000000226
## 3 Apartment: 2-4 Units 1407. 24.2 9341795. 0.119
## 4 Single-family attached 1653. 22.3 7451177. 0.114
## 5 Mobile home 1773. 26.2 6832499. 0.0000000927
```
```
exp_unit_out <- recs_des %>%
mutate(HousingUnitType = fct_infreq(HousingUnitType, NWEIGHT)) %>%
svyglm(
design = .,
formula = TOTALDOL ~ HousingUnitType,
na.action = na.omit
)
tidy(exp_unit_out)
```
```
## # A tibble: 5 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 2205. 9.36 236. 2.53e-84
## 2 HousingUnitTypeApartment: 5 or … -1097. 16.5 -66.3 3.52e-54
## 3 HousingUnitTypeApartment: 2-4 U… -798. 28.0 -28.5 1.37e-34
## 4 HousingUnitTypeSingle-family at… -551. 25.0 -22.1 5.28e-29
## 5 HousingUnitTypeMobile home -431. 27.4 -15.7 5.36e-22
```
Answer: The reference level should be Single-family detached, since it is the most common housing unit type. All p-values are very small, indicating a significant relationship between housing unit type and total energy expenditure.
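As a hedged aside (not in the original answer), the coefficients line up with the descriptive means above: each one is the difference from the Single-family detached average, so the Apartment (5 or more units) mean can be recovered as roughly 2205 - 1097 ≈ 1108.
```
# Sketch: recover a group mean from the model coefficients by position
# (coefficient names are abbreviated in the printed output above).
coef(exp_unit_out)[1]                          # ~2205, single-family detached (reference)
coef(exp_unit_out)[1] + coef(exp_unit_out)[2]  # ~1108, apartments with 5 or more units
```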
2. Does temperature play a role in electricity expenditure? Cooling degree days (CDD65) are a measure of how hot a place is. For a given day, CDD65 is the number of degrees Fahrenheit that the day's average temperature is above 65°F (18.3°C): a day averaging 65°F or below has CDD65 = 0, while a day averaging 85°F (29.4°C) has CDD65 = 20 because it is 20 degrees Fahrenheit warmer ([U.S. Energy Information Administration 2023d](#ref-eia-cdd)). These daily values are summed over the year to indicate how hot a location is throughout the year. Similarly, heating degree days (HDD65) measure how much colder than 65°F a location is over the year. Can energy expenditure be predicted using these temperature indicators along with square footage? Is there a significant relationship? Include main effects and two-way interactions.
```
temps_sqft_exp <- recs_des %>%
svyglm(
design = .,
formula = DOLLAREL ~ (TOTSQFT_EN + CDD65 + HDD65)^2,
na.action = na.omit
)
tidy(temps_sqft_exp) %>%
mutate(p.value = pretty_p_value(p.value) %>% str_pad(7))
```
```
## # A tibble: 7 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <chr>
## 1 (Intercept) 741. 70.5 10.5 "<0.0001"
## 2 TOTSQFT_EN 0.272 0.0471 5.77 "<0.0001"
## 3 CDD65 0.0293 0.0227 1.29 " 0.2024"
## 4 HDD65 -0.00111 0.0104 -0.107 " 0.9149"
## 5 TOTSQFT_EN:CDD65 0.0000459 0.0000154 2.97 " 0.0044"
## 6 TOTSQFT_EN:HDD65 -0.00000840 0.00000633 -1.33 " 0.1902"
## 7 CDD65:HDD65 0.00000533 0.00000355 1.50 " 0.1390"
```
Answer: There is a significant interaction between square footage and cooling degree days in the model, and square footage is a significant predictor of electricity expenditure.
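As an illustrative aside (not part of the solution), the degree-day measures described in the question can be sketched from hypothetical daily average temperatures:
```
# Hypothetical daily average temperatures in degrees Fahrenheit.
daily_avg_temp <- c(60, 65, 70, 85, 50)
cdd65 <- sum(pmax(daily_avg_temp - 65, 0)) # 0 + 0 + 5 + 20 + 0 = 25
hdd65 <- sum(pmax(65 - daily_avg_temp, 0)) # 5 + 0 + 0 + 0 + 15 = 20
```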
3. Continuing with our results from Exercise 2, create a plot between the actual and predicted expenditures and a residual plot for the predicted expenditures.
Answer:
```
temps_sqft_exp_fit <- temps_sqft_exp %>%
augment() %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
# extract the variance of the fitted value
.fitted = as.numeric(.fitted)
)
```
```
temps_sqft_exp_fit %>%
ggplot(aes(x = DOLLAREL, y = .fitted)) +
geom_point() +
geom_abline(
intercept = 0,
slope = 1,
color = "red"
) +
xlab("Actual expenditures") +
ylab("Predicted expenditures") +
theme_minimal()
```
FIGURE D.1: Actual and predicted electricity expenditures
```
temps_sqft_exp_fit %>%
ggplot(aes(x = .fitted, y = .resid)) +
geom_point() +
geom_hline(yintercept = 0, color = "red") +
xlab("Predicted expenditure") +
ylab("Residual value of expenditure") +
theme_minimal()
```
FIGURE D.2: Residual plot of electric cost model with covariates TOTSQFT\_EN, CDD65, and HDD65
4. Early voting expanded in 2020 ([Sprunt 2020](#ref-npr-voting-trend)). Build a logistic model predicting early voting in 2020 (`EarlyVote2020`) using age (`Age`), education (`Education`), and party identification (`PartyID`). Include two\-way interactions.
Answer:
```
earlyvote_mod <- anes_des %>%
filter(!is.na(EarlyVote2020)) %>%
svyglm(
design = .,
formula = EarlyVote2020 ~ (Age + Education + PartyID)^2,
family = quasibinomial
)
tidy(earlyvote_mod) %>% print(n = 50)
```
```
## # A tibble: 46 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 3.28e-1 3.86 0.0848 0.940
## 2 Age -2.20e-2 0.0579 -0.379 0.741
## 3 EducationHigh school -2.56e+0 3.89 -0.658 0.578
## 4 EducationPost HS -3.27e+0 3.97 -0.823 0.497
## 5 EducationBachelor's -3.29e+0 3.91 -0.842 0.489
## 6 EducationGraduate -1.36e+0 3.91 -0.349 0.761
## 7 PartyIDNot very strong democrat 2.00e+0 3.30 0.605 0.607
## 8 PartyIDIndependent-democrat 3.38e+0 2.60 1.30 0.323
## 9 PartyIDIndependent 5.22e+0 2.25 2.32 0.146
## 10 PartyIDIndependent-republican -1.95e+1 2.42 -8.09 0.0149
## 11 PartyIDNot very strong republic… -1.33e+1 3.24 -4.10 0.0546
## 12 PartyIDStrong republican 3.13e+0 2.18 1.44 0.287
## 13 Age:EducationHigh school 4.72e-2 0.0592 0.796 0.509
## 14 Age:EducationPost HS 5.25e-2 0.0588 0.892 0.467
## 15 Age:EducationBachelor's 4.76e-2 0.0600 0.793 0.511
## 16 Age:EducationGraduate 8.65e-3 0.0578 0.150 0.895
## 17 Age:PartyIDNot very strong demo… -2.28e-2 0.0497 -0.459 0.691
## 18 Age:PartyIDIndependent-democrat -7.03e-2 0.0285 -2.46 0.133
## 19 Age:PartyIDIndependent -8.00e-2 0.0302 -2.65 0.118
## 20 Age:PartyIDIndependent-republic… 6.72e-2 0.0378 1.78 0.217
## 21 Age:PartyIDNot very strong repu… -3.07e-2 0.0420 -0.732 0.540
## 22 Age:PartyIDStrong republican -3.84e-2 0.0180 -2.14 0.166
## 23 EducationHigh school:PartyIDNot… -1.24e+0 2.22 -0.557 0.633
## 24 EducationPost HS:PartyIDNot ver… -8.95e-1 2.16 -0.413 0.719
## 25 EducationBachelor's:PartyIDNot … -1.21e+0 2.29 -0.528 0.650
## 26 EducationGraduate:PartyIDNot ve… -1.90e+0 2.25 -0.844 0.487
## 27 EducationHigh school:PartyIDInd… 7.84e-1 2.50 0.314 0.783
## 28 EducationPost HS:PartyIDIndepen… 4.04e-1 2.31 0.175 0.877
## 29 EducationBachelor's:PartyIDInde… 5.00e-1 2.60 0.193 0.865
## 30 EducationGraduate:PartyIDIndepe… -1.48e+1 2.47 -5.99 0.0268
## 31 EducationHigh school:PartyIDInd… -6.32e-1 1.72 -0.368 0.748
## 32 EducationPost HS:PartyIDIndepen… -9.27e-2 1.63 -0.0568 0.960
## 33 EducationBachelor's:PartyIDInde… -2.62e-1 2.13 -0.123 0.913
## 34 EducationGraduate:PartyIDIndepe… -1.42e+1 1.75 -8.12 0.0148
## 35 EducationHigh school:PartyIDInd… 1.55e+1 2.56 6.05 0.0262
## 36 EducationPost HS:PartyIDIndepen… 1.48e+1 2.77 5.34 0.0333
## 37 EducationBachelor's:PartyIDInde… 1.77e+1 2.32 7.64 0.0167
## 38 EducationGraduate:PartyIDIndepe… 1.65e+1 2.33 7.10 0.0193
## 39 EducationHigh school:PartyIDNot… 1.59e+1 2.02 7.88 0.0157
## 40 EducationPost HS:PartyIDNot ver… 1.62e+1 1.69 9.54 0.0108
## 41 EducationBachelor's:PartyIDNot … 1.58e+1 1.93 8.18 0.0146
## 42 EducationGraduate:PartyIDNot ve… 1.54e+1 1.72 8.95 0.0123
## 43 EducationHigh school:PartyIDStr… -2.06e+0 1.88 -1.10 0.387
## 44 EducationPost HS:PartyIDStrong … 9.17e-2 2.01 0.0456 0.968
## 45 EducationBachelor's:PartyIDStro… 6.87e-2 2.06 0.0333 0.976
## 46 EducationGraduate:PartyIDStrong… -8.53e-1 1.81 -0.471 0.684
```
5. Continuing from Exercise 4, predict the probability of early voting for two people. Both are 28 years old and have a graduate degree; however, one person is a strong Democrat, and the other is a strong Republican.
```
add_vote_dat <- anes_2020 %>%
select(EarlyVote2020, Age, Education, PartyID) %>%
rbind(tibble(
EarlyVote2020 = NA,
Age = 28,
Education = "Graduate",
PartyID = c("Strong democrat", "Strong republican")
)) %>%
tail(2)
log_ex_2_out <- earlyvote_mod %>%
augment(newdata = add_vote_dat, type.predict = "response") %>%
mutate(
.se.fit = sqrt(attr(.fitted, "var")),
# extract the variance of the fitted value
.fitted = as.numeric(.fitted)
)
log_ex_2_out
```
```
## # A tibble: 2 × 6
## EarlyVote2020 Age Education PartyID .fitted .se.fit
## <fct> <dbl> <fct> <fct> <dbl> <dbl>
## 1 <NA> 28 Graduate Strong democrat 0.197 0.150
## 2 <NA> 28 Graduate Strong republican 0.450 0.244
```
Answer: We predict that the 28-year-old with a graduate degree who identifies as a strong Democrat will vote early 19.7% of the time, while an otherwise similar person who identifies as a strong Republican will vote early 45.0% of the time.
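If rough uncertainty statements are wanted for these predictions (an addition, not in the original solution), Wald-type intervals can be formed from the standard errors, keeping in mind that on the response scale they are crude and can fall outside [0, 1]:
```
# Approximate 95% intervals for the predicted probabilities.
log_ex_2_out %>%
  mutate(
    lower = .fitted - 1.96 * .se.fit,
    upper = .fitted + 1.96 * .se.fit
  )
```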
10 \- Specifying sample designs and replicate weights in {srvyr}
----------------------------------------------------------------
1. The National Health Interview Survey (NHIS) is an annual household survey conducted by the National Center for Health Statistics (NCHS). The NHIS covers a wide variety of health topics for adults, including health status and conditions, functioning and disability, health care access and health service utilization, health-related behaviors, health promotion, mental health, barriers to receiving care, and community engagement. Like many national in-person surveys, the sampling design is a stratified clustered design, with details included in the Survey Description ([National Center for Health Statistics 2023](#ref-nhis-svy-des)). The Survey Description provides information on setting up syntax in SUDAAN, Stata, SPSS, SAS, and R ({survey} package implementation). We have imported the data into an object called `nhis_adult_data`. How would we specify the design using either `as_survey_design()` or `as_survey_rep()`?
Answer:
```
nhis_adult_des <- nhis_adult_data %>%
as_survey_design(
ids = PPSU,
strata = PSTRAT,
nest = TRUE,
weights = WTFA_A
)
```
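For comparison, here is a sketch of the equivalent specification with the base {survey} package (which {srvyr} wraps); the {srvyr} version above is what the rest of the book uses.
```
# Equivalent design specification using survey::svydesign().
library(survey)
nhis_adult_des_svy <- svydesign(
  ids = ~PPSU,
  strata = ~PSTRAT,
  nest = TRUE,
  weights = ~WTFA_A,
  data = nhis_adult_data
)
```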
2. The General Social Survey (GSS) is a survey that has been administered since 1972 on social, behavioral, and attitudinal topics. The 2016-2020 GSS Panel codebook provides examples of setting up syntax in SAS and Stata but not R ([Davern et al. 2021](#ref-gss-codebook)). We have imported the data into an object called `gss_data`. How would we specify the design in R using either `as_survey_design()` or `as_survey_rep()`?
Answer:
```
gss_des <- gss_data %>%
as_survey_design(
ids = VPSU_2,
strata = VSTRAT_2,
weights = WTSSNR_2
)
```
13 \- National Crime Victimization Survey Vignette
--------------------------------------------------
1. What proportion of completed motor vehicle thefts are not reported to the police? Hint: Use the codebook to look at the definition of Type of Crime (V4529\).
```
ans1 <- inc_des %>%
filter(str_detect(V4529, "40|41")) %>%
summarize(Pct = survey_mean(!ReportPolice, na.rm = TRUE) * 100)
```
Answer: It is estimated that 23.1% of completed motor vehicle thefts are not reported to the police.
2. How many violent crimes occur in each region?
Answer:
```
inc_des %>%
filter(Violent) %>%
survey_count(Region) %>%
select(-n_se) %>%
gt(rowname_col = "Region") %>%
fmt_integer() %>%
cols_label(
n = "Violent victimizations",
) %>%
tab_header("Estimated number of violent crimes by region")
```
Estimated number of violent crimes by region:

| Region | Violent victimizations |
| --- | --- |
| Northeast | 698,406 |
| Midwest | 1,144,407 |
| South | 1,394,214 |
| West | 1,361,278 |
3. What is the property victimization rate among each income level?
Answer:
```
hh_des %>%
filter(!is.na(Income)) %>%
group_by(Income) %>%
summarize(Property_Rate = survey_mean(Property * ADJINC_WT * 1000,
na.rm = TRUE
)) %>%
gt(rowname_col = "Income") %>%
cols_label(
Property_Rate = "Rate",
Property_Rate_se = "Standard Error"
) %>%
fmt_number(decimals = 1) %>%
tab_header("Estimated property victimization rate by income level")
```
Estimated property victimization rate by income level (victimizations per 1,000 households):

| Income | Rate | Standard Error |
| --- | --- | --- |
| Less than $25,000 | 110.6 | 5.0 |
| $25,000–49,999 | 89.5 | 3.4 |
| $50,000–99,999 | 87.8 | 3.3 |
| $100,000–199,999 | 76.5 | 3.5 |
| $200,000 or more | 91.8 | 5.7 |
4. What is the difference in the violent victimization rate between males and females? Is it statistically different?
```
vr_gender <- pers_des %>%
group_by(Sex) %>%
summarize(
Violent_rate = survey_mean(Violent * ADJINC_WT * 1000, na.rm = TRUE)
)
vr_gender_test <- pers_des %>%
mutate(
Violent_Adj = Violent * ADJINC_WT * 1000
) %>%
svyttest(
formula = Violent_Adj ~ Sex,
design = .,
na.rm = TRUE
) %>%
broom::tidy()
```
```
## Warning in summary.glm(g): observations with zero weight not used for
## calculating dispersion
```
```
## Warning in summary.glm(glm.object): observations with zero weight not
## used for calculating dispersion
```
Answer: The difference between the male and female violent victimization rates is estimated as 1.9 victimizations per 1,000 people and is not significantly different from zero (p = 0.1560).
14 \- AmericasBarometer Vignette
--------------------------------
1. Calculate the percentage of households with broadband internet and those with any internet at home (including from a phone or tablet) in Latin America and the Caribbean. Hint: if there are countries with 0% internet usage, try filtering by something first.
Answer:
```
int_ests <-
ambarom_des %>%
filter(!is.na(Internet) | !is.na(BroadbandInternet)) %>%
group_by(Country) %>%
summarize(
p_broadband = survey_mean(BroadbandInternet, na.rm = TRUE) * 100,
p_internet = survey_mean(Internet, na.rm = TRUE) * 100
)
int_ests %>%
gt(rowname_col = "Country") %>%
fmt_number(decimals = 1) %>%
tab_spanner(
label = "Broadband at home",
columns = c(p_broadband, p_broadband_se)
) %>%
tab_spanner(
label = "Internet at home",
columns = c(p_internet, p_internet_se)
) %>%
cols_label(
p_broadband = "Percent",
p_internet = "Percent",
p_broadband_se = "S.E.",
p_internet_se = "S.E.",
)
```
| Country | Broadband at home (%) | S.E. | Internet at home (%) | S.E. |
| --- | --- | --- | --- | --- |
| Argentina | 62\.3 | 1\.1 | 86\.2 | 0\.9 |
| Bolivia | 41\.4 | 1\.0 | 77\.2 | 1\.0 |
| Brazil | 68\.3 | 1\.2 | 88\.9 | 0\.9 |
| Chile | 63\.1 | 1\.1 | 93\.5 | 0\.5 |
| Colombia | 45\.7 | 1\.2 | 68\.7 | 1\.1 |
| Costa Rica | 49\.6 | 1\.1 | 84\.4 | 0\.8 |
| Dominican Republic | 37\.1 | 1\.0 | 73\.7 | 1\.0 |
| Ecuador | 59\.7 | 1\.1 | 79\.9 | 0\.9 |
| El Salvador | 30\.2 | 0\.9 | 63\.9 | 1\.0 |
| Guatemala | 33\.4 | 1\.0 | 61\.5 | 1\.1 |
| Guyana | 63\.7 | 1\.1 | 86\.8 | 0\.8 |
| Haiti | 11\.8 | 0\.8 | 58\.5 | 1\.2 |
| Honduras | 28\.2 | 1\.0 | 60\.7 | 1\.1 |
| Jamaica | 64\.2 | 1\.0 | 91\.5 | 0\.6 |
| Mexico | 44\.9 | 1\.1 | 70\.9 | 1\.0 |
| Nicaragua | 39\.1 | 1\.1 | 76\.3 | 1\.1 |
| Panama | 43\.4 | 1\.0 | 73\.1 | 1\.0 |
| Paraguay | 33\.3 | 1\.0 | 72\.9 | 1\.0 |
| Peru | 42\.4 | 1\.1 | 71\.1 | 1\.1 |
| Uruguay | 62\.7 | 1\.1 | 90\.6 | 0\.7 |
2. Create a faceted map showing both broadband internet and any internet usage.
Answer:
```
library(sf)
library(rnaturalearth)
library(ggpattern)
internet_sf <- country_shape_upd %>%
full_join(select(int_ests, p = p_internet, geounit = Country), by = "geounit") %>%
mutate(Type = "Internet")
broadband_sf <- country_shape_upd %>%
full_join(select(int_ests, p = p_broadband, geounit = Country), by = "geounit") %>%
mutate(Type = "Broadband")
b_int_sf <- internet_sf %>%
bind_rows(broadband_sf) %>%
filter(region_wb == "Latin America & Caribbean")
b_int_sf %>%
ggplot(aes(fill = p),
color = "darkgray"
) +
geom_sf() +
facet_wrap(~Type) +
scale_fill_gradientn(
guide = "colorbar",
name = "Percent",
labels = scales::comma,
colors = c("#BFD7EA", "#087E8B", "#0B3954"),
na.value = NA
) +
geom_sf_pattern(
data = filter(b_int_sf, is.na(p)),
pattern = "crosshatch",
pattern_fill = "lightgray",
pattern_color = "lightgray",
fill = NA,
color = "darkgray"
) +
theme_minimal()
```
FIGURE D.3: Percent of broadband internet and any internet usage, Central and South America
| Social Science |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/intro.html |
Introduction
============
Overview
--------
Dealing with text is typically not even considered in the applied statistical training of most disciplines. This is in direct contrast with how often it has to be dealt with prior to more common analysis, or how interesting it might be to have text be the focus of analysis. This document and corresponding workshop will aim to provide a sense of the things one can do with text, and the sorts of analyses that might be useful.
### Goals
The goal of this workshop is primarily to provide a sense of common tasks related to dealing with text, whether as part of the data or as the focus of analysis, and to provide some relatively easy-to-use tools. It must be stressed that this is only a starting point, a hopefully fun foray into the world of text, not a definitive statement of how you *should* analyze text. In fact, some of the methods demonstrated would likely be too rudimentary for most goals.
Additionally, we’ll have exercises to practice, but those comfortable enough to do so should follow along with the in\-text examples. Note that there is more content here than will be covered in a single workshop.
### Prerequisites
The document is for the most part very applied in nature, and doesn’t assume much beyond familiarity with the R statistical computing environment. For programming purposes, it would be useful if you are familiar with the [tidyverse](https://www.tidyverse.org/), or at least dplyr specifically; otherwise some of the code may be difficult to understand (and such familiarity is required if you want to run it).
Here are some of the packages used in this document:
* Throughout
+ tidyverse
+ tidytext
* Strings
+ stringr
+ lubridate
* Sentiment
+ gutenbergr
+ janeaustenr
* POS
+ openNLP
+ NLP
+ tm
* Topic Models
+ topicmodels
+ quanteda
* Word Embedding
+ text2vec
Note the following color coding used in this document:
* emphasis
* package
* function
* object/class
* link
Initial Steps
-------------
0. Download the zip file [here](https://github.com/m-clark/text-analysis-with-R/raw/master/workshop_project.zip). It contains an RStudio project with several data files that you can use as you attempt to replicate the analyses. Be mindful of where you put it.
1. Unzip it. Be mindful of where you put the resulting folder.
2. Open RStudio.
3. File/Open Project and navigate to and click on the blue icon in the folder you just created.
4. Install any of the above packages you want.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/string-theory.html |
String Theory
=============
Basic data types
----------------
R has several core data structures:
* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames
Vectors form the basis of R data structures. There are two main types: atomic vectors and lists. All elements of an atomic vector are the same type.
Examples include:
* character
* numeric (double)
* integer
* logical
### Character strings
When dealing with text, objects of class character are what you’d typically be dealing with.
```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
x
```
Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, you could have a column where each entry is one of the works of Shakespeare.
### Factors
Although not exactly precise, one can think of factors as integers with labels. So, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the levels.
```
x = factor(rep(letters[1:3], e=10))
attributes(x)
```
```
$levels
[1] "a" "b" "c"
$class
[1] "factor"
```
While the underlying representation is numeric, it is important to remember that factors are *categorical*. They can’t be used as numbers would be, as the following demonstrates.
```
as.numeric(x)
```
```
[1] 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
```
```
sum(x)
```
```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors
```
Any numbers could be used; what we’re interested in are the labels, so a ‘sum’ doesn’t make any sense. All of the following would produce the same factor.
```
factor(c(1, 2, 3), labels=c('a', 'b', 'c'))
factor(c(3.2, 10, 500000), labels=c('a', 'b', 'c'))
factor(c(.49, 1, 5), labels=c('a', 'b', 'c'))
```
Because of the integer-plus-metadata representation, factors are actually smaller than character strings, often notably so.
```
x = sample(state.name, 10000, replace=T)
format(object.size(x), units='Kb')
```
```
[1] "80.8 Kb"
```
```
format(object.size(factor(x)), units='Kb')
```
```
[1] "42.4 Kb"
```
```
format(object.size(as.integer(factor(x))), units='Kb')
```
```
[1] "39.1 Kb"
```
However, if memory is really a concern, better hardware will probably help more than switching to factors.
### Analysis
It is important to know that raw text cannot be analyzed quantitatively. There is no magic that takes a categorical variable with text labels and estimates correlations among words and other words or numeric data. *Everything* that can be analyzed must have some numeric representation first, and this is where factors come in. For example, here is a data frame with two categorical predictors (`factor*`), a numeric predictor (`x`), and a numeric target (`y`). What follows is what it looks like if you wanted to run a regression model in that setting.
```
df =
crossing(factor_1 = c('A', 'B'),
factor_2 = c('Q', 'X', 'J')) %>%
mutate(x=rnorm(6),
y=rnorm(6))
df
```
```
# A tibble: 6 x 4
factor_1 factor_2 x y
<chr> <chr> <dbl> <dbl>
1 A J 0.797 -0.190
2 A Q -1.000 -0.496
3 A X 1.05 0.487
4 B J -0.329 -0.101
5 B Q 0.905 -0.809
6 B X 1.18 -1.92
```
```
## model.matrix(lm(y ~ x + factor_1 + factor_2, data=df))
```
| (Intercept) | x | factor\_1B | factor\_2Q | factor\_2X |
| --- | --- | --- | --- | --- |
| 1 | 0\.7968603 | 0 | 0 | 0 |
| 1 | \-0\.9999264 | 0 | 1 | 0 |
| 1 | 1\.0522363 | 0 | 0 | 1 |
| 1 | \-0\.3291774 | 1 | 0 | 0 |
| 1 | 0\.9049071 | 1 | 1 | 0 |
| 1 | 1\.1754300 | 1 | 0 | 1 |
The model.matrix function exposes the underlying matrix that is actually used in the regression analysis. You’d get a coefficient for each column of that matrix. As such, even the intercept must be represented in some fashion. For categorical data, the default coding scheme is dummy coding. A reference category is arbitrarily chosen (it doesn’t matter which, and you can always change it), while the other categories are represented by indicator variables, where a 1 represents the corresponding label and everything else is zero. For details on this coding scheme or others, consult any basic statistical modeling book.
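As a sketch of changing that reference category (not from the original text), base R’s relevel can be applied to the factor before fitting:
```
# Make 'B' the reference level of factor_1, then inspect the new model matrix.
df_releveled = df %>%
  mutate(factor_1 = relevel(factor(factor_1), ref = 'B'))
model.matrix(lm(y ~ x + factor_1 + factor_2, data = df_releveled))
# The dummy column is now factor_1A: 1 for 'A' rows, 0 for the reference 'B'.
```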
In addition, you’ll note that in all text\-specific analysis, the underlying information is numeric. For example, with topic models, the base data structure is a document\-term matrix of counts.
### Characters vs. Factors
The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods will require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages.
For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days.
For more on this stuff see the following:
* [http://adv\-r.had.co.nz/Data\-structures.html](http://adv-r.had.co.nz/Data-structures.html)
* <http://forcats.tidyverse.org/>
* <http://r4ds.had.co.nz/factors.html>
* [https://simplystatistics.org/2015/07/24/stringsasfactors\-an\-unauthorized\-biography/](https://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/)
* [http://notstatschat.tumblr.com/post/124987394001/stringsasfactors\-sigh](http://notstatschat.tumblr.com/post/124987394001/stringsasfactors-sigh)
Basic Text Functionality
------------------------
### Base R
A lot of folks new to R are not aware of just how much basic text processing R comes with out of the box. Here are examples of note.
* paste: glue text/numeric values together
* substr: extract or replace substrings in a character vector
* grep family: use regular expressions to deal with patterns of text
* strsplit: split strings
* nchar: how many characters in a string
* as.numeric: convert a string to numeric if it can be
* strtoi: convert a string to integer if it can be (faster than as.integer)
* adist: string distances
I probably use paste/paste0 more than most things when dealing with text, as string concatenation comes up so often. The following provides some demonstration.
```
paste(c('a', 'b', 'cd'), collapse='|')
```
```
[1] "a|b|cd"
```
```
paste(c('a', 'b', 'cd'), collapse='')
```
```
[1] "abcd"
```
```
paste0('a', 'b', 'cd') # shortcut to collapse=''
```
```
[1] "abcd"
```
```
paste0('x', 1:3)
```
```
[1] "x1" "x2" "x3"
```
Beyond that, use of regular expression and functionality included in the grep family is a major way to save a lot of time during data processing. I leave that to its own section later.
### Useful packages
A couple packages will probably take care of the vast majority of your standard text processing needs. Note that even if they aren’t adding anything to the functionality of the base R functions, they typically will have been optimized in some fashion, particularly with regard to speed.
* stringr/stringi: More or less the same stuff you’ll find with substr, grep etc. except easier to use and/or faster. They also add useful functionality not in base R (e.g. str\_to\_title). The stringr package is mostly a wrapper for the stringi functions, with some additional functions.
* tidyr: has functions such as unite, separate, replace\_na that can often come in handy when working with data frames.
* glue: a newer package that can be seen as a fancier paste. Most likely it will be useful when creating functions or shiny apps in which variable text output is desired.
One issue I have with these packages and base R is that they often return a list object when they should simplify to the vector format they were initially fed. This sometimes requires an additional step or two of further processing that shouldn’t be necessary, so be prepared for it[1](#fn1).
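A small sketch of the sort of convenience these packages add, using made-up names (the columns here are hypothetical):
```
library(tidyverse)
library(glue)

d = tibble(first = c('jane', 'john'), last = c('doe', 'smith'))

d %>%
  mutate(first = str_to_title(first),          # stringr
         last  = str_to_title(last)) %>%
  unite(full_name, first, last, sep = ' ') %>%  # tidyr
  mutate(greeting = glue('Hello, {full_name}!'))  # glue interpolation
```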
### Other
In this section, I’ll add some things that come to mind that might come into play when you’re dealing with text.
#### Dates
Dates are not character strings. Though they may start that way, if you actually want to treat them as dates you’ll need to convert the string to the appropriate date class. The lubridate package makes dealing with dates much easier. It comes with conversion, extraction and other functionality that will be sure to save you some time.
```
library(lubridate)
today()
```
```
[1] "2018-03-06"
```
```
today() + 1
```
```
[1] "2018-03-07"
```
```
today() + dyears(1)
```
```
[1] "2019-03-06"
```
```
leap_year(2016)
```
```
[1] TRUE
```
```
span = interval(ymd("2017-07-01"), ymd("2017-07-04"))
span
```
```
[1] 2017-07-01 UTC--2017-07-04 UTC
```
```
as.duration(span)
```
```
[1] "259200s (~3 days)"
```
```
span %/% minutes(1)
```
```
[1] 4320
```
This package makes dates so much easier, you should always use it when dealing with them.
#### Categorical Time
In regression modeling with few time points, one often has to decide on whether to treat the year as categorical (factor) or numeric (continuous). This greatly depends on how you want to tell your data story or other practical concerns. For example, if you have five years in your data, treating year as categorical means you are interested in accounting for unspecified things that go on in a given year. If you treat it as numeric, you are more interested in trends. Either is fine.
#### Web
A major resource for text is of course the web. Packages like rvest, httr, xml2, and many other packages specific to website APIs are available to help you here. See the [R task view for web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) as a starting point.
##### Encoding
Encoding can be a sizable PITA sometimes, and will often come up when dealing with webscraping and other languages. The rvest and stringr packages may be able to get you past some issues at least. See their respective functions repair\_encoding and str\_conv as starting points on this issue.
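A tiny sketch of str\_conv, essentially the pattern from its help file: a single byte whose meaning depends on the declared encoding.
```
library(stringr)
x = rawToChar(as.raw(177))   # one raw byte; meaningless until we pick an encoding
str_conv(x, 'ISO-8859-2')    # "ą" under Latin-2 (Polish a with ogonek)
str_conv(x, 'ISO-8859-1')    # "±" under Latin-1 (plus-minus sign)
```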
### Summary of basic text functionality
Being familiar with commonly used string functionality in base R and packages like stringr can save a ridiculous amount of time in your data processing. The more familiar you are with them the easier time you’ll have with text.
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to merely detect, extract, or replace the matching string. There are actually many different flavors of regex for different programming languages, which are all flavors that originate with the Perl approach, or can enable the Perl approach to be used. However, knowing one means you pretty much know the others with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern='^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically, it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with preceding
### Typical Uses
None of it makes sense, so don’t attempt to do so. Just try to remember a couple key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower\-case letter or a number)
* **?** : preceding item is optional, and will be matched at most once. Typically used for ‘look ahead’ and ‘look behind’
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in R you need \\\\, i.e. double slashes, for escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality, with perhaps slightly faster processing (though that’s due to the underlying stringi package).
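To make some of the above concrete, here is a sketch using the stringr equivalents of the grep family on a few made-up strings:
```
library(stringr)
x = c('r2d2', 'shiny1', 'version 1.0!')

str_detect(x, '[0-9]')                 # TRUE TRUE TRUE: each contains a digit
str_extract(x, '^[a-z]+')              # "r" "shiny" "version": leading lower-case run
str_replace_all(x, '[[:punct:]]', '')  # strip punctuation: "r2d2" "shiny1" "version 10"
str_detect(x, '\\.')                   # FALSE FALSE TRUE: a literal period, double-escaped
```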
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
### dplyr helper functions
The dplyr package comes with some poorly documented[2](#fn2) but quite useful helper functions that essentially serve as human\-readable regex, which is a very good thing. These functions allow you to select variables[3](#fn3) based on their names. They are usually just calling base R functions in the end.
* starts\_with: starts with a prefix (same as regex ‘^blah’)
* ends\_with: ends with a suffix (same as regex ‘blah$’)
* contains: contains a literal string (same as regex ‘blah’)
* matches: matches a regular expression (put your regex here)
* num\_range: a numerical range like x01, x02, x03\. (same as regex ‘x\[0\-9]\[0\-9]’)
* one\_of: variables in character vector. (if you need to quote variable names, e.g. within a function)
* everything: all variables. (a good way to spend time doing something only to accomplish what you would have by doing nothing, or a way to reorder variables)
For more on using stringr and regular expressions in R, you may find [this cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/strings.pdf) useful.
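To give a flavor of these helpers, here is a brief sketch using the mtcars data that comes with R:

```
library(dplyr)

mtcars %>% select(starts_with('d'))      # disp, drat
mtcars %>% select(contains('ar'))        # gear, carb
mtcars %>% select(matches('^c.*b$'))     # carb
mtcars %>% select(mpg, everything())     # reorder so mpg comes first
```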
Text Processing Examples
------------------------
### Example 1
Let’s say you’re dealing with some data that has been handled typically, that is to say, poorly. For example, you have a variable in your data representing whether something is from the north or south region.
It might seem okay until…
```
## table(df$region)
```
| Var1 | Freq |
| --- | --- |
| South | 76 |
| north | 68 |
| North | 75 |
| north | 70 |
| North | 70 |
| south | 65 |
| South | 76 |
Even if you spotted the casing issue, there is still a white space problem[4](#fn4). Let’s say you want this to be capitalized ‘North’ and ‘South’. How might you do it? It’s actually quite easy with the stringr tools.
```
library(stringr)
df %>%
mutate(region = str_trim(region),
region = str_to_title(region))
```
The str\_trim function trims white space from the left, the right, or both sides (the default), while str\_to\_title capitalizes the first letter of each word.
```
## table(df_corrected$region)
```
| Var1 | Freq |
| --- | --- |
| North | 283 |
| South | 217 |
Compare that to how you would have done it before knowing how to use text processing tools. One might have spent several minutes with some find and replace approach in a spreadsheet, or maybe even several `if... else` statements in R until all problematic cases were taken care of. Not very efficient.
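If you want to see the two functions in isolation, a quick check in the console makes clear what each is doing:

```
str_to_title(str_trim(c(' north ', 'South', 'NORTH')))   # "North" "South" "North"
```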
### Example 2
Suppose you import a data frame, and the data was originally in wide format, where each column represented a year of data collection for the individual. Since it is bad form for data columns to have bare numbers for names, when you import it the columns come in with names like X1, X2, and so on.
So, the problem now is to change the names to be Year\_1, Year\_2, etc. You might think you have to use colnames and manually create a vector of names to replace the current ones.
```
colnames(df)[-1] = c('Year_1', 'Year_2', 'Year_3', 'Year_4', 'Year_5')
```
Or perhaps you’re thinking of the paste0 function, which works fine and saves some typing.
```
colnames(df)[-1] = paste0('Year_', 1:5)
```
However, data sets may have hundreds of columns, and the columns that share a naming pattern may not be next to one another. For example, the first few dozen columns might all belong to the first wave, and so on. It is tedious to figure out which columns you don’t want, but even then you’re resorting to magic numbers with the above approach, and a single change to the data’s columns would mean the renaming fails.
However, the following accomplishes what we want, and is reproducible regardless of where the columns are in the data set.
```
df %>%
rename_at(vars(num_range('X', 1:5)),
str_replace, pattern='X', replacement='Year_') %>%
head()
```
```
id Year_1 Year_2 Year_3 Year_4 Year_5
1 1 1.18 -2.04 -0.03 -0.36 0.43
2 2 0.34 -1.34 -0.30 -0.15 0.47
3 3 -0.32 -0.97 1.03 0.20 0.97
4 4 -0.57 1.36 1.29 0.00 0.32
5 5 0.64 0.73 -0.16 -1.29 -0.79
6 6 -0.59 0.16 -1.28 0.55 0.75
```
Let’s parse what it’s specifically doing.
* rename\_at allows us to rename specific columns
* Which columns? X1 through X5\. The num\_range helper function creates the character strings X1, X2, X3, X4, and X5\.
* Now that we have the names, we use vars to tell rename\_at which ones. It would have allowed additional sets of variables as well.
* rename\_at needs a function to apply to each of those column names. In this case the function is str\_replace, to replace patterns of strings with some other string
* The specific arguments to str\_replace (pattern to be replaced, replacement pattern) are also supplied.
So in the end we just have to use the num\_range helper function within the function that tells rename\_at what it should be renaming, and let str\_replace do the rest.
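As an aside, in more recent versions of dplyr the rename\_at/vars approach has been superseded by rename\_with, which accomplishes the same thing; a sketch of the equivalent call:

```
df %>%
  rename_with(~ str_replace(.x, pattern = 'X', replacement = 'Year_'),
              .cols = num_range('X', 1:5))
```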
Exercises
---------
1. In your own words, state the difference between a character string and a factor variable.
2. Consider the following character vector.
```
x = c('A', '1', 'Q')
```
How might you paste the elements together so that there is an underscore `_` between characters and no space (“A\_1\_Q”)? The next line contains a hint.
Revisit how we used the collapse argument within paste. `paste(..., collapse=?)`
Paste Part 2: The following application of paste produces this result.
```
paste(c('A', '1', 'Q'), c('B', '2', 'z'))
```
```
[1] "A B" "1 2" "Q z"
```
Now try to produce `"A - B" "1 - 2" "Q - z"`. To do this, note that one can paste any number of things together (i.e. more than two). So try adding ’ \- ’ to it.
3. Use regex to grab the Star Wars names that have a number in them. Use both grep and grepl and compare the results.
```
grep(starwars$name, pattern = ?)
```
Now use your hacking skills to determine which one is the tallest.
4. Load the dplyr package, and use its [helper functions](string-theory.html#dplyr-helper-functions) to grab all the columns in the starwars data set (comes with the package) with `color` in the name, but without referring to them directly. The following shows a generic example. There are several ways to do this. Try two if you can.
```
starwars %>%
select(helper_function('pattern'))
```
String Theory
=============
Basic data types
----------------
R has several core data structures:
* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames
Vectors form the basis of R data structures. There are two main types: atomic vectors and lists. All elements of an atomic vector are the same type.
Examples include:
* character
* numeric (double)
* integer
* logical
### Character strings
When dealing with text, objects of class character are what you’d typically be dealing with.
```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
x
```
Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, you could have a column where each entry is one of the works of Shakespeare.
### Factors
Although not exactly precise, one can think of factors as integers with labels. So, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the levels.
```
x = factor(rep(letters[1:3], e=10))
attributes(x)
```
```
$levels
[1] "a" "b" "c"
$class
[1] "factor"
```
While the underlying representation is numeric, it is important to remember that factors are *categorical*. They can’t be used as numbers would be, as the following demonstrates.
```
as.numeric(x)
```
```
[1] 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
```
```
sum(x)
```
```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors
```
Any numbers could be used; what we’re interested in are the labels, so a ‘sum’ doesn’t make any sense. All of the following would produce the same factor.
```
factor(c(1, 2, 3), labels=c('a', 'b', 'c'))
factor(c(3.2, 10, 500000), labels=c('a', 'b', 'c'))
factor(c(.49, 1, 5), labels=c('a', 'b', 'c'))
```
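A related gotcha: if the labels themselves look like numbers, converting the factor directly gives you the underlying integer codes rather than the labels. Going through as.character first gets what you usually want:

```
f = factor(c('10', '20', '20'))

as.numeric(f)                  # 1 2 2    (the integer codes)
as.numeric(as.character(f))    # 10 20 20 (the labels, as numbers)
```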
Because of the integer\+metadata representation, factors are actually smaller than character strings, often notably so.
```
x = sample(state.name, 10000, replace=T)
format(object.size(x), units='Kb')
```
```
[1] "80.8 Kb"
```
```
format(object.size(factor(x)), units='Kb')
```
```
[1] "42.4 Kb"
```
```
format(object.size(as.integer(factor(x))), units='Kb')
```
```
[1] "39.1 Kb"
```
However, if memory is really a concern, switching to factors probably won’t be what saves you; better hardware will.
### Analysis
It is important to know that raw text cannot be analyzed quantitatively. There is no magic that takes a categorical variable with text labels and estimates correlations among words and other words or numeric data. *Everything* that can be analyzed must have some numeric representation first, and this is where factors come in. For example, here is a data frame with two categorical predictors (`factor*`), a numeric predictor (`x`), and a numeric target (`y`). What follows is what it looks like if you wanted to run a regression model in that setting.
```
df =
crossing(factor_1 = c('A', 'B'),
factor_2 = c('Q', 'X', 'J')) %>%
mutate(x=rnorm(6),
y=rnorm(6))
df
```
```
# A tibble: 6 x 4
factor_1 factor_2 x y
<chr> <chr> <dbl> <dbl>
1 A J 0.797 -0.190
2 A Q -1.000 -0.496
3 A X 1.05 0.487
4 B J -0.329 -0.101
5 B Q 0.905 -0.809
6 B X 1.18 -1.92
```
```
## model.matrix(lm(y ~ x + factor_1 + factor_2, data=df))
```
| (Intercept) | x | factor\_1B | factor\_2Q | factor\_2X |
| --- | --- | --- | --- | --- |
| 1 | 0\.7968603 | 0 | 0 | 0 |
| 1 | \-0\.9999264 | 0 | 1 | 0 |
| 1 | 1\.0522363 | 0 | 0 | 1 |
| 1 | \-0\.3291774 | 1 | 0 | 0 |
| 1 | 0\.9049071 | 1 | 1 | 0 |
| 1 | 1\.1754300 | 1 | 0 | 1 |
The model.matrix function exposes the underlying matrix that is actually used in the regression analysis. You’d get a coefficient for each column of that matrix. As such, even the intercept must be represented in some fashion. For categorical data, the default coding scheme is dummy coding. A reference category is arbitrarily chosen (it doesn’t matter which, and you can always change it), while the other categories are represented by indicator variables, where a 1 represents the corresponding label and everything else is zero. For details on this coding scheme or others, consult any basic statistical modeling book.
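For example, base R’s relevel (or forcats::fct\_relevel) lets you pick which category serves as the reference. Continuing with the df created above, a quick sketch:

```
df_releveled = df %>%
  mutate(factor_2 = relevel(factor(factor_2), ref = 'Q'))

# the indicator columns are now factor_2J and factor_2X instead
model.matrix(lm(y ~ x + factor_1 + factor_2, data = df_releveled))
```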
In addition, you’ll note that in all text\-specific analysis, the underlying information is numeric. For example, with topic models, the base data structure is a document\-term matrix of counts.
### Characters vs. Factors
The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods will require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages.
For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days; as of R 4.0.0, for instance, data.frame and read.csv no longer convert strings to factors by default.
For more on this stuff see the following:
* [http://adv\-r.had.co.nz/Data\-structures.html](http://adv-r.had.co.nz/Data-structures.html)
* <http://forcats.tidyverse.org/>
* <http://r4ds.had.co.nz/factors.html>
* [https://simplystatistics.org/2015/07/24/stringsasfactors\-an\-unauthorized\-biography/](https://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/)
* [http://notstatschat.tumblr.com/post/124987394001/stringsasfactors\-sigh](http://notstatschat.tumblr.com/post/124987394001/stringsasfactors-sigh)
Basic Text Functionality
------------------------
### Base R
A lot of folks new to R are not aware of just how much basic text processing R comes with out of the box. Here are examples of note.
* paste: glue text/numeric values together
* substr: extract or replace substrings in a character vector
* grep family: use regular expressions to deal with patterns of text
* strsplit: split strings
* nchar: how many characters in a string
* as.numeric: convert a string to numeric if it can be
* strtoi: convert a string to integer if it can be (faster than as.integer)
* adist: string distances
I probably use paste/paste0 more than most things when dealing with text, as string concatenation comes up so often. The following provides some demonstration.
```
paste(c('a', 'b', 'cd'), collapse='|')
```
```
[1] "a|b|cd"
```
```
paste(c('a', 'b', 'cd'), collapse='')
```
```
[1] "abcd"
```
```
paste0('a', 'b', 'cd') # shortcut to collapse=''
```
```
[1] "abcd"
```
```
paste0('x', 1:3)
```
```
[1] "x1" "x2" "x3"
```
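The other base functions listed earlier are just as straightforward; a quick tour:

```
x = 'R is a language for text and data'

substr(x, start = 1, stop = 4)    # "R is"
nchar(x)                          # 33
strsplit(x, split = ' ')          # a list holding the individual words
as.numeric('3.14')                # 3.14
strtoi('42')                      # 42
adist('color', 'colour')          # edit distance of 1
```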
Beyond that, use of regular expression and functionality included in the grep family is a major way to save a lot of time during data processing. I leave that to its own section later.
### Useful packages
A couple packages will probably take care of the vast majority of your standard text processing needs. Note that even if they aren’t adding anything to the functionality of the base R functions, they typically will have been optimized in some fashion, particularly with regard to speed.
* stringr/stringi: More or less the same stuff you’ll find with substr, grep etc. except easier to use and/or faster. They also add useful functionality not in base R (e.g. str\_to\_title). The stringr package is mostly a wrapper for the stringi functions, with some additional functions.
* tidyr: has functions such as unite, separate, replace\_na that can often come in handy when working with data frames.
* glue: a newer package that can be seen as a fancier paste. Most likely it will be useful when creating functions or shiny apps in which variable text output is desired.
One issue I have with these packages and base R is that they often return a list object when they could simplify to the vector format they were initially fed. This sometimes requires an additional step or two of further processing that shouldn’t be necessary, so be prepared for it[1](#fn1).
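A small sketch of the tidyr and glue functionality mentioned above (the data frame here is made up):

```
library(tidyr)
library(glue)

d = data.frame(subject_wave = c('s01_w1', 's02_w2'))

separate(d, subject_wave, into = c('subject', 'wave'), sep = '_')

glue('There are {nrow(d)} subjects in this toy data set.')
```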
### Other
In this section, I’ll add some things that come to mind that might come into play when you’re dealing with text.
#### Dates
Dates are not character strings. Though they may start that way, if you actually want to treat them as dates you’ll need to convert the string to the appropriate date class. The lubridate package makes dealing with dates much easier. It comes with conversion, extraction and other functionality that will be sure to save you some time.
```
library(lubridate)
today()
```
```
[1] "2018-03-06"
```
```
today() + 1
```
```
[1] "2018-03-07"
```
```
today() + dyears(1)
```
```
[1] "2019-03-06"
```
```
leap_year(2016)
```
```
[1] TRUE
```
```
span = interval(ymd("2017-07-01"), ymd("2017-07-04"))
span
```
```
[1] 2017-07-01 UTC--2017-07-04 UTC
```
```
as.duration(span)
```
```
[1] "259200s (~3 days)"
```
```
span %/% minutes(1)
```
```
[1] 4320
```
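Parsing strings into dates and extracting components is just as straightforward:

```
x = mdy('March 6, 2018')

month(x)                 # 3
month(x, label = TRUE)   # Mar
wday(x, label = TRUE)    # Tue
year(x)                  # 2018
```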
This package makes dates so much easier, you should always use it when dealing with them.
#### Categorical Time
In regression modeling with few time points, one often has to decide on whether to treat the year as categorical (factor) or numeric (continuous). This greatly depends on how you want to tell your data story or other practical concerns. For example, if you have five years in your data, treating year as categorical means you are interested in accounting for unspecified things that go on in a given year. If you treat it as numeric, you are more interested in trends. Either is fine.
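In code, the distinction is just whether the year enters the model as a factor. A hypothetical sketch with made-up data:

```
d = data.frame(
  year = rep(2011:2015, each = 20),
  y    = rnorm(100)
)

lm(y ~ factor(year), data = d)   # categorical: a coefficient for each year after the first
lm(y ~ year, data = d)           # numeric: a single linear trend across years
```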
#### Web
A major resource for text is of course the web. Packages like rvest, httr, xml2, and many others specific to particular website APIs are available to help you here. See the [R task view for web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) as a starting point.
##### Encoding
Encoding can be a sizable PITA sometimes, and will often come up when dealing with web scraping and other languages. The rvest and stringr packages may be able to get you past some issues at least. See their respective functions repair\_encoding and str\_conv as starting points on this issue.
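For instance, stringr’s str\_conv can reinterpret the bytes of a string under a different encoding; a minimal sketch:

```
library(stringr)

# the latin-1 bytes for 'façile'; printed as-is they look garbled on a UTF-8 system
x = rawToChar(as.raw(c(0x66, 0x61, 0xE7, 0x69, 0x6C, 0x65)))

str_conv(x, encoding = 'ISO-8859-1')   # "façile"
```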
### Summary of basic text functionality
Being familiar with commonly used string functionality in base R and packages like stringr can save a ridiculous amount of time in your data processing. The more familiar you are with them the easier time you’ll have with text.
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to merely detect, extract, or replace the matching string. There are actually many different flavors of regex across programming languages, but most of them either derive from the Perl flavor or can emulate it. As such, knowing one means you pretty much know the others, with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex here, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern='^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically, it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with the preceding pattern
### Typical Uses
None of it makes sense, so don’t attempt to do so. Just try to remember a couple key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower\-case letter or a number)
* **?** : the preceding item is optional and will be matched at most once. It also appears in ‘look ahead’ and ‘look behind’ constructs.
* **\\** : escape a character. For example, to search for a literal period rather than treating it as a regex metacharacter, you’d use \\., though in an R string you need \\\\., i.e. a double backslash, for the escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality, with perhaps slightly faster processing (thanks to the underlying stringi package).
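Here is a quick, minimal demonstration of a few of the patterns above using base R functions (the strings are made up; expected results are noted in comments):
```
grepl('\\.', c('file.txt', 'filename'))        # TRUE FALSE: an escaped literal period
grepl('^[A-Z]', c('Apple', 'apple'))           # TRUE FALSE: starts with a capital letter
gsub('[[:punct:]]', '', "well, that's nice!")  # "well thats nice": strip punctuation
```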
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
### dplyr helper functions
The dplyr package comes with some poorly documented[2](#fn2) but quite useful helper functions that essentially serve as human\-readable regex, which is a very good thing. These functions allow you to select variables[3](#fn3) based on their names, and they are usually just calling base R functions in the end. A brief demonstration follows the list.
* starts\_with: starts with a prefix (same as regex ‘^blah’)
* ends\_with: ends with a suffix (same as regex 'blah$')
* contains: contains a literal string (same as regex ‘blah’)
* matches: matches a regular expression (put your regex here)
* num\_range: a numerical range like x01, x02, x03\. (same as regex ‘x\[0\-9]\[0\-9]’)
* one\_of: variables in character vector. (if you need to quote variable names, e.g. within a function)
* everything: all variables. (a good way to spend time doing something only to accomplish what you would have by doing nothing, or a way to reorder variables)
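A brief sketch of a couple of these helpers using the starwars data that comes with dplyr:
```
library(dplyr)

starwars %>% 
  select(ends_with('color')) %>%  # hair_color, skin_color, eye_color
  head()

starwars %>% 
  select(starts_with('mass'))     # just the mass column
```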
For more on using stringr and regular expressions in R, you may find [this cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/strings.pdf) useful.
Text Processing Examples
------------------------
### Example 1
Let’s say you’re dealing with some data that has been handled typically, that is to say, poorly. For example, you have a variable in your data representing whether something is from the north or south region.
It might seem okay until…
```
## table(df$region)
```
| Var1 | Freq |
| --- | --- |
| South | 76 |
| north | 68 |
| North | 75 |
| north | 70 |
| North | 70 |
| south | 65 |
| South | 76 |
Even if you spotted the casing issue, there is still a white space problem[4](#fn4). Let’s say you want this to be capitalized ‘North’ and ‘South’. How might you do it? It’s actually quite easy with the stringr tools.
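The construction of the original data isn't shown, but purely for illustration, a messy variable like the one tabled above might be created as follows (the counts won't match the corrected table below exactly):
```
# hypothetical messy data: inconsistent casing and stray white space
set.seed(1234)
df = data.frame(
  region = sample(c('North', 'north', 'North ', 'South', 'south', ' South'),
                  500, replace = TRUE),
  stringsAsFactors = FALSE
)
```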
```
library(stringr)
df_corrected = df %>% 
  mutate(region = str_trim(region),
         region = str_to_title(region))
```
The str\_trim function trims white space from the left, right, or both sides, while str\_to\_title capitalizes the first letter of each word.
```
## table(df_corrected$region)
```
| Var1 | Freq |
| --- | --- |
| North | 283 |
| South | 217 |
Compare that to how you would have done it before knowing how to use text processing tools. One might have spent several minutes with some find and replace approach in a spreadsheet, or maybe even several `if... else` statements in R until all problematic cases were taken care of. Not very efficient.
### Example 2
Suppose you import a data frame, and the data was originally in wide format, where each column represented a year of data collection for the individual. Since it is bad form for data columns to have numbers for names, the columns come in with an X prefix when imported, i.e. X1, X2, and so on.
So the problem now is to change the names to be Year\_1, Year\_2, etc. You might think you have to use colnames and manually create a string of names to replace the current ones.
```
colnames(df)[-1] = c('Year_1', 'Year_2', 'Year_3', 'Year_4', 'Year_5')
```
Or perhaps you’re thinking of the paste0 function, which works fine and saves some typing.
```
colnames(df)[-1] = paste0('Year_', 1:5)
```
However, data sets may have hundreds of columns, and the columns that share a pattern may not be next to one another. For example, the first few dozen columns might all belong to the first wave, and so on. It is tedious to figure out which columns you don't want, and even then you're resorting to magic numbers with the above approach, so a single change to the data's columns will mean the renaming no longer works as intended.
The following, by contrast, accomplishes what we want, and is reproducible regardless of where the columns are in the data set.
```
df %>%
rename_at(vars(num_range('X', 1:5)),
str_replace, pattern='X', replacement='Year_') %>%
head()
```
```
id Year_1 Year_2 Year_3 Year_4 Year_5
1 1 1.18 -2.04 -0.03 -0.36 0.43
2 2 0.34 -1.34 -0.30 -0.15 0.47
3 3 -0.32 -0.97 1.03 0.20 0.97
4 4 -0.57 1.36 1.29 0.00 0.32
5 5 0.64 0.73 -0.16 -1.29 -0.79
6 6 -0.59 0.16 -1.28 0.55 0.75
```
Let’s parse what it’s specifically doing.
* rename\_at allows us to rename specific columns
* Which columns? X1 through X5\. The num\_range helper function creates the character strings X1, X2, X3, X4, and X5\.
* Now that we have the names, we use vars to tell rename\_at which ones. It would have allowed additional sets of variables as well.
* rename\_at needs a function to apply to each of those column names. In this case the function is str\_replace, which replaces patterns of strings with some other string.
* The specific arguments to str\_replace (pattern to be replaced, replacement pattern) are also supplied.
So in the end we just have to use the num\_range helper function within the function that tells rename\_at what it should be renaming, and let str\_replace do the rest.
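As an aside, newer versions of dplyr (1.0+) provide rename\_with, which accomplishes the same thing; here is a sketch under the same assumption that the columns are named X1 through X5:
```
df %>% 
  rename_with(~ str_replace(.x, pattern = 'X', replacement = 'Year_'),
              .cols = num_range('X', 1:5)) %>% 
  head()
```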
Exercises
---------
1. In your own words, state the difference between a character string and a factor variable.
2. Consider the following character vector.
```
x = c('A', '1', 'Q')
```
How might you paste the elements together so that there is an underscore `_` between characters and no space (“A\_1\_Q”)? If you highlight the next line you’ll see the hint.
Revisit how we used the collapse argument within paste. `paste(..., collapse=?)`
Paste Part 2: The following application of paste produces this result.
```
paste(c('A', '1', 'Q'), c('B', '2', 'z'))
```
```
[1] "A B" "1 2" "Q z"
```
Now try to produce `"A - B" "1 - 2" "Q - z"`. To do this, note that one can paste any number of things together (i.e. more than two). So try adding ’ \- ’ to it.
3. Use regex to grab the Star Wars names that have a number. Use both grep and grepl and compare the results.
```
grep(starwars$name, pattern = ?)
```
Now use your hacking skills to determine which one is the tallest.
4. Load the dplyr package, and use its [helper functions](string-theory.html#dplyr-helper-functions) to grab all the columns in the starwars data set (it comes with the package) that have `color` in the name, without referring to them directly. The following shows a generic example. There are several ways to do this; try two if you can.
```
starwars %>%
select(helper_function('pattern'))
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/string-theory.html |
String Theory
=============
Basic data types
----------------
R has several core data structures:
* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames
Vectors form the basis of R data structures. There are two main types: atomic vectors and lists. All elements of an atomic vector are the same type.
Examples include:
* character
* numeric (double)
* integer
* logical
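A quick way to check what type you're dealing with is typeof (a minimal demonstration):
```
typeof(c(1, 2, 3))      # "double"
typeof(c(1L, 2L, 3L))   # "integer"
typeof(c('a', 'b'))     # "character"
typeof(c(TRUE, FALSE))  # "logical"
typeof(list(1, 'a'))    # "list"
```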
### Character strings
When dealing with text, objects of class character are what you'll typically be working with.
```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
x
```
Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, you could have a column where each entry is one of the works of Shakespeare.
### Factors
Although not exactly precise, one can think of factors as integers with labels. So, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the levels.
```
x = factor(rep(letters[1:3], e=10))
attributes(x)
```
```
$levels
[1] "a" "b" "c"
$class
[1] "factor"
```
While the underlying representation is numeric, it is important to remember that factors are *categorical*. They can’t be used as numbers would be, as the following demonstrates.
```
as.numeric(x)
```
```
[1] 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
```
```
sum(x)
```
```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors
```
Any numbers could be used; what we're interested in is the labels, so a 'sum' doesn't make any sense. All of the following would produce the same factor.
```
factor(c(1, 2, 3), labels=c('a', 'b', 'c'))
factor(c(3.2, 10, 500000), labels=c('a', 'b', 'c'))
factor(c(.49, 1, 5), labels=c('a', 'b', 'c'))
```
Because of the integer\+metadata representation, factors are actually smaller than character strings, often notably so.
```
x = sample(state.name, 10000, replace=T)
format(object.size(x), units='Kb')
```
```
[1] "80.8 Kb"
```
```
format(object.size(factor(x)), units='Kb')
```
```
[1] "42.4 Kb"
```
```
format(object.size(as.integer(factor(x))), units='Kb')
```
```
[1] "39.1 Kb"
```
However, if memory is really a concern, factors probably aren't what will save you; better hardware will.
### Analysis
It is important to know that raw text cannot be analyzed quantitatively. There is no magic that takes a categorical variable with text labels and estimates correlations among words, or between words and numeric data. *Everything* that can be analyzed must have some numeric representation first, and this is where factors come in. For example, here is a data frame with two categorical predictors (`factor*`), a numeric predictor (`x`), and a numeric target (`y`). What follows is what it looks like if you wanted to run a regression model in that setting.
```
df =
crossing(factor_1 = c('A', 'B'),
factor_2 = c('Q', 'X', 'J')) %>%
mutate(x=rnorm(6),
y=rnorm(6))
df
```
```
# A tibble: 6 x 4
factor_1 factor_2 x y
<chr> <chr> <dbl> <dbl>
1 A J 0.797 -0.190
2 A Q -1.000 -0.496
3 A X 1.05 0.487
4 B J -0.329 -0.101
5 B Q 0.905 -0.809
6 B X 1.18 -1.92
```
```
## model.matrix(lm(y ~ x + factor_1 + factor_2, data=df))
```
| (Intercept) | x | factor\_1B | factor\_2Q | factor\_2X |
| --- | --- | --- | --- | --- |
| 1 | 0\.7968603 | 0 | 0 | 0 |
| 1 | \-0\.9999264 | 0 | 1 | 0 |
| 1 | 1\.0522363 | 0 | 0 | 1 |
| 1 | \-0\.3291774 | 1 | 0 | 0 |
| 1 | 0\.9049071 | 1 | 1 | 0 |
| 1 | 1\.1754300 | 1 | 0 | 1 |
The model.matrix function exposes the underlying matrix that is actually used in the regression analysis. You’d get a coefficient for each column of that matrix. As such, even the intercept must be represented in some fashion. For categorical data, the default coding scheme is dummy coding. A reference category is arbitrarily chosen (it doesn’t matter which, and you can always change it), while the other categories are represented by indicator variables, where a 1 represents the corresponding label and everything else is zero. For details on this coding scheme or others, consult any basic statistical modeling book.
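For example, if you wanted a different reference category, you could relevel the factor before fitting; a small sketch using the df created above (assuming the tidyverse is loaded as elsewhere):
```
df_releveled = df %>% 
  mutate(factor_2 = relevel(factor(factor_2), ref = 'Q'))

# now the factor_2 indicator columns are J and X, with Q as the reference
model.matrix(lm(y ~ x + factor_1 + factor_2, data = df_releveled))
```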
In addition, you’ll note that in all text\-specific analysis, the underlying information is numeric. For example, with topic models, the base data structure is a document\-term matrix of counts.
### Characters vs. Factors
The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods will require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages.
For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days.
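For instance, explicitly setting the factor levels is all it takes to get the ordering you actually want (a minimal example):
```
sizes = c('small', 'large', 'medium', 'small')

levels(factor(sizes))                                          # alphabetical: "large" "medium" "small"
levels(factor(sizes, levels = c('small', 'medium', 'large')))  # the order you actually want
```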
For more on this stuff see the following:
* [http://adv\-r.had.co.nz/Data\-structures.html](http://adv-r.had.co.nz/Data-structures.html)
* <http://forcats.tidyverse.org/>
* <http://r4ds.had.co.nz/factors.html>
* [https://simplystatistics.org/2015/07/24/stringsasfactors\-an\-unauthorized\-biography/](https://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/)
* [http://notstatschat.tumblr.com/post/124987394001/stringsasfactors\-sigh](http://notstatschat.tumblr.com/post/124987394001/stringsasfactors-sigh)
Basic Text Functionality
------------------------
### Base R
A lot of folks new to R are not aware of just how much basic text processing R comes with out of the box. Here are some functions of note; a few are demonstrated briefly after the list.
* paste: glue text/numeric values together
* substr: extract or replace substrings in a character vector
* grep family: use regular expressions to deal with patterns of text
* strsplit: split strings
* nchar: how many characters in a string
* as.numeric: convert a string to numeric if it can be
* strtoi: convert a string to integer if it can be (faster than as.integer)
* adist: string distances
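Here are a few of those in action (results noted in comments):
```
substr('abcdef', start = 2, stop = 4)  # "bcd"
strsplit('a,b,c', split = ',')         # a list whose first element is c("a", "b", "c")
nchar(c('a', 'abc'))                   # 1 3
adist('kitten', 'sitting')             # 3 (edit distance)
```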
I probably use paste/paste0 more than most things when dealing with text, as string concatenation comes up so often. The following provides some demonstration.
```
paste(c('a', 'b', 'cd'), collapse='|')
```
```
[1] "a|b|cd"
```
```
paste(c('a', 'b', 'cd'), collapse='')
```
```
[1] "abcd"
```
```
paste0('a', 'b', 'cd') # shortcut to collapse=''
```
```
[1] "abcd"
```
```
paste0('x', 1:3)
```
```
[1] "x1" "x2" "x3"
```
Beyond that, use of regular expressions and the functionality in the grep family is a major way to save a lot of time during data processing. I leave that to its own section later.
### Useful packages
A couple of packages will probably take care of the vast majority of your standard text processing needs. Note that even if they aren't adding anything to the functionality of the base R functions, they typically will have been optimized in some fashion, particularly with regard to speed.
* stringr/stringi: More or less the same stuff you’ll find with substr, grep etc. except easier to use and/or faster. They also add useful functionality not in base R (e.g. str\_to\_title). The stringr package is mostly a wrapper for the stringi functions, with some additional functions.
* tidyr: has functions such as unite, separate, replace\_na that can often come in handy when working with data frames.
* glue: a newer package that can be seen as a fancier paste. Most likely it will be useful when creating functions or shiny apps in which variable text output is desired.
One issue I have with both these packages and base R is that they often return a list object when they could simply return a vector in the same form they were given. This sometimes requires an additional step or two of further processing that shouldn't be necessary, so be prepared for it[1](#fn1).
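As a quick sketch of the tidyr and glue functionality mentioned above (the values are made up for illustration):
```
library(tidyr)
library(glue)

# tidyr: split one column into several
separate(data.frame(date = '2018-03-06', stringsAsFactors = FALSE),
         date, into = c('year', 'month', 'day'), sep = '-')

# glue: paste-like string interpolation
name = 'world'
glue('hello {name}')  # hello world
```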
### Other
In this section, I’ll add some things that come to mind that might come into play when you’re dealing with text.
#### Dates
Dates are not character strings. Though they may start that way, if you actually want to treat them as dates you’ll need to convert the string to the appropriate date class. The lubridate package makes dealing with dates much easier. It comes with conversion, extraction and other functionality that will be sure to save you some time.
```
library(lubridate)
today()
```
```
[1] "2018-03-06"
```
```
today() + 1
```
```
[1] "2018-03-07"
```
```
today() + dyears(1)
```
```
[1] "2019-03-06"
```
```
leap_year(2016)
```
```
[1] TRUE
```
```
span = interval(ymd("2017-07-01"), ymd("2017-07-04"))
span
```
```
[1] 2017-07-01 UTC--2017-07-04 UTC
```
```
as.duration(span)
```
```
[1] "259200s (~3 days)"
```
```
span %/% minutes(1)
```
```
[1] 4320
```
This package makes dates so much easier that you should always use it when dealing with them.
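One thing not shown above is parsing dates from differently formatted strings, which lubridate also handles; a brief sketch (assuming lubridate is loaded as above):
```
mdy('March 6, 2018')                   # "2018-03-06"
dmy('06-03-2018')                      # "2018-03-06"
ymd_hms('2018-03-06 14:30:00')         # "2018-03-06 14:30:00 UTC"
wday(ymd('2018-03-06'), label = TRUE)  # Tue
```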
#### Categorical Time
In regression modeling with few time points, one often has to decide whether to treat the year as categorical (factor) or numeric (continuous). This depends largely on how you want to tell your data story, as well as other practical concerns. For example, if you have five years in your data, treating year as categorical means you are interested in accounting for unspecified things that go on in a given year. If you treat it as numeric, you are more interested in the overall trend. Either is fine.
#### Web
A major resource for text is of course the web. Packages like rvest, httr, xml2, and many others specific to particular website APIs are available to help you here. See the [R task view for web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) as a starting point.
##### Encoding
Encoding can be a sizable PITA sometimes, and will often come up when dealing with web scraping and other languages. The rvest and stringr packages may be able to get you past some issues, at least. See their respective functions repair\_encoding and str\_conv as starting points on this issue.
### Summary of basic text functionality
Being familiar with commonly used string functionality in base R and packages like stringr can save a ridiculous amount of time in your data processing. The more familiar you are with them the easier time you’ll have with text.
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that defines a search pattern for strings. Common operations are to detect, extract, or replace the matching string. There are actually many different flavors of regex for different programming languages, most of which originate with, or can emulate, the Perl approach. However, knowing one means you pretty much know the others, with only minor modifications if any.
To be clear, not only is regex effectively another language, it's nigh on indecipherable. You will not learn much regex here, but what you do learn will save a potentially enormous amount of time you'd otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you'll almost always be able to search for what you need.
Here is an example:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern='^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny\_’ where \_ is some single digit. Specifically, it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numeric values)
* **$** : the string ends with the preceding pattern
### Typical Uses
Little of it reads intuitively, so don't try to memorize it all. Just try to remember a couple of key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower\-case letter or a number)
* **?** : preceding item is optional, and will be matched at most once. Typically used for ‘look ahead’ and ‘look behind’
* **\\** : escape a character, e.g. if you actually wanted to search for a literal period instead of using it as a regex pattern you’d use \\., though in an R string you need \\\\, i.e. a doubled backslash, to escape it.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`), and the regular expression syntax itself is documented in `?regex`. However, the stringr package has the same functionality, with perhaps slightly faster processing (though that’s due to the underlying stringi package).
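Note that when one of these predefined classes is actually used in a pattern, it goes inside a bracket expression, i.e. double brackets. A quick sketch:
```
x = c('It was 20-odd years ago!', 'Send $5 to P.O. Box 123')

gsub('[[:punct:]]', '', x)      # strip punctuation
gsub('[^[:alnum:] ]', '', x)    # keep only letters, digits, and spaces
```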
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
### dplyr helper functions
The dplyr package comes with some poorly documented[2](#fn2) but quite useful helper functions that essentially serve as human\-readable regex, which is a very good thing. These functions allow you to select variables[3](#fn3) based on their names. They are usually just calling base R functions in the end.
* starts\_with: starts with a prefix (same as regex ‘^blah’)
* ends\_with: ends with a suffix (same as regex ‘blah$’)
* contains: contains a literal string (same as regex ‘blah’)
* matches: matches a regular expression (put your regex here)
* num\_range: a numerical range like x01, x02, x03\. (same as regex ‘x\[0\-9]\[0\-9]’)
* one\_of: variables in character vector. (if you need to quote variable names, e.g. within a function)
* everything: all variables. (a good way to spend time doing something only to accomplish what you would have by doing nothing, or a way to reorder variables)
For more on using stringr and regular expressions in R, you may find [this cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/strings.pdf) useful.
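As a small sketch of a couple of these in action, using the built-in iris data purely for illustration:
```
library(dplyr)

iris %>% 
  select(starts_with('Petal')) %>% 
  head(3)

iris %>% 
  select(matches('Width$')) %>% 
  head(3)
```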
Text Processing Examples
------------------------
### Example 1
Let’s say you’re dealing with some data that has been handled typically, that is to say, poorly. For example, you have a variable in your data representing whether something is from the north or south region.
It might seem okay until…
```
## table(df$region)
```
| Var1 | Freq |
| --- | --- |
| South | 76 |
| north | 68 |
| North | 75 |
| north | 70 |
| North | 70 |
| south | 65 |
| South | 76 |
Even if you spotted the casing issue, there is still a white space problem[4](#fn4). Let’s say you want this to be capitalized ‘North’ and ‘South’. How might you do it? It’s actually quite easy with the stringr tools.
```
library(stringr)
df %>%
mutate(region = str_trim(region),
region = str_to_title(region))
```
The str\_trim function trims white space from either side (or both), while str\_to\_title capitalizes the first letter of each word.
```
## table(df_corrected$region)
```
| Var1 | Freq |
| --- | --- |
| North | 283 |
| South | 217 |
Compare that to how you would have done it before knowing how to use text processing tools. One might have spent several minutes with some find and replace approach in a spreadsheet, or maybe even several `if... else` statements in R until all problematic cases were taken care of. Not very efficient.
### Example 2
Suppose you import a data frame, and the data was originally in wide format, where each column represented a year of data collection for the individual. Since it is bad form for data columns to have bare numbers for names, on import the columns end up with names like X1 through X5 rather than just the year.
So, the problem now is to change the names to be Year\_1, Year\_2, etc. You might think you have to use colnames and manually create a vector of names to replace the current ones.
```
colnames(df)[-1] = c('Year_1', 'Year_2', 'Year_3', 'Year_4', 'Year_5')
```
Or perhaps you’re thinking of the paste0 function, which works fine and saves some typing.
```
colnames(df)[-1] = paste0('Year_', 1:5)
```
However, data sets may have hundreds of columns, and the columns of data may have the same pattern but not be next to one another. For example, the first few dozen columns might all be data that belongs to the first wave, and so on. It is tedious to figure out which columns you don’t want, and even then you’re resorting to magic numbers with the above approach, so a single change to the data’s columns means the renaming will fail when you have to redo it.
However, the following accomplishes what we want, and is reproducible regardless of where the columns are in the data set.
```
df %>%
rename_at(vars(num_range('X', 1:5)),
str_replace, pattern='X', replacement='Year_') %>%
head()
```
```
id Year_1 Year_2 Year_3 Year_4 Year_5
1 1 1.18 -2.04 -0.03 -0.36 0.43
2 2 0.34 -1.34 -0.30 -0.15 0.47
3 3 -0.32 -0.97 1.03 0.20 0.97
4 4 -0.57 1.36 1.29 0.00 0.32
5 5 0.64 0.73 -0.16 -1.29 -0.79
6 6 -0.59 0.16 -1.28 0.55 0.75
```
Let’s parse what it’s specifically doing.
* rename\_at allows us to rename specific columns
* Which columns? X1 through X5\. The num\_range helper function creates the character strings X1, X2, X3, X4, and X5\.
* Now that we have the names, we use vars to tell rename\_at which ones. It would have allowed additional sets of variables as well.
* rename\_at needs a function to apply to each of those column names. In this case the function is str\_replace, to replace patterns of strings with some other string
* The specific arguments to str\_replace (pattern to be replaced, replacement pattern) are also supplied.
So in the end we just have to use the num\_range helper function within the function that tells rename\_at what it should be renaming, and let str\_replace do the rest.
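As an aside, in dplyr 1.0 or later the rename\_at approach has been superseded by rename\_with; a sketch of the same rename with that function might look like the following.
```
# same rename using the newer rename_with (dplyr >= 1.0); str_replace is from stringr
df %>% 
  rename_with(~ str_replace(., pattern = 'X', replacement = 'Year_'), 
              num_range('X', 1:5)) %>% 
  head()
```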
Exercises
---------
1. In your own words, state the difference between a character string and a factor variable.
2. Consider the following character vector.
```
x = c('A', '1', 'Q')
```
How might you paste the elements together so that there is an underscore `_` between characters and no space (“A\_1\_Q”)?
Hint: revisit how we used the collapse argument within paste. `paste(..., collapse=?)`
Paste Part 2: The following application of paste produces this result.
```
paste(c('A', '1', 'Q'), c('B', '2', 'z'))
```
```
[1] "A B" "1 2" "Q z"
```
Now try to produce `"A - B" "1 - 2" "Q - z"`. To do this, note that one can paste any number of things together (i.e. more than two). So try adding ’ \- ’ to it.
3. Use regex to grab the Star Wars names that have a number. Use both grep and grepl and compare the results.
```
grep(starwars$name, pattern = ?)
```
Now use your hacking skills to determine which one is the tallest.
4. Load the dplyr package, and use its [helper functions](string-theory.html#dplyr-helper-functions) to grab all the columns in the starwars data set (comes with the package) with `color` in the name but without referring to them directly. The following shows a generic example. There are several ways to do this. Try two if you can.
```
starwars %>%
select(helper_function('pattern'))
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/string-theory.html |
String Theory
=============
Basic data types
----------------
R has several core data structures:
* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames
Vectors form the basis of R data structures. There are two main types: atomic vectors and lists. All elements of an atomic vector are the same type.
Examples include:
* character
* numeric (double)
* integer
* logical
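To make the distinction concrete, here is a minimal sketch; an atomic vector has a single type, while a list can mix them.
```
x = c(1.5, 2, 3)
typeof(x)    # "double"; every element shares the type

y = list(1.5, 'a', TRUE)
typeof(y)    # "list"; elements keep their own types
```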
### Character strings
When dealing with text, objects of class character are what you’d typically be dealing with.
```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
x
```
Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, you could have a column where each entry is one of the works of Shakespeare.
### Factors
Although not exactly precise, one can think of factors as integers with labels. So, the underlying representation of a variable for sex is 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contains the information about the levels.
```
x = factor(rep(letters[1:3], e=10))
attributes(x)
```
```
$levels
[1] "a" "b" "c"
$class
[1] "factor"
```
While the underlying representation is numeric, it is important to remember that factors are *categorical*. They can’t be used as numbers would be, as the following demonstrates.
```
as.numeric(x)
```
```
[1] 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
```
```
sum(x)
```
```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors
```
Any numbers could be used; what we’re interested in are the labels, so a ‘sum’ doesn’t make any sense. All of the following would produce the same factor.
```
factor(c(1, 2, 3), labels=c('a', 'b', 'c'))
factor(c(3.2, 10, 500000), labels=c('a', 'b', 'c'))
factor(c(.49, 1, 5), labels=c('a', 'b', 'c'))
```
Because of the integer\+metadata representation, factors are actually smaller than character strings, often notably so.
```
x = sample(state.name, 10000, replace=T)
format(object.size(x), units='Kb')
```
```
[1] "80.8 Kb"
```
```
format(object.size(factor(x)), units='Kb')
```
```
[1] "42.4 Kb"
```
```
format(object.size(as.integer(factor(x))), units='Kb')
```
```
[1] "39.1 Kb"
```
However, if memory is really a concern, using factors probably won’t help as much as better hardware would.
### Analysis
It is important to know that raw text cannot be analyzed quantitatively. There is no magic that takes a categorical variable with text labels and estimates correlations among words and other words or numeric data. *Everything* that can be analyzed must have some numeric representation first, and this is where factors come in. For example, here is a data frame with two categorical predictors (`factor*`), a numeric predictor (`x`), and a numeric target (`y`). What follows is what it looks like if you wanted to run a regression model in that setting.
```
df =
crossing(factor_1 = c('A', 'B'),
factor_2 = c('Q', 'X', 'J')) %>%
mutate(x=rnorm(6),
y=rnorm(6))
df
```
```
# A tibble: 6 x 4
factor_1 factor_2 x y
<chr> <chr> <dbl> <dbl>
1 A J 0.797 -0.190
2 A Q -1.000 -0.496
3 A X 1.05 0.487
4 B J -0.329 -0.101
5 B Q 0.905 -0.809
6 B X 1.18 -1.92
```
```
## model.matrix(lm(y ~ x + factor_1 + factor_2, data=df))
```
| (Intercept) | x | factor\_1B | factor\_2Q | factor\_2X |
| --- | --- | --- | --- | --- |
| 1 | 0\.7968603 | 0 | 0 | 0 |
| 1 | \-0\.9999264 | 0 | 1 | 0 |
| 1 | 1\.0522363 | 0 | 0 | 1 |
| 1 | \-0\.3291774 | 1 | 0 | 0 |
| 1 | 0\.9049071 | 1 | 1 | 0 |
| 1 | 1\.1754300 | 1 | 0 | 1 |
The model.matrix function exposes the underlying matrix that is actually used in the regression analysis. You’d get a coefficient for each column of that matrix. As such, even the intercept must be represented in some fashion. For categorical data, the default coding scheme is dummy coding. A reference category is arbitrarily chosen (it doesn’t matter which, and you can always change it), while the other categories are represented by indicator variables, where a 1 represents the corresponding label and everything else is zero. For details on this coding scheme or others, consult any basic statistical modeling book.
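To see the dummy coding and the reference category directly, and to change the reference if you want a different one, here is a small sketch.
```
f = factor(c('A', 'B', 'B', 'C'))
contrasts(f)                # indicator columns for B and C; A is the reference

f2 = relevel(f, ref = 'B')
contrasts(f2)               # now B is the reference category
```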
In addition, you’ll note that in all text\-specific analysis, the underlying information is numeric. For example, with topic models, the base data structure is a document\-term matrix of counts.
### Characters vs. Factors
The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods will require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages.
For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days.
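For example, the ordering issue can be dealt with by setting the factor levels explicitly (the forcats package linked below has helpers for this too). A minimal sketch:
```
x = c('low', 'medium', 'high')

levels(factor(x))                                       # alphabetical: "high" "low" "medium"
levels(factor(x, levels = c('low', 'medium', 'high')))  # the order you actually want
```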
For more on this stuff see the following:
* [http://adv\-r.had.co.nz/Data\-structures.html](http://adv-r.had.co.nz/Data-structures.html)
* <http://forcats.tidyverse.org/>
* <http://r4ds.had.co.nz/factors.html>
* [https://simplystatistics.org/2015/07/24/stringsasfactors\-an\-unauthorized\-biography/](https://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/)
* [http://notstatschat.tumblr.com/post/124987394001/stringsasfactors\-sigh](http://notstatschat.tumblr.com/post/124987394001/stringsasfactors-sigh)
Basic Text Functionality
------------------------
### Base R
A lot of folks new to R are not aware of just how much basic text processing R comes with out of the box. Here are examples of note.
* paste: glue text/numeric values together
* substr: extract or replace substrings in a character vector
* grep family: use regular expressions to deal with patterns of text
* strsplit: split strings
* nchar: how many characters in a string
* as.numeric: convert a string to numeric if it can be
* strtoi: convert a string to integer if it can be (faster than as.integer)
* adist: string distances
I probably use paste/paste0 more than most things when dealing with text, as string concatenation comes up so often. The following provides some demonstration.
```
paste(c('a', 'b', 'cd'), collapse='|')
```
```
[1] "a|b|cd"
```
```
paste(c('a', 'b', 'cd'), collapse='')
```
```
[1] "abcd"
```
```
paste0('a', 'b', 'cd') # shortcut to collapse=''
```
```
[1] "abcd"
```
```
paste0('x', 1:3)
```
```
[1] "x1" "x2" "x3"
```
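A few of the other base functions from the list above, in a quick sketch:
```
x = 'regular expressions'

nchar(x)                   # 19
substr(x, 1, 7)            # "regular"
strsplit(x, ' ')[[1]]      # "regular" "expressions"
adist('color', 'colour')   # edit distance of 1
```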
Beyond that, using regular expressions and the functionality in the grep family is a major way to save a lot of time during data processing. I leave that to its own section later.
### Useful packages
A couple packages will probably take care of the vast majority of your standard text processing needs. Note that even if they aren’t adding anything to the functionality of the base R functions, they typically will have been optimized in some fashion, particularly with regard to speed.
* stringr/stringi: More or less the same stuff you’ll find with substr, grep etc. except easier to use and/or faster. They also add useful functionality not in base R (e.g. str\_to\_title). The stringr package is mostly a wrapper for the stringi functions, with some additional functions.
* tidyr: has functions such as unite, separate, replace\_na that can often come in handy when working with data frames.
* glue: a newer package that can be seen as a fancier paste. Most likely it will be useful when creating functions or shiny apps in which variable text output is desired.
One issue I have with these packages and base R alike is that they often return a list object when they could simplify to the vector format they were initially fed. This sometimes requires an additional step or two of further processing that shouldn’t be necessary, so be prepared for it[1](#fn1).
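Of those, glue may be the least familiar; here is a minimal sketch of the kind of templating it does (the variable names are made up purely for illustration).
```
library(glue)

name  = 'Bill'    # made-up values, just to show the interpolation
n_obs = 100

glue('{name} has a data set with {n_obs} observations.')
```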
### Other
In this section, I’ll add some things that come to mind that might come into play when you’re dealing with text.
#### Dates
Dates are not character strings. Though they may start that way, if you actually want to treat them as dates you’ll need to convert the string to the appropriate date class. The lubridate package makes dealing with dates much easier. It comes with conversion, extraction and other functionality that will be sure to save you some time.
```
library(lubridate)
today()
```
```
[1] "2018-03-06"
```
```
today() + 1
```
```
[1] "2018-03-07"
```
```
today() + dyears(1)
```
```
[1] "2019-03-06"
```
```
leap_year(2016)
```
```
[1] TRUE
```
```
span = interval(ymd("2017-07-01"), ymd("2017-07-04"))
span
```
```
[1] 2017-07-01 UTC--2017-07-04 UTC
```
```
as.duration(span)
```
```
[1] "259200s (~3 days)"
```
```
span %/% minutes(1)
```
```
[1] 4320
```
This package makes dates so much easier; you should always use it when dealing with them.
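A sketch of the sort of string-to-date conversion and component extraction mentioned above:
```
library(lubridate)

mdy('March 6, 2018')                    # "2018-03-06"
dmy('06-03-2018')                       # same date from a different format
ymd_hms('2018-03-06 14:30:00')          # a date-time

month(ymd('2018-03-06'), label = TRUE)  # extract the month as a labeled factor
wday(ymd('2018-03-06'), label = TRUE)   # day of the week
```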
#### Categorical Time
In regression modeling with few time points, one often has to decide whether to treat the year as categorical (factor) or numeric (continuous). This depends largely on how you want to tell your data story, as well as other practical concerns. For example, if you have five years in your data, treating year as categorical means you are interested in accounting for unspecified things that go on in a given year. If you treat it as numeric, you are more interested in trends. Either is fine.
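A small sketch of the two choices; the data here is made up purely for illustration.
```
# made-up data: five yearly waves
d = data.frame(
  y    = rnorm(100),
  year = sample(2013:2017, 100, replace = TRUE)
)

coef(lm(y ~ year, data = d))          # numeric: a single slope, i.e. a trend
coef(lm(y ~ factor(year), data = d))  # categorical: a coefficient per year vs. the reference
```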
#### Web
A major resource for text is of course the web. Packages like rvest, httr, xml2, and many others specific to website APIs are available to help you here. See the [R task view for web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) as a starting point.
##### Encoding
Encoding can be a sizable PITA sometimes, and will often come up when dealing with webscraping and other languages. The rvest and stringr packages may be able to get you past some issues at least. See their respective functions repair\_encoding and str\_conv as starting points on this issue.
### Summary of basic text functionality
Being familiar with commonly used string functionality in base R and packages like stringr can save a ridiculous amount of time in your data processing. The more familiar you are with them the easier time you’ll have with text.
Regular Expressions
-------------------
A regular expression, regex for short, is a sequence of characters that can be used as a search pattern for a string. Common operations are to merely detect, extract, or replace the matching string. There are actually many different flavors of regex across programming languages, most of which originate with the Perl approach or can emulate it. However, knowing one means you pretty much know the others, with only minor modifications if any.
To be clear, not only is regex another language, it’s nigh on indecipherable. You will not learn much regex, but what you do learn will save a potentially enormous amount of time you’d otherwise spend trying to do things in a more haphazard fashion. Furthermore, practically every situation that will come up has already been asked and answered on [Stack Overflow](https://stackoverflow.com/questions/tagged/regex), so you’ll almost always be able to search for what you need.
Here is an example:
`^r.*shiny[0-9]$`
What is *that* you may ask? Well here is an example of strings it would and wouldn’t match.
```
string = c('r is the shiny', 'r is the shiny1', 'r shines brightly')
grepl(string, pattern='^r.*shiny[0-9]$')
```
```
[1] FALSE TRUE FALSE
```
What the regex is esoterically attempting to match is any string that starts with ‘r’ and ends with ‘shiny’ followed by a single digit. Specifically, it breaks down as follows:
* **^** : starts with, so ^r means starts with r
* **.** : any character
* **\*** : match the preceding zero or more times
* **shiny** : match ‘shiny’
* **\[0\-9]** : any digit 0\-9 (note that we are still talking about strings, not actual numbered values)
* **$** : ends with preceding
### Typical Uses
None of it makes sense, so don’t attempt to do so. Just try to remember a couple key approaches, and search the web for the rest.
Along with ^ . \* \[0\-9] $, a couple more common ones are:
* **\[a\-z]** : letters a\-z
* **\[A\-Z]** : capital letters
* **\+** : match the preceding one or more times
* **()** : groupings
* **\|** : logical or e.g. \[a\-z]\|\[0\-9] (a lower\-case letter or a number)
* **?** : the preceding item is optional and will be matched at most once. The same character also shows up in the ‘look ahead’ and ‘look behind’ syntax, e.g. (?=...)
* **\\** : escape a character, like if you actually wanted to search for a period instead of using it as a regex pattern, you’d use \\., though in R you need \\\\, i.e. double slashes, for escape.
In addition, in R there are certain predefined characters that can be called:
* **\[:punct:]** : punctuation
* **\[:blank:]** : spaces and tabs
* **\[:alnum:]** : alphanumeric characters
Those are just a few. The key functions can be found by looking at the help file for the grep function (`?grep`). However, the stringr package has the same functionality, with perhaps slightly faster processing (though that’s due to the underlying stringi package).
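As a quick sketch of a few of these pieces in action, along with the stringr equivalents of the base functions (the strings here are made up purely for illustration):
```
library(stringr)

x = c('file1.txt', 'file12.csv', 'notes.txt')

# one or more digits, an escaped (literal) period, then 'txt' or 'csv' at the end
grepl('[0-9]+\\.(txt|csv)$', x)
str_detect(x, '[0-9]+\\.(txt|csv)$')  # stringr equivalent

# strip punctuation with a predefined character class
gsub('[[:punct:]]', '', 'text-mining, it rocks!')
str_remove_all('text-mining, it rocks!', '[[:punct:]]')
```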
See if you can guess which of the following will turn up `TRUE`.
```
grepl(c('apple', 'pear', 'banana'), pattern='a')
grepl(c('apple', 'pear', 'banana'), pattern='^a')
grepl(c('apple', 'pear', 'banana'), pattern='^a|a$')
```
Scraping the web, munging data, just finding things in your scripts … you can potentially use this all the time, and not only with text analysis, as we’ll now see.
### dplyr helper functions
The dplyr package comes with some poorly documented[2](#fn2) but quite useful helper functions that essentially serve as human\-readable regex, which is a very good thing. These functions allow you to select variables[3](#fn3) based on their names. They are usually just calling base R functions in the end.
* starts\_with: starts with a prefix (same as regex ‘^blah’)
* ends\_with: ends with a suffix (same as regex ‘blah$’)
* contains: contains a literal string (same as regex ‘blah’)
* matches: matches a regular expression (put your regex here)
* num\_range: a numerical range like x01, x02, x03\. (same as regex ‘x\[0\-9]\[0\-9]’)
* one\_of: variables in character vector. (if you need to quote variable names, e.g. within a function)
* everything: all variables. (a good way to spend time doing something only to accomplish what you would have by doing nothing, or a way to reorder variables)
For more on using stringr and regular expressions in R, you may find [this cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/strings.pdf) useful.
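A small sketch of a few of the helpers, using a made-up data frame:
```
library(dplyr)

d = tibble(id = 1:3, x01 = rnorm(3), x02 = rnorm(3), note_x = letters[1:3])

d %>% select(num_range('x', 1:2, width = 2))  # x01, x02
d %>% select(starts_with('x'))                # x01, x02
d %>% select(ends_with('x'))                  # note_x
d %>% select(matches('^x[0-9]+$'))            # regex version of num_range here
```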
Text Processing Examples
------------------------
### Example 1
Let’s say you’re dealing with some data that has been handled typically, that is to say, poorly. For example, you have a variable in your data representing whether something is from the north or south region.
It might seem okay until…
```
## table(df$region)
```
| Var1 | Freq |
| --- | --- |
| South | 76 |
| north | 68 |
| North | 75 |
| north | 70 |
| North | 70 |
| south | 65 |
| South | 76 |
Even if you spotted the casing issue, there is still a white space problem[4](#fn4). Let’s say you want this to be capitalized ‘North’ and ‘South’. How might you do it? It’s actually quite easy with the stringr tools.
```
library(stringr)
df %>%
mutate(region = str_trim(region),
region = str_to_title(region))
```
The str\_trim function trims white space from either side (or both), while str\_to\_title converts everything to first letter capitalized.
```
## table(df_corrected$region)
```
| Var1 | Freq |
| --- | --- |
| North | 283 |
| South | 217 |
Compare that to how you would have done it before knowing how to use text processing tools. One might have spent several minutes with some find and replace approach in a spreadsheet, or maybe even several `if... else` statements in R until all problematic cases were taken care of. Not very efficient.
### Example 2
Suppose you import a data frame, and the data was originally in wide format, where each column represented a year of data collection for the individual. Since it is bad form for data columns to have numbers for names, when you import it, the year columns come in with generic names like X1, X2, and so on.
So, the problem now is to change the names to be Year\_1, Year\_2, etc. You might think you have to use colnames and manually create a string of names to replace the current ones.
```
colnames(df)[-1] = c('Year_1', 'Year_2', 'Year_3', 'Year_4', 'Year_5')
```
Or perhaps you’re thinking of the paste0 function, which works fine and saves some typing.
```
colnames(df)[-1] = paste0('Year_', 1:5)
```
However, data sets may have hundreds of columns, and the columns of interest may follow the same naming pattern without being next to one another. For example, the first few dozen columns might all belong to the first wave, and so on. It is tedious to figure out which columns you don’t want, and even then you’re resorting to magic numbers with the above approach, so a single change to the data’s columns will mean the renaming fails or targets the wrong columns.
In contrast, the following accomplishes what we want, and is reproducible regardless of where the columns are in the data set.
```
df %>%
rename_at(vars(num_range('X', 1:5)),
str_replace, pattern='X', replacement='Year_') %>%
head()
```
```
id Year_1 Year_2 Year_3 Year_4 Year_5
1 1 1.18 -2.04 -0.03 -0.36 0.43
2 2 0.34 -1.34 -0.30 -0.15 0.47
3 3 -0.32 -0.97 1.03 0.20 0.97
4 4 -0.57 1.36 1.29 0.00 0.32
5 5 0.64 0.73 -0.16 -1.29 -0.79
6 6 -0.59 0.16 -1.28 0.55 0.75
```
Let’s parse what it’s specifically doing.
* rename\_at allows us to rename specific columns
* Which columns? X1 through X5\. The num\_range helper function creates the character strings X1, X2, X3, X4, and X5\.
* Now that we have the names, we use vars to tell rename\_at which ones. It would have allowed additional sets of variables as well.
* rename\_at needs a function to apply to each of those column names. In this case the function is str\_replace, to replace patterns of strings with some other string
* The specific arguments to str\_replace (pattern to be replaced, replacement pattern) are also supplied.
So in the end we just have to use the num\_range helper function within the function that tells rename\_at what it should be renaming, and let str\_replace do the rest.
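As an aside, if you’re on a more recent version of dplyr (1.0 or later), rename\_with covers the same ground; a sketch of the equivalent call:
```
df %>%
  rename_with(~ str_replace(.x, pattern = 'X', replacement = 'Year_'),
              .cols = num_range('X', 1:5)) %>%
  head()
```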
Exercises
---------
1. In your own words, state the difference between a character string and a factor variable.
2. Consider the following character vector.
```
x = c('A', '1', 'Q')
```
How might you paste the elements together so that there is an underscore `_` between characters and no space (“A\_1\_Q”)?
Hint: revisit how we used the collapse argument within paste. `paste(..., collapse=?)`
Paste Part 2: The following application of paste produces this result.
```
paste(c('A', '1', 'Q'), c('B', '2', 'z'))
```
```
[1] "A B" "1 2" "Q z"
```
Now try to produce `"A - B" "1 - 2" "Q - z"`. To do this, note that one can paste any number of things together (i.e. more than two). So try adding ’ \- ’ to it.
3. Use regex to grab the Star Wars names that have a number. Use both grep and grepl and compare the results
```
grep(starwars$name, pattern = ?)
```
Now use your hacking skills to determine which one is the tallest.
4. Load the dplyr package, and use its [helper functions](string-theory.html#dplyr-helper-functions) to grab all the columns in the starwars data set (which comes with the package) that have `color` in the name, without referring to them directly. The following shows a generic example. There are several ways to do this; try two if you can.
```
starwars %>%
select(helper_function('pattern'))
```
Basic data types
----------------
R has several core data structures:
* Vectors
* Factors
* Lists
* Matrices/arrays
* Data frames
Vectors form the basis of R data structures. There are two main types: atomic vectors and lists. All elements of an atomic vector are of the same type.
Examples include the following (a short demonstration comes after the list):
* character
* numeric (double)
* integer
* logical
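A brief demonstration; typeof reveals the underlying type, and mixing types within a single atomic vector coerces everything to the most flexible type.
```
typeof(c('a', 'b'))     # "character"
typeof(c(1, 2.5))       # "double"
typeof(1:3)             # "integer"
typeof(c(TRUE, FALSE))  # "logical"

# mixing types in one atomic vector coerces to the most flexible type
c(1, 'a', TRUE)         # everything becomes character
```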
### Character strings
When dealing with text, objects of class character are what you’d typically be dealing with.
```
x = c('... Of Your Fake Dimension', 'Ephemeron', 'Dryswch', 'Isotasy', 'Memory')
x
```
Not much to it, but be aware there is no real limit to what is represented as a character vector. For example, in a data frame, you could have a column where each entry is one of the works of Shakespeare.
### Factors
Although not exactly precise, one can think of factors as integers with labels. So, the underlying representation of a variable for sex might be 1:2 with labels ‘Male’ and ‘Female’. They are a special class with attributes, or metadata, that contain the information about the levels.
```
x = factor(rep(letters[1:3], e=10))
attributes(x)
```
```
$levels
[1] "a" "b" "c"
$class
[1] "factor"
```
While the underlying representation is numeric, it is important to remember that factors are *categorical*. They can’t be used as numbers would be, as the following demonstrates.
```
as.numeric(x)
```
```
[1] 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
```
```
sum(x)
```
```
Error in Summary.factor(structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, : 'sum' not meaningful for factors
```
Any numbers could be used; what we’re interested in are the labels, so a ‘sum’ doesn’t make any sense. All of the following would produce the same factor.
```
factor(c(1, 2, 3), labels=c('a', 'b', 'c'))
factor(c(3.2, 10, 500000), labels=c('a', 'b', 'c'))
factor(c(.49, 1, 5), labels=c('a', 'b', 'c'))
```
Because of the integer\+metadata representation, factors are actually smaller than character strings, often notably so.
```
x = sample(state.name, 10000, replace=T)
format(object.size(x), units='Kb')
```
```
[1] "80.8 Kb"
```
```
format(object.size(factor(x)), units='Kb')
```
```
[1] "42.4 Kb"
```
```
format(object.size(as.integer(factor(x))), units='Kb')
```
```
[1] "39.1 Kb"
```
However, if memory is really a concern, better hardware is probably going to help more than switching to factors.
### Analysis
It is important to know that raw text cannot be analyzed quantitatively. There is no magic that takes a categorical variable with text labels and estimates correlations among words and other words or numeric data. *Everything* that can be analyzed must have some numeric representation first, and this is where factors come in. For example, here is a data frame with two categorical predictors (`factor*`), a numeric predictor (`x`), and a numeric target (`y`). What follows is what it looks like if you wanted to run a regression model in that setting.
```
df =
crossing(factor_1 = c('A', 'B'),
factor_2 = c('Q', 'X', 'J')) %>%
mutate(x=rnorm(6),
y=rnorm(6))
df
```
```
# A tibble: 6 x 4
factor_1 factor_2 x y
<chr> <chr> <dbl> <dbl>
1 A J 0.797 -0.190
2 A Q -1.000 -0.496
3 A X 1.05 0.487
4 B J -0.329 -0.101
5 B Q 0.905 -0.809
6 B X 1.18 -1.92
```
```
## model.matrix(lm(y ~ x + factor_1 + factor_2, data=df))
```
| (Intercept) | x | factor\_1B | factor\_2Q | factor\_2X |
| --- | --- | --- | --- | --- |
| 1 | 0\.7968603 | 0 | 0 | 0 |
| 1 | \-0\.9999264 | 0 | 1 | 0 |
| 1 | 1\.0522363 | 0 | 0 | 1 |
| 1 | \-0\.3291774 | 1 | 0 | 0 |
| 1 | 0\.9049071 | 1 | 1 | 0 |
| 1 | 1\.1754300 | 1 | 0 | 1 |
The model.matrix function exposes the underlying matrix that is actually used in the regression analysis. You’d get a coefficient for each column of that matrix. As such, even the intercept must be represented in some fashion. For categorical data, the default coding scheme is dummy coding. A reference category is arbitrarily chosen (it doesn’t matter which, and you can always change it), while the other categories are represented by indicator variables, where a 1 represents the corresponding label and everything else is zero. For details on this coding scheme or others, consult any basic statistical modeling book.
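For instance, here is a rough sketch of changing the reference category, reusing the df from above (the df\_releveled name is just for illustration); relevel expects a factor, so the character column is converted first.
```
# make 'Q' the reference level for factor_2 instead of the default 'J'
df_releveled = df %>%
  mutate(factor_2 = relevel(factor(factor_2), ref = 'Q'))

model.matrix(lm(y ~ x + factor_1 + factor_2, data = df_releveled))
```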
In addition, you’ll note that in all text\-specific analysis, the underlying information is numeric. For example, with topic models, the base data structure is a document\-term matrix of counts.
### Characters vs. Factors
The main thing to note is that factors are generally a statistical phenomenon, and are required to do statistical things with data that would otherwise be a simple character string. If you know the relatively few levels the data can take, you’ll generally want to use factors, or at least know that statistical packages and methods will require them. In addition, factors allow you to easily overcome the silly default alphabetical ordering of category levels in some very popular visualization packages.
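For example, a small sketch with the forcats package, which exists for exactly this sort of level fiddling:
```
library(forcats)

x = factor(c('low', 'medium', 'high', 'medium'))
levels(x)                             # alphabetical: "high" "low" "medium"

x = fct_relevel(x, 'low', 'medium')   # put the levels in a meaningful order
levels(x)                             # "low" "medium" "high"
```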
For other things, such as text analysis, you’ll almost certainly want character strings instead, and in many cases it will be required. It’s also worth noting that a lot of base R and other behavior will coerce strings to factors. This made a lot more sense in the early days of R, but is not really necessary these days; indeed, as of R 4.0.0, data.frame and read.csv no longer do so by default.
For more on this stuff see the following:
* [http://adv\-r.had.co.nz/Data\-structures.html](http://adv-r.had.co.nz/Data-structures.html)
* <http://forcats.tidyverse.org/>
* <http://r4ds.had.co.nz/factors.html>
* [https://simplystatistics.org/2015/07/24/stringsasfactors\-an\-unauthorized\-biography/](https://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/)
* [http://notstatschat.tumblr.com/post/124987394001/stringsasfactors\-sigh](http://notstatschat.tumblr.com/post/124987394001/stringsasfactors-sigh)
Basic Text Functionality
------------------------
### Base R
A lot of folks new to R are not aware of just how much basic text processing R comes with out of the box. Here are examples of note.
* paste: glue text/numeric values together
* substr: extract or replace substrings in a character vector
* grep family: use regular expressions to deal with patterns of text
* strsplit: split strings
* nchar: how many characters in a string
* as.numeric: convert a string to numeric if it can be
* strtoi: convert a string to integer if it can be (faster than as.integer)
* adist: string distances
I probably use paste/paste0 more than most things when dealing with text, as string concatenation comes up so often. The following provides some demonstration.
```
paste(c('a', 'b', 'cd'), collapse='|')
```
```
[1] "a|b|cd"
```
```
paste(c('a', 'b', 'cd'), collapse='')
```
```
[1] "abcd"
```
```
paste0('a', 'b', 'cd') # paste0 is shorthand for sep=''; same result as collapse='' here
```
```
[1] "abcd"
```
```
paste0('x', 1:3)
```
```
[1] "x1" "x2" "x3"
```
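A few of the other base functions from the list above, as a quick sketch:
```
substr('processing', 1, 7)      # "process"
strsplit('a,b,c', split = ',')  # returns a list: list(c("a", "b", "c"))
nchar(c('a', 'abc'))            # 1 3
as.numeric('3.14')              # 3.14
adist('stringr', 'stringi')     # edit distance of 1
```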
Beyond that, use of regular expressions and the functionality in the grep family is a major way to save a lot of time during data processing. I leave that to its own section.
### Useful packages
A couple packages will probably take care of the vast majority of your standard text processing needs. Note that even if they aren’t adding anything to the functionality of the base R functions, they typically will have been optimized in some fashion, particularly with regard to speed.
* stringr/stringi: More or less the same stuff you’ll find with substr, grep etc. except easier to use and/or faster. They also add useful functionality not in base R (e.g. str\_to\_title). The stringr package is mostly a wrapper for the stringi functions, with some additional functions.
* tidyr: has functions such as unite, separate, replace\_na that can often come in handy when working with data frames.
* glue: a newer package that can be seen as a fancier paste. Most likely it will be useful when creating functions or shiny apps in which variable text output is desired.
One issue I have with these packages and base R is that they often return a list object when they could simplify to the vector format they were initially fed. This sometimes requires an additional step or two of further processing that shouldn’t be necessary, so be prepared for it[1](#fn1).
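A small sketch of the tidyr and glue functionality just mentioned, along with the list-return quirk, using a made-up data frame:
```
library(tidyr)
library(glue)
library(stringr)

d = tibble::tibble(name = c('smith_john', 'doe_jane'))

# tidyr: split one column into two
separate(d, name, into = c('last', 'first'), sep = '_')

# glue: a fancier paste
glue('Hello, {d$name[1]}!')

# the list issue noted above: str_split returns a list,
# so you often end up unlisting to get back to a simple vector
str_split(d$name, '_')
unlist(str_split(d$name, '_'))
```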
### Other
In this section, I’ll add some things that come to mind that might come into play when you’re dealing with text.
#### Dates
Dates are not character strings. Though they may start that way, if you actually want to treat them as dates you’ll need to convert the string to the appropriate date class. The lubridate package makes dealing with dates much easier. It comes with conversion, extraction and other functionality that will be sure to save you some time.
```
library(lubridate)
today()
```
```
[1] "2018-03-06"
```
```
today() + 1
```
```
[1] "2018-03-07"
```
```
today() + dyears(1)
```
```
[1] "2019-03-06"
```
```
leap_year(2016)
```
```
[1] TRUE
```
```
span = interval(ymd("2017-07-01"), ymd("2017-07-04"))
span
```
```
[1] 2017-07-01 UTC--2017-07-04 UTC
```
```
as.duration(span)
```
```
[1] "259200s (~3 days)"
```
```
span %/% minutes(1)
```
```
[1] 4320
```
This package makes dates so much easier that you should always use it when dealing with them.
#### Categorical Time
In regression modeling with few time points, one often has to decide whether to treat the year as categorical (a factor) or numeric (continuous). This depends largely on how you want to tell your data story, as well as other practical concerns. For example, if you have five years in your data, treating year as categorical means you are interested in accounting for unspecified things that go on in a given year. If you treat it as numeric, you are more interested in trends. Either is fine.
#### Web
A major resource for text is of course the web. Packages like rvest, httr, xml2, and many other packages specific to website APIs are available to help you here. See the [R task view for web technologies](https://cran.r-project.org/web/views/WebTechnologies.html) as a starting point.
##### Encoding
Encoding can be a sizable PITA sometimes, and will often come up when dealing with web scraping and other languages. The rvest and stringr packages may be able to get you past some issues at least. See their respective functions repair\_encoding and str\_conv as starting points on this issue.
### Summary of basic text functionality
Being familiar with commonly used string functionality in base R and packages like stringr can save a ridiculous amount of time in your data processing. The more familiar you are with them the easier time you’ll have with text.
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/sentiment-analysis.html |
Sentiment Analysis
==================
Basic idea
----------
A common and intuitive approach to text is sentiment analysis. In a grand sense, we are interested in the emotional content of some text, e.g. posts on Facebook, tweets, or movie reviews. Most of the time, this is obvious when one reads it, but if you have hundreds of thousands or millions of strings to analyze, you’d like to be able to do so efficiently.
We will use the tidytext package for our demonstration. It comes with a lexicon of positive and negative words that is actually a combination of multiple sources, one of which provides numeric ratings, while the others suggest different classes of sentiment.
```
library(tidytext)
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 decomposition negative nrc NA
2 imaculate positive bing NA
3 greatness positive bing NA
4 impatient negative bing NA
5 contradicting negative loughran NA
6 irrecoverableness negative bing NA
7 advisable trust nrc NA
8 humiliation disgust nrc NA
9 obscures negative bing NA
10 affliction negative bing NA
# ... with 27,304 more rows
```
The gist is that we are dealing with a specific, pre\-defined vocabulary. Of course, any analysis will only be as good as the lexicon. The goal is usually to assign a sentiment score to a text, possibly an overall score, or a generally positive or negative grade. Given that, other analyses may be implemented to predict sentiment via standard regression tools or machine learning approaches.
Issues
------
### Context, sarcasm, etc.
Now consider the following.
```
sentiments %>% filter(word=='sick')
```
```
# A tibble: 5 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 sick disgust nrc NA
2 sick negative nrc NA
3 sick sadness nrc NA
4 sick negative bing NA
5 sick <NA> AFINN -2
```
Despite the above assigned sentiments, the word *sick* has been used at least since 1960s surfing culture as slang for positive affect. A basic approach to sentiment analysis as described here will not be able to detect slang or other context like sarcasm. However, lots of training data for a particular context may allow one to correctly predict such sentiment. In addition, there are slang lexicons, for example, or one can simply add one’s own complement to any available lexicon.
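For instance, here is a minimal sketch of complementing a lexicon; the words and object names are made up purely for illustration.
```
library(dplyr)
library(tidytext)

# hypothetical slang additions
slang = tibble(word = c('sick', 'rad'), sentiment = 'positive')

my_lexicon = get_sentiments('bing') %>%
  filter(word != 'sick') %>%   # drop the default negative entry for 'sick'
  bind_rows(slang)
```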
### Lexicons
In addition, the lexicons are at best going to be applicable to *general* usage of English in the Western world. Some might wonder where exactly these came from, or who decided that the word *abacus* should be affiliated with ‘trust’. You may start your path by typing `?sentiments` at the console if you have the tidytext package loaded.
Sentiment Analysis Examples
---------------------------
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world-renowned psychologist [Donald Barthelme](appendix.html#donald-barthelme) who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN, on the other hand, is numerical, with ratings \-5:5 that are in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will unravel the works so that each entry is essentially a paragraph.
```
library(tidytext)
library(tidyverse)  # dplyr, purrr (map), readr (read_lines), tidyr (unnest)

# data_frame() is the older name for tibble()
barth0 =
  data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%
  transmute(work = basename(file), text) %>%
  unnest(text)
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. However, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis. All the hard work is spent with the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric\-based lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and includes more nuanced sentiment scores that take into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
library(sentimentr)  # provides get_sentences() and sentiment()

baby_sentiment = barth0 %>%
filter(work=='baby.txt') %>%
get_sentences(text) %>%
sentiment() %>%
drop_na() %>% # empty lines
mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
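If you want a quick static version of that plot, a ggplot2 sketch might look something like the following (the interactive version additionally lets you hover for the sentence text).
```
library(ggplot2)

baby_sentiment %>%
  ggplot(aes(x = sentence_id, y = sentiment)) +
  geom_point() +
  geom_smooth(se = FALSE)  # a smoothed trend standing in for the running average
```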
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll see that some sentences’ context are not captured. For example, sentence 16 is ‘But it didn’t do any good’. However *good* is going to be marked as a positive sentiment in any lexicon by default. In addition, the token length will matter. Longer sentences are more likely to have some sentiment, for example.
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach, but see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) to see a more deliberate one.
We first slice off the initial parts we don’t want like title, author etc. Then we get rid of other tidbits that would interfere, using a little regex as well to aid the process.
```
rnj_filtered = rnj %>%
slice(-(1:49)) %>%
filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
!text==str_to_title(text), # will remove names/single word lines
!str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
select(-gutenberg_id) %>%
unnest_tokens(sentence, input=text, token='sentences') %>%
mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stopwords like a, an, the etc., and tidytext comes with a stop\_words data frame. However, some of the stopwords have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call the output 'word' or anti_join won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
unnest_tokens(output=word, input=sentence, token='words') %>%
anti_join(stop_words)
```
Now we add the sentiments via the inner\_join function. Here I use ‘bing’, but you can use another, and you might get a different result.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following visualizes the positive and negative sentiment scores as one progresses sentence by sentence through the work using the plotly package. I also show same information expressed as a difference (opaque line).
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
Sentiment Analysis Summary
--------------------------
In general, sentiment analysis can be a useful exploration of data, but it is highly dependent on the context and tools used. Note also that ‘sentiment’ can be anything, it doesn’t have to be positive vs. negative. Any vocabulary may be applied, and so it has more utility than the usual implementation.
It should also be noted that the above demonstration is largely conceptual and descriptive. While fun, it’s a bit simplified. For starters, trying to classify words as simply positive or negative itself is not a straightforward endeavor. As we noted at the beginning, context matters, and in general you’d want to take it into account. Modern methods of sentiment analysis would use approaches like word2vec or deep learning to predict a sentiment probability, as opposed to a simple word match. Even in the above, matching sentiments to texts would probably only be a precursor to building a model predicting sentiment, which could then be applied to new data.
Exercise
--------
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at austenbooks.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now, select from any of those sentiments you like (or more than one), and one of the texts as follows.
```
nrc_sadness <- get_sentiments("nrc") %>%
filter(sentiment == "positive")
ja_book = austen_books() %>%
filter(book == "Emma")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following.
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
unnest_tokens(word, text)
```
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
group_by(chapter) %>%
mutate(line_chapter = row_number()) %>%
# ungroup()
unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t try to overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note this is just to look at your result, we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
The following shows my negative evaluation of Mansfield Park.
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
### Step 4: Visualize
Now let’s do a visualization for sentiment. So redo your inner join, but we’ll create a data frame that has the information we need.
```
plot_data = ja_book %>%
inner_join(nrc_bad) %>%
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter, as well as the per line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/sentiment-analysis.html |
Sentiment Analysis
==================
Basic idea
----------
A common and intuitive approach to text is sentiment analysis. In a grand sense, we are interested in the emotional content of some text, e.g. posts on Facebook, tweets, or movie reviews. Most of the time, this is obvious when one reads it, but if you have hundreds of thousands or millions of strings to analyze, you’d like to be able to do so efficiently.
We will use the tidytext package for our demonstration. It comes with a lexicon of positive and negative words that is actually a combination of multiple sources, one of which provides numeric ratings, while the others suggest different classes of sentiment.
```
library(tidytext)
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 decomposition negative nrc NA
2 imaculate positive bing NA
3 greatness positive bing NA
4 impatient negative bing NA
5 contradicting negative loughran NA
6 irrecoverableness negative bing NA
7 advisable trust nrc NA
8 humiliation disgust nrc NA
9 obscures negative bing NA
10 affliction negative bing NA
# ... with 27,304 more rows
```
The gist is that we are dealing with a specific, pre\-defined vocabulary. Of course, any analysis will only be as good as the lexicon. The goal is usually to assign a sentiment score to a text, possibly an overall score, or a generally positive or negative grade. Given that, other analyses may be implemented to predict sentiment via standard regression tools or machine learning approaches.
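To make the word-matching idea concrete, here is a minimal, self-contained sketch that scores a single toy sentence against a made-up toy lexicon. The words and values below are invented for illustration, not taken from any shipped lexicon.
```
library(dplyr)
library(tibble)
library(tidytext)

# a tiny, made-up lexicon (hypothetical words and scores)
toy_lexicon = tribble(
  ~word,      ~score,
  "good",      2,
  "great",     3,
  "bad",      -2,
  "terrible", -3
)

# score one toy sentence by summing the scores of the matched words
tibble(text = "The movie was great, but the ending was terrible.") %>%
  unnest_tokens(word, text) %>%              # one row per word
  inner_join(toy_lexicon, by = "word") %>%   # keep only words in the lexicon
  summarise(sentiment = sum(score))          # 3 + (-3) = 0
```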
Issues
------
### Context, sarcasm, etc.
Now consider the following.
```
sentiments %>% filter(word=='sick')
```
```
# A tibble: 5 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 sick disgust nrc NA
2 sick negative nrc NA
3 sick sadness nrc NA
4 sick negative bing NA
5 sick <NA> AFINN -2
```
Despite the sentiments assigned above, the word *sick* has been used as slang for positive affect at least since 1960s surfing culture. A basic approach to sentiment analysis, as described here, will not be able to detect slang or other context such as sarcasm. However, lots of training data for a particular context may allow one to correctly predict such sentiment. In addition, there are slang lexicons available, or one can simply add one’s own entries to complement any available lexicon, as in the sketch below.
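For instance, complementing an existing lexicon with your own entries can be as simple as binding on a few extra rows. The slang entries below are made up for illustration and are not part of any shipped lexicon.
```
library(dplyr)
library(tidytext)

# hypothetical slang entries (made up for this example)
my_slang = tibble::tribble(
  ~word,    ~sentiment,
  "sick",   "positive",
  "gnarly", "positive"
)

# append to the bing lexicon; in practice you might first drop the original
# 'sick' row (e.g. with anti_join) to avoid conflicting entries
custom_bing = get_sentiments("bing") %>%
  bind_rows(my_slang)

custom_bing %>% filter(word == "sick")
```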
### Lexicons
In addition, the lexicons are likely only applicable to *general* usage of English in the Western world. Some might wonder where exactly these came from, or who decided that the word *abacus* should be affiliated with ‘trust’. You may start your path by typing `?sentiments` at the console if you have the tidytext package loaded.
Sentiment Analysis Examples
---------------------------
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world-renowned psychologist [Donald Barthelme](appendix.html#donald-barthelme) who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN lexicon, on the other hand, is numeric, with ratings from -5 to 5 in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
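More generally, before committing to a lexicon you can get a quick sense of what each one contributes. A small exploratory sketch, assuming the combined multi-lexicon sentiments object shown above (newer tidytext versions may include only the bing lexicon in sentiments; use get_sentiments() for the others in that case):
```
library(dplyr)
library(tidytext)

# how many entries does each lexicon contribute?
sentiments %>%
  count(lexicon)

# the AFINN entries are numeric; look at how the ratings are distributed
sentiments %>%
  filter(lexicon == 'AFINN') %>%
  count(score)
```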
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will then unravel the works so that each entry is essentially a paragraph.
```
# Read each raw text file in as lines; assumes the tidyverse (tibble, dplyr,
# purrr, readr, tidyr) is loaded along with tidytext; data_frame() is the
# older name for what is now tibble()
library(tidytext)
library(tidyverse)

barth0 =
  data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # 'text' becomes a list column of line vectors
  transmute(work = basename(file), text) %>%
  unnest(text)                               # unravel: one row per line of each work
```
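A quick sanity check, continuing directly from the chunk above, that all three works came in and how many lines each contributes:
```
# one row per work, with the number of raw lines read in
barth0 %>%
  count(work)
```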
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at the sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. Along the way, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis: all the hard work goes into the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric AFINN lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
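If you wanted a single overall score for the document, as mentioned back in the basic idea, you could simply aggregate the sentence scores. A small follow-up sketch, continuing from the chunk above:
```
# overall sentiment for the whole text, aggregated from the per-sentence scores
baby_sentiment %>%
  summarise(
    total_sentiment = sum(sentiment),
    mean_sentiment  = mean(sentiment)
  )
```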
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and it includes more nuanced sentiment scores that take into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
# sentimentr handles sentence splitting and valence shifters itself
library(sentimentr)

baby_sentiment = barth0 %>%
  filter(work=='baby.txt') %>%
  get_sentences(text) %>%          # sentimentr's own sentence splitter
  sentiment() %>%                  # per-sentence scores with valence shifters
  drop_na() %>%                    # empty lines
  mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll also see that the context of some sentences is not captured. For example, sentence 16 is ‘But it didn’t do any good’. However, *good* is going to be marked as a positive sentiment in any lexicon by default. In addition, token length will matter: longer sentences are more likely to contain at least some sentiment-bearing words.
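That ‘didn’t do any good’ case is exactly what sentimentr’s valence shifters are meant to handle. A quick check (exact scores depend on the package version):
```
library(sentimentr)

# the negator in the second sentence should flip, or at least dampen, the
# positive contribution of 'good' (exact values vary by version)
sentiment(c("It did a lot of good.", "But it didn't do any good."))
```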
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach, but see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for a more deliberate one.
We first slice off the initial parts we don’t want, like the title and author. Then we get rid of other tidbits that would interfere, using a little regex to aid the process.
```
rnj_filtered = rnj %>%
slice(-(1:49)) %>%
filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
!text==str_to_title(text), # will remove names/single word lines
!str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
select(-gutenberg_id) %>%
unnest_tokens(sentence, input=text, token='sentences') %>%
mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stop words like *a*, *an*, *the*, etc.; tidytext comes with a stop\_words data frame for this. However, some of the stop words have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call the output 'word', or anti_join won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
  unnest_tokens(output=word, input=sentence, token='words') %>%
  anti_join(stop_words)
```
Now we add the sentiments via the inner\_join function. Here I’ll use ‘bing’, but you can use another lexicon, and you might get a different result. First, though, a quick look at the most frequent remaining words.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following visualizes the positive and negative sentiment scores as one progresses sentence by sentence through the work, using the plotly package. I also show the same information expressed as a difference (opaque line).
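The interactive figure isn’t reproduced here, but a rough sketch of the data behind such a plot, i.e. per-sentence positive and negative counts and their difference, might look like the following (not the exact plotting code used):
```
library(dplyr)
library(tidyr)

# per-sentence counts of positive and negative words, plus their difference
rnj_plot_data = rnj_sentiment_bing %>%
  count(sentenceID, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(difference = positive - negative)

head(rnj_plot_data)
```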
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
Sentiment Analysis Summary
--------------------------
In general, sentiment analysis can be a useful exploration of data, but it is highly dependent on the context and tools used. Note also that ‘sentiment’ can be anything; it doesn’t have to be positive vs. negative. Any vocabulary may be applied, so the approach has more utility than the usual positive-vs-negative implementation suggests.
It should also be noted that the above demonstration is largely conceptual and descriptive. While fun, it’s a bit simplified. For starters, trying to classify words as simply positive or negative itself is not a straightforward endeavor. As we noted at the beginning, context matters, and in general you’d want to take it into account. Modern methods of sentiment analysis would use approaches like word2vec or deep learning to predict a sentiment probability, as opposed to a simple word match. Even in the above, matching sentiments to texts would probably only be a precursor to building a model predicting sentiment, which could then be applied to new data.
Exercise
--------
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at `austen_books()`.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now select any of those sentiments you like (or more than one), and one of the texts, as follows.
```
nrc_sadness <- get_sentiments("nrc") %>%
filter(sentiment == "positive")
ja_book = austen_books() %>%
filter(book == "Emma")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following.
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
unnest_tokens(word, text)
```
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
group_by(chapter) %>%
mutate(line_chapter = row_number()) %>%
# ungroup()
unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t try to overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note this is just to look at your result, we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
The following shows my negative evaluation of Mansfield Park.
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
### Step 4: Visualize
Now let’s do a visualization for sentiment. So redo your inner join, but we’ll create a data frame that has the information we need.
```
plot_data = ja_book %>%
inner_join(nrc_bad) %>%
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter, as well as the per line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/sentiment-analysis.html |
Sentiment Analysis
==================
Basic idea
----------
A common and intuitive approach to text is sentiment analysis. In a grand sense, we are interested in the emotional content of some text, e.g. posts on Facebook, tweets, or movie reviews. Most of the time, this is obvious when one reads it, but if you have hundreds of thousands or millions of strings to analyze, you’d like to be able to do so efficiently.
We will use the tidytext package for our demonstration. It comes with a lexicon of positive and negative words that is actually a combination of multiple sources, one of which provides numeric ratings, while the others suggest different classes of sentiment.
```
library(tidytext)

# shuffle the rows so the sample shows entries from several lexicons
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 decomposition negative nrc NA
2 imaculate positive bing NA
3 greatness positive bing NA
4 impatient negative bing NA
5 contradicting negative loughran NA
6 irrecoverableness negative bing NA
7 advisable trust nrc NA
8 humiliation disgust nrc NA
9 obscures negative bing NA
10 affliction negative bing NA
# ... with 27,304 more rows
```
The gist is that we are dealing with a specific, pre\-defined vocabulary. Of course, any analysis will only be as good as the lexicon. The goal is usually to assign a sentiment score to a text, possibly an overall score, or a generally positive or negative grade. Given that, other analyses may be implemented to predict sentiment via standard regression tools or machine learning approaches.
Issues
------
### Context, sarcasm, etc.
Now consider the following.
```
sentiments %>% filter(word=='sick')
```
```
# A tibble: 5 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 sick disgust nrc NA
2 sick negative nrc NA
3 sick sadness nrc NA
4 sick negative bing NA
5 sick <NA> AFINN -2
```
Despite the above assigned sentiments, the word *sick* has been used at least since 1960s surfing culture as slang for positive affect. A basic approach to sentiment analysis as described here will not be able to detect slang or other context like sarcasm. However, lots of training data for a particular context may allow one to correctly predict such sentiment. In addition, there are, for example, slang lexicons, or one can simply add their own complements to any available lexicon.
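For instance, appending a few entries of your own is just a matter of binding rows onto an existing lexicon. A minimal sketch (the `slang_additions` object and the particular words are made up for illustration, not part of the original):

```
# a sketch only: treat a couple of hypothetical slang terms as positive
library(dplyr)
library(tidytext)

slang_additions = tibble::tibble(
  word      = c('sick', 'rad'),
  sentiment = 'positive'
)

bing_plus_slang = get_sentiments('bing') %>%
  filter(word != 'sick') %>%   # drop the default negative entry for 'sick'
  bind_rows(slang_additions)
```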
### Lexicons
In addition, the lexicons are mostly geared toward *general* usage of English in the western world. Some might wonder where exactly these came from, or who decided that the word *abacus* should be affiliated with ‘trust’. You can start down that path by typing `?sentiments` at the console if you have the tidytext package loaded.
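If you want to check such oddities yourself, a word lookup is one line. A sketch, assuming the combined sentiments object shown above (newer tidytext versions expose the individual lexicons only via get_sentiments):

```
library(dplyr)
library(tidytext)

sentiments %>% filter(word == 'abacus')             # the combined table used here
get_sentiments('nrc') %>% filter(word == 'abacus')  # or a single lexicon
```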
Sentiment Analysis Examples
---------------------------
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world\-renowned psychologist [Donald Barthelme](appendix.html#donald-barthelme), who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN lexicon, on the other hand, is numerical, with integer ratings from \-5 to 5 stored in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
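More generally, a quick way to see which labels each lexicon provides is a simple count. A sketch, again assuming the combined sentiments object with a lexicon column as shown above:

```
library(dplyr)

sentiments %>% count(lexicon, sentiment)
```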
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will then unravel the works so that each entry is essentially a paragraph.
```
library(tidytext)
library(tidyverse)  # dplyr, purrr (map), readr (read_lines), and tidyr (unnest) are used below

# data_frame() is the older name for tibble(); tibble() works the same way
barth0 =
  data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%    # 'text' becomes a list column, one element per work
  transmute(work = basename(file), text) %>%
  unnest(text)                                # unravel to one row per line of raw text
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. In addition, I create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis: all the hard work goes into the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric\-based (AFINN) lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
# (note: newer tidytext/textdata releases name the AFINN column 'value' rather than 'score')
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and it includes more nuanced sentiment scores that take into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
library(sentimentr)  # get_sentences() and sentiment()
library(tidyr)       # drop_na()

baby_sentiment = barth0 %>%
  filter(work=='baby.txt') %>%
  get_sentences(text) %>%
  sentiment() %>%
  drop_na() %>% # empty lines
  mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
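The interactive figure itself isn’t reproduced here, but a rough static sketch of the same idea might look like the following (assuming the AFINN\-based baby_sentiment created above, with sentence_id and sentiment columns):

```
# a static ggplot2 approximation of the interactive plot described above
library(dplyr)
library(ggplot2)

baby_sentiment %>%
  arrange(sentence_id) %>%
  mutate(running_avg = cummean(sentiment)) %>%  # running average of the sentence scores
  ggplot(aes(x = sentence_id)) +
  geom_point(aes(y = sentiment), alpha = .5) +
  geom_line(aes(y = running_avg)) +
  labs(x = 'Sentence', y = 'Sentiment (sum of AFINN scores)')
```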
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll also see that some sentences’ context is not captured. For example, sentence 16 is ‘But it didn’t do any good’, yet *good* will be marked as a positive sentiment in any lexicon by default. In addition, token length matters: longer sentences are more likely to contain at least some sentiment.
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach; see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for a more deliberate one.
We first slice off the initial parts we don’t want, like the title, author, etc. Then we get rid of other tidbits that would interfere, using a little regex to aid the process.
```
rnj_filtered = rnj %>%
slice(-(1:49)) %>%
filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
!text==str_to_title(text), # will remove names/single word lines
!str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
select(-gutenberg_id) %>%
unnest_tokens(sentence, input=text, token='sentences') %>%
mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stopwords like a, an, the etc., and tidytext comes with a stop\_words data frame. However, some of the stopwords have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call the output 'word', or anti_join won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
unnest_tokens(output=word, input=sentence, token='words') %>%
anti_join(stop_words)
```
Before the join, here is a quick look at the most common remaining words. Then we add the sentiments via the inner\_join function. Here I use ‘bing’, but you can use another, and you might get a different result.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following visualizes the positive and negative sentiment scores as one progresses sentence by sentence through the work, using the plotly package. I also show the same information expressed as a difference (the opaque line).
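The plotly figure isn’t reproduced here, but the underlying data manipulation is roughly the following. A sketch only: rnj_plot_data and the static ggplot2 version are mine, not the original code.

```
# per-sentence counts of positive and negative bing matches, plus their difference
library(dplyr)
library(tidyr)
library(ggplot2)

rnj_plot_data = rnj_sentiment_bing %>%
  count(sentenceID, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(difference = positive - negative)

ggplot(rnj_plot_data, aes(x = sentenceID)) +
  geom_line(aes(y = positive), colour = 'darkgreen', alpha = .3) +
  geom_line(aes(y = negative), colour = 'darkred', alpha = .3) +
  geom_line(aes(y = difference))
```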
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
Sentiment Analysis Summary
--------------------------
In general, sentiment analysis can be a useful exploration of data, but it is highly dependent on the context and tools used. Note also that ‘sentiment’ can be anything; it doesn’t have to be positive vs. negative. Any vocabulary may be applied, so the approach has more utility than the usual positive/negative implementation.
It should also be noted that the above demonstration is largely conceptual and descriptive. While fun, it’s a bit simplified. For starters, trying to classify words as simply positive or negative itself is not a straightforward endeavor. As we noted at the beginning, context matters, and in general you’d want to take it into account. Modern methods of sentiment analysis would use approaches like word2vec or deep learning to predict a sentiment probability, as opposed to a simple word match. Even in the above, matching sentiments to texts would probably only be a precursor to building a model predicting sentiment, which could then be applied to new data.
Exercise
--------
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
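Something along these lines, if they aren’t installed yet:

```
# one-time setup
install.packages(c('tidytext', 'janeaustenr'))

library(tidytext)
library(janeaustenr)
```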
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at `austen_books()`.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now, select any of those sentiments you like (or more than one), and one of the texts, as follows.
```
# pick whichever sentiment(s) you like; name the object to match your choice
nrc_positive <- get_sentiments("nrc") %>%
  filter(sentiment == "positive")
ja_book = austen_books() %>%
filter(book == "Emma")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following. It flags the chapter breaks, numbers the lines within the book and within each chapter, and then tokenizes to words. Note that the result is deliberately left grouped by chapter, so later counts will be per chapter.
```
ja_book = ja_book %>%
  mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),  # str_detect/regex are from stringr
         chapter = cumsum(chapter),
         line_book = row_number()) %>%
  group_by(chapter) %>%
  mutate(line_chapter = row_number()) %>%
  # deliberately left grouped by chapter (no ungroup())
  unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note this is just to look at your result; we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
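If you get stuck, one possible way to fill in the blanks is sketched below (it assumes the nrc_positive subset from Step 1; swap in whichever subset you created):

```
ja_book %>%
  inner_join(nrc_positive) %>%
  count(word, sort = TRUE)
```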
The following shows my negative evaluation of Mansfield Park (my own run, using the negative sentiment words with that book).
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
### Step 4: Visualize
Now let’s do a visualization of the sentiment. Redo your inner join, but this time we’ll create a data frame that has the information we need for plotting.
```
plot_data = ja_book %>%
  inner_join(nrc_bad) %>%   # nrc_bad is my negative-sentiment NRC subset; use whichever subset you created in Step 1
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter, as well as the per line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
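One such plot could be sketched as follows (assuming the plot_data object above; darker lines correspond to later chapters). This covers the cumulative (total) side; per\-line negativity could be plotted similarly with mean_line_negativity.

```
library(ggplot2)

ggplot(plot_data, aes(x = line_chapter, y = negativity, group = chapter)) +
  geom_line(aes(colour = chapter), alpha = .5) +
  scale_colour_gradient(low = 'gray85', high = 'gray5') +
  labs(x = 'Line within chapter', y = 'Cumulative negative words')
```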
Sentiment Analysis Examples
---------------------------
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world renown psychologist [Donald Barthelme](appendix.html#donald-barthelme) who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN, on the other hand, is numerical, with ratings \-5:5 that are in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will unravel the works to where each entry is essentially a paragraph form.
```
library(tidytext)
barth0 =
data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text)
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. However, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis. All the hard work is spent with the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric\-based lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and includes a more nuanced sentiment scores that takes into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
baby_sentiment = barth0 %>%
filter(work=='baby.txt') %>%
get_sentences(text) %>%
sentiment() %>%
drop_na() %>% # empty lines
mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll see that some sentences’ context are not captured. For example, sentence 16 is ‘But it didn’t do any good’. However *good* is going to be marked as a positive sentiment in any lexicon by default. In addition, the token length will matter. Longer sentences are more likely to have some sentiment, for example.
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach, but see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) to see a more deliberate one.
We first slice off the initial parts we don’t want like title, author etc. Then we get rid of other tidbits that would interfere, using a little regex as well to aid the process.
```
rnj_filtered = rnj %>%
slice(-(1:49)) %>%
filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
!text==str_to_title(text), # will remove names/single word lines
!str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
select(-gutenberg_id) %>%
unnest_tokens(sentence, input=text, token='sentences') %>%
mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stopwords like a, an, the etc., and tidytext comes with a stop\_words data frame. However, some of the stopwords have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call output 'word' or antijoin won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
unnest_tokens(output=word, input=sentence, token='words') %>%
anti_join(stop_words)
```
Now we add the sentiments via the inner\_join function. Here I use ‘bing’, but you can use another, and you might get a different result.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following visualizes the positive and negative sentiment scores as one progresses sentence by sentence through the work using the plotly package. I also show same information expressed as a difference (opaque line).
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world renown psychologist [Donald Barthelme](appendix.html#donald-barthelme) who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN, on the other hand, is numerical, with ratings \-5:5 that are in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will unravel the works to where each entry is essentially a paragraph form.
```
library(tidytext)
barth0 =
data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text)
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. However, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis. All the hard work is spent with the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric\-based lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and includes a more nuanced sentiment scores that takes into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
baby_sentiment = barth0 %>%
filter(work=='baby.txt') %>%
get_sentences(text) %>%
sentiment() %>%
drop_na() %>% # empty lines
mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll see that some sentences’ context are not captured. For example, sentence 16 is ‘But it didn’t do any good’. However *good* is going to be marked as a positive sentiment in any lexicon by default. In addition, the token length will matter. Longer sentences are more likely to have some sentiment, for example.
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will unravel the works to where each entry is essentially a paragraph form.
```
library(tidytext)
barth0 =
data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text)
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. However, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis. All the hard work is spent with the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric\-based lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and includes a more nuanced sentiment scores that takes into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
baby_sentiment = barth0 %>%
filter(work=='baby.txt') %>%
get_sentences(text) %>%
sentiment() %>%
drop_na() %>% # empty lines
mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll see that some sentences’ context are not captured. For example, sentence 16 is ‘But it didn’t do any good’. However *good* is going to be marked as a positive sentiment in any lexicon by default. In addition, the token length will matter. Longer sentences are more likely to have some sentiment, for example.
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach, but see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) to see a more deliberate one.
We first slice off the initial parts we don’t want like title, author etc. Then we get rid of other tidbits that would interfere, using a little regex as well to aid the process.
```
rnj_filtered = rnj %>%
slice(-(1:49)) %>%
filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
!text==str_to_title(text), # will remove names/single word lines
!str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
select(-gutenberg_id) %>%
unnest_tokens(sentence, input=text, token='sentences') %>%
mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stopwords like a, an, the etc., and tidytext comes with a stop\_words data frame. However, some of the stopwords have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call output 'word' or antijoin won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
unnest_tokens(output=word, input=sentence, token='words') %>%
anti_join(stop_words)
```
Now we add the sentiments via the inner\_join function. Here I use ‘bing’, but you can use another, and you might get a different result.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following visualizes the positive and negative sentiment scores as one progresses sentence by sentence through the work using the plotly package. I also show same information expressed as a difference (opaque line).
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
Sentiment Analysis Summary
--------------------------
In general, sentiment analysis can be a useful exploration of data, but it is highly dependent on the context and tools used. Note also that ‘sentiment’ can be anything, it doesn’t have to be positive vs. negative. Any vocabulary may be applied, and so it has more utility than the usual implementation.
It should also be noted that the above demonstration is largely conceptual and descriptive. While fun, it’s a bit simplified. For starters, trying to classify words as simply positive or negative itself is not a straightforward endeavor. As we noted at the beginning, context matters, and in general you’d want to take it into account. Modern methods of sentiment analysis would use approaches like word2vec or deep learning to predict a sentiment probability, as opposed to a simple word match. Even in the above, matching sentiments to texts would probably only be a precursor to building a model predicting sentiment, which could then be applied to new data.
Exercise
--------
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at austenbooks.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now, select from any of those sentiments you like (or more than one), and one of the texts as follows.
```
nrc_sadness <- get_sentiments("nrc") %>%
filter(sentiment == "positive")
ja_book = austen_books() %>%
filter(book == "Emma")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following.
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
unnest_tokens(word, text)
```
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
group_by(chapter) %>%
mutate(line_chapter = row_number()) %>%
# ungroup()
unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t try to overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note this is just to look at your result, we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
The following shows my negative evaluation of Mansfield Park.
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
### Step 4: Visualize
Now let’s do a visualization for sentiment. So redo your inner join, but we’ll create a data frame that has the information we need.
```
plot_data = ja_book %>%
inner_join(nrc_bad) %>%
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter, as well as the per line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at austenbooks.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now, select from any of those sentiments you like (or more than one), and one of the texts as follows.
```
nrc_sadness <- get_sentiments("nrc") %>%
filter(sentiment == "positive")
ja_book = austen_books() %>%
filter(book == "Emma")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following.
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
unnest_tokens(word, text)
```
```
ja_book = ja_book %>%
mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
chapter = cumsum(chapter),
line_book = row_number()) %>%
group_by(chapter) %>%
mutate(line_chapter = row_number()) %>%
# ungroup()
unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t try to overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note this is just to look at your result, we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
The following shows my negative evaluation of Mansfield Park.
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
### Step 4: Visualize
Now let’s do a visualization for sentiment. So redo your inner join, but we’ll create a data frame that has the information we need.
```
plot_data = ja_book %>%
inner_join(nrc_bad) %>%
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter, as well as the per line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/sentiment-analysis.html |
Sentiment Analysis
==================
Basic idea
----------
A common and intuitive approach to text analysis is sentiment analysis. In a grand sense, we are interested in the emotional content of some text, e.g. posts on Facebook, tweets, or movie reviews. Most of the time, this is obvious when one reads it, but if you have hundreds of thousands or millions of strings to analyze, you’d like to be able to do so efficiently.
We will use the tidytext package for our demonstration. It comes with a lexicon of positive and negative words that is actually a combination of multiple sources, one of which provides numeric ratings, while the others suggest different classes of sentiment.
```
library(tidytext)
library(dplyr)  # for the pipe and slice()
sentiments %>% slice(sample(1:nrow(sentiments)))  # shuffle the rows to show a mix of lexicons
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 decomposition negative nrc NA
2 imaculate positive bing NA
3 greatness positive bing NA
4 impatient negative bing NA
5 contradicting negative loughran NA
6 irrecoverableness negative bing NA
7 advisable trust nrc NA
8 humiliation disgust nrc NA
9 obscures negative bing NA
10 affliction negative bing NA
# ... with 27,304 more rows
```
The gist is that we are dealing with a specific, pre\-defined vocabulary. Of course, any analysis will only be as good as the lexicon. The goal is usually to assign a sentiment score to a text, possibly an overall score, or a generally positive or negative grade. Given that, other analyses may be implemented to predict sentiment via standard regression tools or machine learning approaches.
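To make this concrete, here is a toy sketch (not from the text above): a small invented lexicon is matched against a single invented sentence and the scores are summed.
```
# toy illustration: match tokens against a tiny hand-made lexicon and sum the scores
library(tidytext)
library(dplyr)

toy_lexicon = tibble(
  word  = c('good', 'love', 'bad', 'hate'),
  score = c(1, 2, -1, -2)
)

tibble(text = "I love this good movie but I hate the bad ending") %>%
  unnest_tokens(word, text) %>%            # one row per (lowercased) word
  inner_join(toy_lexicon, by = 'word') %>% # keep only words in the lexicon
  summarise(sentiment = sum(score))        # overall score: 2 + 1 - 2 - 1 = 0
```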
Issues
------
### Context, sarcasm, etc.
Now consider the following.
```
sentiments %>% filter(word=='sick')
```
```
# A tibble: 5 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 sick disgust nrc NA
2 sick negative nrc NA
3 sick sadness nrc NA
4 sick negative bing NA
5 sick <NA> AFINN -2
```
Despite the sentiments assigned above, the word *sick* has been used at least since 1960s surfing culture as slang for positive affect. A basic approach to sentiment analysis as described here will not be able to detect slang or other context such as sarcasm. However, lots of training data for a particular context may allow one to correctly predict such sentiment. In addition, there are, for example, slang lexicons, or one can simply add their own entries to complement any available lexicon, as sketched below.
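As a sketch of that last point, one could append custom entries to an existing lexicon, here treating *sick* as positive slang (the added rows are purely illustrative).
```
# sketch: complement the bing lexicon with custom (e.g. slang) entries
library(tidytext)
library(dplyr)

my_slang = tibble(
  word      = c('sick', 'rad'),
  sentiment = c('positive', 'positive')
)

bing_plus = get_sentiments('bing') %>%
  filter(word != 'sick') %>%  # drop the default negative entry for 'sick'
  bind_rows(my_slang)
```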
### Lexicons
In addition, the lexicons are only going to be applicable to *general* usage of English in the Western world. Some might wonder where exactly these came from, or who decided that the word *abacus* should be affiliated with ‘trust’. You may start your path by typing `?sentiments` at the console if you have the tidytext package loaded.
Sentiment Analysis Examples
---------------------------
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world\-renowned psychologist [Donald Barthelme](appendix.html#donald-barthelme) who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN, on the other hand, is numerical, with ratings from \-5 to 5 stored in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
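For a quick sense of how the lexicons differ, one can summarize the combined sentiments object itself. This is only a sketch, and it assumes the lexicon and score columns shown in the outputs above.
```
# rough comparison of the lexicons bundled in the sentiments object
library(dplyr)

sentiments %>%
  group_by(lexicon) %>%
  summarise(
    n_words    = n_distinct(word),
    n_labels   = n_distinct(sentiment),
    has_scores = any(!is.na(score))
  )
```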
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will then unravel the works so that each entry is essentially a paragraph.
```
library(tidytext)
library(tidyverse)  # purrr::map, readr::read_lines, tidyr::unnest, etc.

# note: data_frame() is deprecated in favor of tibble(), but still works
barth0 =
  data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%
  transmute(work = basename(file), text) %>%
  unnest(text)
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. However, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis. All the hard work is spent with the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice. This process will only retain words that are also in the lexicon. I use the numeric\-based lexicon here. At that point, we get a sum score of sentiment by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and includes more nuanced sentiment scores that take into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
library(sentimentr)  # get_sentences() and sentiment()
library(tidyr)       # drop_na()

baby_sentiment = barth0 %>%
  filter(work=='baby.txt') %>%
  get_sentences(text) %>%
  sentiment() %>%
  drop_na() %>% # empty lines
  mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll see that the context of some sentences is not captured. For example, sentence 16 is ‘But it didn’t do any good’. However, *good* is going to be marked as a positive sentiment in any lexicon by default. In addition, the token length will matter; longer sentences are more likely to have some sentiment, for example.
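The interactive figure is not reproduced here, but a rough static sketch of the same idea, using the baby_sentiment object from either approach above (both end up with sentence_id and sentiment columns), might look like the following.
```
# static sketch of the described figure: sentiment per sentence plus a running average
library(dplyr)
library(ggplot2)

baby_sentiment %>%
  mutate(running_avg = cummean(sentiment)) %>%
  ggplot(aes(x = sentence_id)) +
  geom_hline(yintercept = 0, color = 'gray70') +
  geom_point(aes(y = sentiment), alpha = .5) +
  geom_line(aes(y = running_avg), color = 'firebrick') +
  labs(x = 'Sentence', y = 'Sentiment')
```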
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach, but see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) to see a more deliberate one.
We first slice off the initial parts we don’t want like title, author etc. Then we get rid of other tidbits that would interfere, using a little regex as well to aid the process.
```
# str_to_upper/str_to_title/str_detect come from stringr (part of the tidyverse)
rnj_filtered = rnj %>%
  slice(-(1:49)) %>%
  filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
         !text==str_to_title(text), # will remove names/single word lines
         !str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
  select(-gutenberg_id) %>%
  unnest_tokens(sentence, input=text, token='sentences') %>%
  mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stopwords like a, an, the etc., and tidytext comes with a stop\_words data frame. However, some of the stopwords have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call the output 'word', or anti_join() won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
unnest_tokens(output=word, input=sentence, token='words') %>%
anti_join(stop_words)
```
Now we add the sentiments via the inner\_join function. Here I use ‘bing’, but you can use another, and you might get a different result.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following visualizes the positive and negative sentiment scores as one progresses sentence by sentence through the work, using the plotly package. I also show the same information expressed as a difference (the opaque line).
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
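The interactive plotly figure is likewise not reproduced here; a rough ggplot2 sketch of the same comparison, assuming the rnj_sentiment_bing object above, might be:
```
# sketch of the described figure: positive and negative word counts per sentence,
# plus a smoothed positive-minus-negative difference
library(dplyr)
library(tidyr)
library(ggplot2)

rnj_sentiment_bing %>%
  count(sentenceID, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(difference = positive - negative) %>%
  ggplot(aes(x = sentenceID)) +
  geom_line(aes(y = positive), color = 'darkgreen', alpha = .3) +
  geom_line(aes(y = negative), color = 'firebrick', alpha = .3) +
  geom_smooth(aes(y = difference), se = FALSE, color = 'black')
```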
Sentiment Analysis Summary
--------------------------
In general, sentiment analysis can be a useful exploration of data, but it is highly dependent on the context and tools used. Note also that ‘sentiment’ can be anything, it doesn’t have to be positive vs. negative. Any vocabulary may be applied, and so it has more utility than the usual implementation.
It should also be noted that the above demonstration is largely conceptual and descriptive. While fun, it’s a bit simplified. For starters, trying to classify words as simply positive or negative itself is not a straightforward endeavor. As we noted at the beginning, context matters, and in general you’d want to take it into account. Modern methods of sentiment analysis would use approaches like word2vec or deep learning to predict a sentiment probability, as opposed to a simple word match. Even in the above, matching sentiments to texts would probably only be a precursor to building a model predicting sentiment, which could then be applied to new data.
Exercise
--------
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
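A minimal version of this step (installation only needs to happen once):
```
# install once, then load for the session
# install.packages(c('tidytext', 'janeaustenr'))
library(tidytext)
library(janeaustenr)
```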
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at `austen_books()`.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now, select from any of those sentiments you like (or more than one), and one of the texts as follows.
```
# the object name should reflect your choice; the results shown below (and the
# Step 4 code) were produced with the 'negative' sentiment, stored as nrc_bad,
# and Mansfield Park
nrc_bad <- get_sentiments("nrc") %>%
  filter(sentiment == "negative")
ja_book = austen_books() %>%
  filter(book == "Mansfield Park")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following.
```
# basic prep: mark chapter starts (str_detect/regex are from stringr), number
# the chapters and the lines within the book, then tokenize to words
ja_book = ja_book %>%
  mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
         chapter = cumsum(chapter),
         line_book = row_number()) %>%
  unnest_tokens(word, text)
```
```
# alternative version of the same prep that also numbers lines within each
# chapter (line_chapter), which Step 4 needs; run this one instead of the block above
ja_book = ja_book %>%
  mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),
         chapter = cumsum(chapter),
         line_book = row_number()) %>%
  group_by(chapter) %>%
  mutate(line_chapter = row_number()) %>%
  # ungroup()
  unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t try to overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note this is just to look at your result, we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
The following shows my negative evaluation of Mansfield Park.
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
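If you get stuck, one possible completion is sketched below (it assumes the nrc_bad object defined in Step 1); the count step produces the table just shown.
```
# one possible completion: nrc_bad comes from Step 1; ja_book is still grouped
# by chapter from Step 2, which is why chapter shows up in the result
ja_book %>%
  inner_join(nrc_bad) %>%
  count(word, sort = TRUE)
```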
### Step 4: Visualize
Now let’s do a visualization for sentiment. So redo your inner join, but we’ll create a data frame that has the information we need.
```
plot_data = ja_book %>%
inner_join(nrc_bad) %>%
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter and the per\-line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
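The original figure is interactive and not reproduced here; a rough ggplot2 sketch of the same ideas, using plot_data from above, might be:
```
# sketch 1: cumulative negativity as one moves through each chapter,
# with later chapters drawn in darker lines
library(dplyr)
library(ggplot2)

plot_data %>%
  ggplot(aes(x = line_chapter, y = negativity, group = chapter, color = chapter)) +
  geom_line(alpha = .5) +
  scale_color_gradient(low = 'gray80', high = 'gray10') +
  labs(x = 'Line within chapter', y = 'Cumulative negativity')

# sketch 2: average per-line negativity across chapters
plot_data %>%
  distinct(line_chapter, mean_line_negativity) %>%
  ggplot(aes(x = line_chapter, y = mean_line_negativity)) +
  geom_line(color = 'firebrick') +
  labs(x = 'Line within chapter', y = 'Mean negativity per line')
```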
Basic idea
----------
A common and intuitive approach to text is sentiment analysis. In a grand sense, we are interested in the emotional content of some text, e.g. posts on Facebook, tweets, or movie reviews. Most of the time, this is obvious when one reads it, but if you have hundreds of thousands or millions of strings to analyze, you’d like to be able to do so efficiently.
We will use the tidytext package for our demonstration. It comes with a lexicon of positive and negative words that is actually a combination of multiple sources, one of which provides numeric ratings, while the others suggest different classes of sentiment.
```
library(tidytext)
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 decomposition negative nrc NA
2 imaculate positive bing NA
3 greatness positive bing NA
4 impatient negative bing NA
5 contradicting negative loughran NA
6 irrecoverableness negative bing NA
7 advisable trust nrc NA
8 humiliation disgust nrc NA
9 obscures negative bing NA
10 affliction negative bing NA
# ... with 27,304 more rows
```
The gist is that we are dealing with a specific, pre\-defined vocabulary. Of course, any analysis will only be as good as the lexicon. The goal is usually to assign a sentiment score to a text, possibly an overall score, or a generally positive or negative grade. Given that, other analyses may be implemented to predict sentiment via standard regression tools or machine learning approaches.
Issues
------
### Context, sarcasm, etc.
Now consider the following.
```
sentiments %>% filter(word=='sick')
```
```
# A tibble: 5 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 sick disgust nrc NA
2 sick negative nrc NA
3 sick sadness nrc NA
4 sick negative bing NA
5 sick <NA> AFINN -2
```
Despite the above assigned sentiments, the word *sick* has been used at least since 1960s surfing culture as slang for positive affect. A basic approach to sentiment analysis as described here will not be able to detect slang or other context like sarcasm. However, lots of training data for a particular context may allow one to correctly predict such sentiment. In addition, there are, for example, slang lexicons, or one can simply add their own complements to any available lexicon.
### Lexicons
In addition, the lexicons are going to maybe be applicable to *general* usage of English in the western world. Some might wonder where exactly these came from or who decided that the word *abacus* should be affiliated with ‘trust’. You may start your path by typing `?sentiments` at the console if you have the tidytext package loaded.
### Context, sarcasm, etc.
Now consider the following.
```
sentiments %>% filter(word=='sick')
```
```
# A tibble: 5 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 sick disgust nrc NA
2 sick negative nrc NA
3 sick sadness nrc NA
4 sick negative bing NA
5 sick <NA> AFINN -2
```
Despite the above assigned sentiments, the word *sick* has been used at least since 1960s surfing culture as slang for positive affect. A basic approach to sentiment analysis as described here will not be able to detect slang or other context like sarcasm. However, lots of training data for a particular context may allow one to correctly predict such sentiment. In addition, there are, for example, slang lexicons, or one can simply add their own complements to any available lexicon.
### Lexicons
In addition, the lexicons are going to maybe be applicable to *general* usage of English in the western world. Some might wonder where exactly these came from or who decided that the word *abacus* should be affiliated with ‘trust’. You may start your path by typing `?sentiments` at the console if you have the tidytext package loaded.
Sentiment Analysis Examples
---------------------------
### The first thing the baby did wrong
We demonstrate sentiment analysis with the text *The first thing the baby did wrong*, which is a very popular brief guide to parenting written by world renown psychologist [Donald Barthelme](appendix.html#donald-barthelme) who, in his spare time, also wrote postmodern literature. This particular text talks about an issue with the baby, whose name is Born Dancin’, and who likes to tear pages out of books. Attempts are made by her parents to rectify the situation, without much success, but things are finally resolved at the end. The ultimate goal will be to see how sentiment in the text evolves over time, and in general we’d expect things to end more positively than they began.
How do we start? Let’s look again at the sentiments data set in the tidytext package.
```
sentiments %>% slice(sample(1:nrow(sentiments)))
```
```
# A tibble: 27,314 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 blunder sadness nrc NA
2 solidity positive nrc NA
3 mortuary fear nrc NA
4 absorbed positive nrc NA
5 successful joy nrc NA
6 virus negative nrc NA
7 exorbitantly negative bing NA
8 discombobulate negative bing NA
9 wail negative nrc NA
10 intimidatingly negative bing NA
# ... with 27,304 more rows
```
The bing lexicon provides only *positive* or *negative* labels. The AFINN lexicon, on the other hand, is numerical, with ratings from \-5 to 5 in the score column. The others get more imaginative, but also more problematic. Why *assimilate* is *superfluous* is beyond me. It clearly should be negative given the [Borg](https://en.wikipedia.org/wiki/Borg_%28Star_Trek%29) connotations.
```
sentiments %>%
filter(sentiment=='superfluous')
```
```
# A tibble: 56 x 4
word sentiment lexicon score
<chr> <chr> <chr> <int>
1 aegis superfluous loughran NA
2 amorphous superfluous loughran NA
3 anticipatory superfluous loughran NA
4 appertaining superfluous loughran NA
5 assimilate superfluous loughran NA
6 assimilating superfluous loughran NA
7 assimilation superfluous loughran NA
8 bifurcated superfluous loughran NA
9 bifurcation superfluous loughran NA
10 cessions superfluous loughran NA
# ... with 46 more rows
```
#### Read in the text files
But I digress. We start with the raw text, reading it in line by line. In what follows we read in all the texts (three) in a given directory, such that each element of ‘text’ is the work itself, i.e. `text` is a list column[5](#fn5). The unnest function will then unravel the works so that each entry is essentially a paragraph.
```
library(tidytext)
library(tidyverse)  # for data_frame, map, and read_lines, if not already loaded
barth0 =
data_frame(file = dir('data/texts_raw/barthelme', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text)
```
#### Iterative processing
One of the things stressed in this document is the iterative nature of text analysis. You will consistently take two steps forward, and then one or two back as you find issues that need to be addressed. For example, in a subsequent step I found there were encoding issues[6](#fn6), so the following attempts to fix them. In addition, we want to tokenize the documents such that our tokens are sentences (e.g. as opposed to words or paragraphs). The reason for this is that I will be summarizing the sentiment at sentence level.
```
# Fix encoding, convert to sentences; you may get a warning message
barth = barth0 %>%
mutate(
text =
sapply(
text,
stringi::stri_enc_toutf8,
is_unknown_8bit = TRUE,
validate = TRUE
)
) %>%
unnest_tokens(
output = sentence,
input = text,
token = 'sentences'
)
```
#### Tokenization
The next step is to drill down to just the document we want, and subsequently tokenize to the word level. However, I also create a sentence id so that we can group on it later.
```
# get baby doc, convert to words
baby = barth %>%
filter(work=='baby.txt') %>%
mutate(sentence_id = 1:n()) %>%
unnest_tokens(
output = word,
input = sentence,
token = 'words',
drop = FALSE
) %>%
ungroup()
```
#### Get sentiments
Now that the data has been prepped, getting the sentiments is ridiculously easy. But that is how it is with text analysis: all the hard work goes into the data processing. Here all we need is an inner join of our words with a sentiment lexicon of choice, which will only retain words that are also in the lexicon. I use the numeric AFINN lexicon here. At that point, we get a sum sentiment score by sentence.
```
# get sentiment via inner join
baby_sentiment = baby %>%
inner_join(get_sentiments("afinn")) %>%
group_by(sentence_id, sentence) %>%
summarise(sentiment = sum(score)) %>%
ungroup()
```
#### Alternative approach
As we are interested in the sentence level, it turns out that the sentimentr package has built\-in functionality for this, and includes more nuanced sentiment scores that take into account valence shifters, e.g. words that would negate something with positive or negative sentiment (‘I do ***not*** like it’).
```
library(sentimentr)

baby_sentiment = barth0 %>%
filter(work=='baby.txt') %>%
get_sentences(text) %>%
sentiment() %>%
drop_na() %>% # empty lines
mutate(sentence_id = row_number())
```
The following visualizes sentiment over the progression of sentences (note that not every sentence will receive a sentiment score). You can read the sentence by hovering over the dot. The ▬ is the running average.
In general, the sentiment starts out negative as the problem is explained. It bounces back and forth a bit but ends on a positive note. You’ll see that some sentences’ context is not captured. For example, sentence 16 is ‘But it didn’t do any good’, yet *good* is going to be marked as positive in any lexicon by default. In addition, token length will matter: longer sentences are more likely to contain some sentiment, for example.
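The interactive plot itself is not reproduced here, but a minimal static sketch of the same idea, assuming the `baby_sentiment` object from the tidytext approach above (with `sentence_id` and `sentiment` columns), might look like the following.
```
library(dplyr)
library(ggplot2)

baby_sentiment %>%
  arrange(sentence_id) %>%
  mutate(running_avg = cummean(sentiment)) %>%  # running average of the sentence scores
  ggplot(aes(x = sentence_id)) +
  geom_point(aes(y = sentiment)) +
  geom_line(aes(y = running_avg)) +
  labs(x = 'Sentence', y = 'Sentiment')
```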
### Romeo \& Juliet
For this example, I’ll invite you to more or less follow along, as there is notable pre\-processing that must be done. We’ll look at sentiment in Shakespeare’s Romeo and Juliet. I have a cleaner version in the raw texts folder, but we can take the opportunity to use the gutenbergr package to download it directly from Project Gutenberg, a storehouse for works that have entered the public domain.
```
library(gutenbergr)
gw0 = gutenberg_works(title == "Romeo and Juliet") # look for something with this title
```
```
# A tibble: 1 x 4
gutenberg_id title author gutenberg_author_id
<int> <chr> <chr> <int>
1 1513 Romeo and Juliet Shakespeare, William 65
```
```
rnj = gutenberg_download(gw0$gutenberg_id)
```
We’ve got the text now, but there is still work to be done. The following is a quick and dirty approach, but see the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) to see a more deliberate one.
We first slice off the initial parts we don’t want like title, author etc. Then we get rid of other tidbits that would interfere, using a little regex as well to aid the process.
```
rnj_filtered = rnj %>%
slice(-(1:49)) %>%
filter(!text==str_to_upper(text), # will remove THE PROLOGUE etc.
!text==str_to_title(text), # will remove names/single word lines
!str_detect(text, pattern='^(Scene|SCENE)|^(Act|ACT)|^\\[')) %>%
select(-gutenberg_id) %>%
unnest_tokens(sentence, input=text, token='sentences') %>%
mutate(sentenceID = 1:n())
```
The following unnests the data to word tokens. In addition, you can remove stopwords like a, an, the etc., and tidytext comes with a stop\_words data frame. However, some of the stopwords have sentiments, so you would get a bit of a different result if you retain them. As Black Sheep once said, the choice is yours, and you can deal with this, or you can deal with that.
```
# show some of the matches
stop_words$word[which(stop_words$word %in% sentiments$word)] %>% head(20)
```
```
[1] "able" "against" "allow" "almost" "alone" "appear" "appreciate" "appropriate" "available" "awfully" "believe" "best" "better" "certain" "clearly"
[16] "could" "despite" "downwards" "enough" "furthermore"
```
```
# remember to call the output 'word' or anti_join won't work without a 'by' argument
rnj_filtered = rnj_filtered %>%
unnest_tokens(output=word, input=sentence, token='words') %>%
anti_join(stop_words)
```
First, a quick look at the most frequent words. Then we add the sentiments via the inner\_join function and filter to a specific lexicon. Here I use ‘bing’, but you can use another, and you might get a different result.
```
rnj_filtered %>%
count(word) %>%
arrange(desc(n))
```
```
# A tibble: 3,288 x 2
word n
<chr> <int>
1 thou 276
2 thy 165
3 love 140
4 thee 139
5 romeo 110
6 night 83
7 death 71
8 hath 64
9 sir 58
10 art 55
# ... with 3,278 more rows
```
```
rnj_sentiment = rnj_filtered %>%
inner_join(sentiments)
rnj_sentiment
```
```
# A tibble: 12,668 x 5
sentenceID word sentiment lexicon score
<int> <chr> <chr> <chr> <int>
1 1 dignity positive nrc NA
2 1 dignity trust nrc NA
3 1 dignity positive bing NA
4 1 fair positive nrc NA
5 1 fair positive bing NA
6 1 fair <NA> AFINN 2
7 1 ancient negative nrc NA
8 1 grudge anger nrc NA
9 1 grudge negative nrc NA
10 1 grudge negative bing NA
# ... with 12,658 more rows
```
```
rnj_sentiment_bing = rnj_sentiment %>%
filter(lexicon=='bing')
table(rnj_sentiment_bing$sentiment)
```
```
negative positive
1244 833
```
Looks like this one is going to be a downer. The following uses the plotly package to visualize the positive and negative sentiment scores as one progresses sentence by sentence through the work. I also show the same information expressed as a difference (opaque line).
It’s a close game until perhaps the midway point, when negativity takes over and despair sets in with the story. By the end \[\[:SPOILER ALERT:]] Sean Bean is beheaded, Darth Vader reveals himself to be Luke’s father, and Verbal is Keyser Söze.
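The plotly figure is not shown here, but a rough static sketch of the underlying calculation, assuming the `rnj_sentiment_bing` object created above, could look like the following; the smoothing is just one way to display the difference.
```
library(dplyr)
library(tidyr)
library(ggplot2)

rnj_sentiment_bing %>%
  count(sentenceID, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(difference = positive - negative) %>%  # net sentiment per sentence
  ggplot(aes(x = sentenceID, y = difference)) +
  geom_smooth(se = FALSE) +
  labs(x = 'Sentence', y = 'Positive minus negative words')
```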
Sentiment Analysis Summary
--------------------------
In general, sentiment analysis can be a useful exploration of data, but it is highly dependent on the context and tools used. Note also that ‘sentiment’ can be anything; it doesn’t have to be positive vs. negative. Any vocabulary may be applied, so the approach has more utility than the usual implementation suggests.
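As a quick sketch of that point, any word list can serve as the ‘lexicon’; the following uses a tiny, made\-up vocabulary of uncertainty\-related terms, so the specific words are only illustrative.
```
library(dplyr)
library(tidytext)

# a made-up 'uncertainty' vocabulary; any domain-specific word list would work
uncertainty_words = tibble(
  word = c('maybe', 'possibly', 'unclear', 'unsure', 'roughly')
)

tibble(text = 'The results are unclear and we are unsure, though possibly promising') %>%
  unnest_tokens(word, text) %>%
  inner_join(uncertainty_words, by = 'word') %>%
  count(word)  # how often each 'uncertainty' term appears
```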
It should also be noted that the above demonstration is largely conceptual and descriptive. While fun, it’s a bit simplified. For starters, trying to classify words as simply positive or negative itself is not a straightforward endeavor. As we noted at the beginning, context matters, and in general you’d want to take it into account. Modern methods of sentiment analysis would use approaches like word2vec or deep learning to predict a sentiment probability, as opposed to a simple word match. Even in the above, matching sentiments to texts would probably only be a precursor to building a model predicting sentiment, which could then be applied to new data.
Exercise
--------
### Step 0: Install the packages
If you haven’t already, install the tidytext package. Install the janeaustenr package and load both of them[7](#fn7).
### Step 1: Initial inspection
First you’ll want to look at what we’re dealing with, so take a gander at `austen_books()`.
```
library(tidytext); library(janeaustenr)
austen_books()
```
```
# A tibble: 73,422 x 2
text book
* <chr> <fct>
1 SENSE AND SENSIBILITY Sense & Sensibility
2 "" Sense & Sensibility
3 by Jane Austen Sense & Sensibility
4 "" Sense & Sensibility
5 (1811) Sense & Sensibility
6 "" Sense & Sensibility
7 "" Sense & Sensibility
8 "" Sense & Sensibility
9 "" Sense & Sensibility
10 CHAPTER 1 Sense & Sensibility
# ... with 73,412 more rows
```
```
austen_books() %>%
distinct(book)
```
```
# A tibble: 6 x 1
book
<fct>
1 Sense & Sensibility
2 Pride & Prejudice
3 Mansfield Park
4 Emma
5 Northanger Abbey
6 Persuasion
```
We will examine only one text. In addition, for this exercise we’ll take a little bit of a different approach, looking for a specific kind of sentiment using the NRC database. It contains 10 distinct sentiments.
```
get_sentiments("nrc") %>% distinct(sentiment)
```
```
# A tibble: 10 x 1
sentiment
<chr>
1 trust
2 fear
3 negative
4 sadness
5 anger
6 surprise
7 positive
8 disgust
9 joy
10 anticipation
```
Now select any of those sentiments you like (or more than one), and one of the texts, as follows.
```
# name the object to match whichever sentiment you pick; here it's filtered to 'positive'
nrc_sadness <- get_sentiments("nrc") %>%
  filter(sentiment == "positive")
ja_book = austen_books() %>%
filter(book == "Emma")
```
### Step 2: Data prep
Now we do a little prep, and I’ll save you the trouble. You can just run the following.
```
ja_book = ja_book %>%
  mutate(chapter = str_detect(text, regex("^chapter [\\divxlc]", ignore_case = TRUE)),  # flag chapter heading lines
         chapter = cumsum(chapter),            # running chapter number
         line_book = row_number()) %>%         # line number within the whole book
  group_by(chapter) %>%
  mutate(line_chapter = row_number()) %>%      # line number within each chapter
  unnest_tokens(word, text)
```
### Step 3: Get sentiment
Now, on your own, try the inner join approach we used previously to match the sentiments to the text. Don’t overthink this. The third pipe step will use the count function with the `word` column and also the argument `sort=TRUE`. Note that this is just to look at your result; we aren’t assigning it to an object yet.
```
ja_book %>%
? %>%
?
```
The following shows my negative evaluation of Mansfield Park.
```
# A tibble: 4,204 x 3
# Groups: chapter [48]
chapter word n
<int> <chr> <int>
1 24 feeling 35
2 7 ill 25
3 46 evil 25
4 26 cross 24
5 27 cross 24
6 48 punishment 24
7 7 cutting 20
8 19 feeling 20
9 33 feeling 20
10 34 feeling 20
# ... with 4,194 more rows
```
### Step 4: Visualize
Now let’s do a visualization for sentiment. Redo your inner join, but this time we’ll create a data frame that has the information we need.
```
plot_data = ja_book %>%
  inner_join(nrc_bad) %>%   # nrc_bad: your NRC subset from Step 1 (e.g. the 'negative' sentiment words)
group_by(chapter, line_book, line_chapter) %>%
count() %>%
group_by(chapter) %>%
mutate(negativity = cumsum(n),
mean_chapter_negativity=mean(negativity)) %>%
group_by(line_chapter) %>%
mutate(mean_line_negativity=mean(n))
plot_data
```
```
# A tibble: 4,398 x 7
# Groups: line_chapter [453]
chapter line_book line_chapter n negativity mean_chapter_negativity mean_line_negativity
<int> <int> <int> <int> <int> <dbl> <dbl>
1 1 17 7 2 2 111. 3.41
2 1 18 8 4 6 111. 2.65
3 1 20 10 1 7 111. 3.31
4 1 24 14 1 8 111. 2.88
5 1 26 16 2 10 111. 2.54
6 1 27 17 3 13 111. 2.67
7 1 28 18 3 16 111. 3.58
8 1 29 19 2 18 111. 2.31
9 1 34 24 3 21 111. 2.17
10 1 41 31 1 22 111. 2.87
# ... with 4,388 more rows
```
At this point you have enough to play with, so I leave you to plot whatever you want.
The following[8](#fn8) shows both the total negativity within a chapter, as well as the per line negativity within a chapter. We can see that there is less negativity towards the end of chapters. We can also see that there appears to be more negativity in later chapters (darker lines).
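The figure itself is not reproduced here, but a minimal static version using the `plot_data` object above might look like the following; the aesthetic choices are arbitrary.
```
library(ggplot2)

# cumulative negativity within each chapter; later chapters can be compared via the colour scale
ggplot(plot_data, aes(x = line_chapter, y = negativity, group = chapter, color = chapter)) +
  geom_line(alpha = .5) +
  labs(x = 'Line within chapter', y = 'Cumulative negativity')

# average per-line negativity across chapters
ggplot(plot_data, aes(x = line_chapter, y = mean_line_negativity)) +
  geom_line() +
  labs(x = 'Line within chapter', y = 'Mean negativity per line')
```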
Part of Speech Tagging
======================
If you need a refresher on parts of speech, the following Schoolhouse Rock videos should get you squared away:
* [A noun is a person, place, or thing.](https://youtu.be/h0m89e9oZko)
* [Interjections](https://youtu.be/YkAX7Vk3JEw)
* [Pronouns](https://youtu.be/Eu1ciVFbecw)
* [Verbs](https://youtu.be/US8mGU1MzYw)
* [Unpack your adjectives](https://youtu.be/NkuuZEey_bs)
* [Lolly Lolly Lolly Get Your Adverbs Here](https://youtu.be/14fXm4FOMPM)
* [Conjunction Junction](https://youtu.be/RPoBE-E8VOc) (personal fave)
Aside from those, you can also learn how bills get passed, about being a victim of gravity, a comparison of the decimal to other numeric systems used by alien species (I recommend the Chavez remix), and a host of other useful things.
Basic idea
----------
With part\-of\-speech tagging, we classify a word with its corresponding part of speech. The following provides an example.
| JJ | JJ | NNS | VBP | RB |
| --- | --- | --- | --- | --- |
| Colorless | green | ideas | sleep | furiously. |
We have two adjectives (JJ), a plural noun (NNS), a verb (VBP), and an adverb (RB).
Common analyses then include predicting POS given the current state of the text, comparing the grammar of different texts, human\-computer interaction, and translation from one language to another. In addition, using POS information would make for richer sentiment analysis as well.
POS Examples
------------
The following approach to POS\-tagging is very similar to what we did for sentiment analysis as depicted previously. We have a POS dictionary, and can use an inner join to attach the words to their POS. Unfortunately, this approach is unrealistically simplistic, as additional steps would need to be taken to ensure words are correctly classified. For example, without more information, we are unable to tell if some words are being used as nouns or verbs (human being vs. being a problematic part of speech). However, this example can serve as a starting point.
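For example, a quick look at the `parts_of_speech` dictionary that tidytext provides shows how a single word can carry several possible tags (the exact tags returned depend on that data).
```
library(dplyr)
library(tidytext)

# words like 'being' typically appear under more than one part of speech
parts_of_speech %>%
  filter(word == 'being')
```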
### Barthelme \& Carver
In the following we’ll compare three texts from Donald Barthelme:
* *The Balloon*
* *The First Thing The Baby Did Wrong*
* *Some Of Us Had Been Threatening Our Friend Colby*
As another comparison, I’ve included Raymond Carver’s *What we talk about when we talk about love*, the unedited version. First we’ll load an unnested object from the sentiment analysis, the barth object. Then for each work we create a sentence id, unnest the data to words, join the POS data, then create counts/proportions for each POS.
```
load('data/barth_sentences.RData')
barthelme_pos = barth %>%
mutate(work = str_replace(work, '.txt', '')) %>% # remove file extension
group_by(work) %>%
mutate(sentence_id = 1:n()) %>% # create a sentence id
unnest_tokens(word, sentence, drop=F) %>% # get words
inner_join(parts_of_speech) %>% # join POS
count(pos) %>% # count
mutate(prop=n/sum(n))
```
Next we read in and process the Carver text in the same manner.
```
carver_pos =
data_frame(file = dir('data/texts_raw/carver/', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text) %>%
unnest_tokens(word, text, token='words') %>%
inner_join(parts_of_speech) %>%
count(pos) %>%
mutate(work='love',
prop=n/sum(n))
```
This visualization depicts the proportion of occurrence for each part of speech across the works. It would appear Barthelme is fairly consistent, and also that relative to the Barthelme texts, Carver preferred nouns and pronouns.
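The plot itself is not reproduced here, but a rough sketch of one way to compare the proportions, assuming the `barthelme_pos` and `carver_pos` objects created above, is as follows.
```
library(dplyr)
library(ggplot2)

bind_rows(barthelme_pos, carver_pos) %>%
  ggplot(aes(x = pos, y = prop, color = work)) +
  geom_point() +
  coord_flip() +  # easier to read the POS labels
  labs(x = NULL, y = 'Proportion')
```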
### More taggin’
More sophisticated POS tagging would require the context of the sentence structure. Luckily there are tools to help with that here, in particular via the openNLP package. In addition, it will require a certain language model to be installed (English is only one of many available). I don’t recommend doing so unless you are really interested in this (the openNLPmodels.en package is fairly large).
We’ll reexamine the Barthelme texts above with this more involved approach. Initially we’ll need to get the English\-based tagger we need and load the libraries.
```
# install.packages("openNLPmodels.en", repos = "http://datacube.wu.ac.at/", type = "source")
library(NLP)
library(tm) # make sure to load this prior to openNLP
library(openNLP)
library(openNLPmodels.en)
```
Next comes the processing. This more or less follows the help file example for `?Maxent_POS_Tag_Annotator`. Given the several steps involved I show only the processing for one text for clarity. Ideally you’d write a function, and use a group\_by approach, to process each of the texts of interest.
```
load('data/barthelme_start.RData')
baby_string0 = barth0 %>%
filter(id=='baby.txt')
baby_string = unlist(baby_string0$text) %>%
paste(collapse=' ') %>%
as.String
init_s_w = annotate(baby_string, list(Maxent_Sent_Token_Annotator(),
Maxent_Word_Token_Annotator()))
pos_res = annotate(baby_string, Maxent_POS_Tag_Annotator(), init_s_w)
word_subset = subset(pos_res, type=='word')
tags = sapply(word_subset$features , '[[', "POS")
baby_pos = data_frame(word=baby_string[word_subset], pos=tags) %>%
filter(!str_detect(pos, pattern='[[:punct:]]'))
```
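As noted, to handle several texts you would wrap the steps above in a function and apply it to each work. A rough sketch under those assumptions (the same `barth0` object with an `id` column, and the NLP/openNLP libraries from above already loaded) might look like the following; the helper name is hypothetical.
```
library(dplyr)
library(stringr)
library(tidyr)

# tag one work's lines and return a word/POS data frame
tag_work = function(text_lines) {
  work_string = paste(unlist(text_lines), collapse = ' ') %>% as.String()
  s_w = annotate(work_string, list(Maxent_Sent_Token_Annotator(),
                                   Maxent_Word_Token_Annotator()))
  pos_ann = annotate(work_string, Maxent_POS_Tag_Annotator(), s_w)
  words = subset(pos_ann, type == 'word')
  tibble(word = work_string[words],
         pos  = sapply(words$features, '[[', 'POS')) %>%
    filter(!str_detect(pos, '[[:punct:]]'))
}

# apply the helper to each work and stack the results
all_pos = barth0 %>%
  group_by(id) %>%
  summarise(tagged = list(tag_work(text))) %>%
  unnest(tagged)
```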
Let’s take a look. I’ve also done the other Barthelme texts as well for comparison.
| word | pos | text |
| --- | --- | --- |
| The | DT | baby |
| first | JJ | baby |
| thing | NN | baby |
| the | DT | baby |
| baby | NN | baby |
| did | VBD | baby |
| wrong | JJ | baby |
| was | VBD | baby |
| to | TO | baby |
| tear | VB | baby |
| pages | NNS | baby |
| out | IN | baby |
| of | IN | baby |
| her | PRP$ | baby |
| books | NNS | baby |
As we can see, we have quite a few more POS to deal with here. They come from the [Penn Treebank](https://en.wikipedia.org/wiki/Treebank), whose documentation notes what the acronyms stand for. I don’t pretend to know all the facets of this.
Plotting the differences, we now see a little more distinction between *The Balloon* and the other two texts. It is more likely to use determiners, adjectives, and singular nouns, and less likely to use personal pronouns and verbs (including the past tense).
Tagging summary
---------------
For more information, consult the following:
* [Penn Treebank](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1603&context=cis_reports)
* [Maxent function](http://maxent.sourceforge.net/about.html)
As with the sentiment analysis demos, the above should be seen as only a starting point for getting a sense of what you’re dealing with. The ‘maximum entropy’ approach is just one way to go about things; other models include hidden Markov models, conditional random fields, and, more recently, deep learning techniques. Goals might include text prediction (i.e. the thing your phone always gets wrong), translation, and more.
POS Exercise
------------
As this is a more involved sort of analysis, if nothing else in terms of the tools required, I would suggest as an exercise starting with a cleaned text and seeing whether the code in the last example above can get you to the point of having parsed text. Otherwise, assuming you’ve downloaded the appropriate packages, feel free to play around with some strings of your choosing as follows.
```
string = 'Colorless green ideas sleep furiously'
initial_result = string %>%
annotate(list(Maxent_Sent_Token_Annotator(),
Maxent_Word_Token_Annotator())) %>%
annotate(string, Maxent_POS_Tag_Annotator(), .) %>%
subset(type=='word')
sapply(initial_result$features , '[[', "POS") %>% table
```
Basic idea
----------
With part\-of\-speech tagging, we classify a word with its corresponding part of speech. The following provides an example.
| JJ | JJ | NNS | VBP | RB |
| --- | --- | --- | --- | --- |
| Colorless | green | ideas | sleep | furiously. |
We have two adjectives (JJ), a plural noun (NNS), a verb (VBP), and an adverb (RB).
Common analysis may then be used to predict POS given the current state of the text, comparing the grammar of different texts, human\-computer interaction, or translation from one language to another. In addition, using POS information would make for richer sentiment analysis as well.
POS Examples
------------
The following approach to POS\-tagging is very similar to what we did for sentiment analysis as depicted previously. We have a POS dictionary, and can use an inner join to attach the words to their POS. Unfortunately, this approach is unrealistically simplistic, as additional steps would need to be taken to ensure words are correctly classified. For example, without more information, we are unable to tell if some words are being used as nouns or verbs (human being vs. being a problematic part of speech). However, this example can serve as a starting point.
### Barthelme \& Carver
In the following we’ll compare three texts from Donald Barthelme:
* *The Balloon*
* *The First Thing The Baby Did Wrong*
* *Some Of Us Had Been Threatening Our Friend Colby*
As another comparison, I’ve included Raymond Carver’s *What we talk about when we talk about love*, the unedited version. First we’ll load an unnested object from the sentiment analysis, the barth object. Then for each work we create a sentence id, unnest the data to words, join the POS data, then create counts/proportions for each POS.
```
load('data/barth_sentences.RData')
barthelme_pos = barth %>%
mutate(work = str_replace(work, '.txt', '')) %>% # remove file extension
group_by(work) %>%
mutate(sentence_id = 1:n()) %>% # create a sentence id
unnest_tokens(word, sentence, drop=F) %>% # get words
inner_join(parts_of_speech) %>% # join POS
count(pos) %>% # count
mutate(prop=n/sum(n))
```
Next we read in and process the Carver text in the same manner.
```
carver_pos =
data_frame(file = dir('data/texts_raw/carver/', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text) %>%
unnest_tokens(word, text, token='words') %>%
inner_join(parts_of_speech) %>%
count(pos) %>%
mutate(work='love',
prop=n/sum(n))
```
This visualization depicts the proportion of occurrence for each part of speech across the works. It would appear Barthelme is fairly consistent, and also that relative to the Barthelme texts, Carver preferred nouns and pronouns.
### More taggin’
More sophisticated POS tagging would require the context of the sentence structure. Luckily there are tools to help with that here, in particular via the openNLP package. In addition, it will require a certain language model to be installed (English is only one of many available). I don’t recommend doing so unless you are really interested in this (the openNLPmodels.en package is fairly large).
We’ll reexamine the Barthelme texts above with this more involved approach. Initially we’ll need to get the English\-based tagger we need and load the libraries.
```
# install.packages("openNLPmodels.en", repos = "http://datacube.wu.ac.at/", type = "source")
library(NLP)
library(tm) # make sure to load this prior to openNLP
library(openNLP)
library(openNLPmodels.en)
```
Next comes the processing. This more or less follows the help file example for `?Maxent_POS_Tag_Annotator`. Given the several steps involved I show only the processing for one text for clarity. Ideally you’d write a function, and use a group\_by approach, to process each of the texts of interest.
```
load('data/barthelme_start.RData')
baby_string0 = barth0 %>%
filter(id=='baby.txt')
baby_string = unlist(baby_string0$text) %>%
paste(collapse=' ') %>%
as.String
init_s_w = annotate(baby_string, list(Maxent_Sent_Token_Annotator(),
Maxent_Word_Token_Annotator()))
pos_res = annotate(baby_string, Maxent_POS_Tag_Annotator(), init_s_w)
word_subset = subset(pos_res, type=='word')
tags = sapply(word_subset$features , '[[', "POS")
baby_pos = data_frame(word=baby_string[word_subset], pos=tags) %>%
filter(!str_detect(pos, pattern='[[:punct:]]'))
```
Let’s take a look. I’ve also done the other Barthelme texts as well for comparison.
| word | pos | text |
| --- | --- | --- |
| The | DT | baby |
| first | JJ | baby |
| thing | NN | baby |
| the | DT | baby |
| baby | NN | baby |
| did | VBD | baby |
| wrong | JJ | baby |
| was | VBD | baby |
| to | TO | baby |
| tear | VB | baby |
| pages | NNS | baby |
| out | IN | baby |
| of | IN | baby |
| her | PRP$ | baby |
| books | NNS | baby |
As we can see, we have quite a few more POS to deal with here. They come from the [Penn Treebank](https://en.wikipedia.org/wiki/Treebank). The following table notes what the acronyms stand for. I don’t pretend to know all the facets to this.
Plotting the differences, we now see a little more distinction between *The Balloon* and the other two texts. It is more likely to use the determiners, adjectives, singular nouns, and less likely to use personal pronouns and verbs (including past tense).
### Barthelme \& Carver
In the following we’ll compare three texts from Donald Barthelme:
* *The Balloon*
* *The First Thing The Baby Did Wrong*
* *Some Of Us Had Been Threatening Our Friend Colby*
As another comparison, I’ve included Raymond Carver’s *What we talk about when we talk about love*, the unedited version. First we’ll load an unnested object from the sentiment analysis, the barth object. Then for each work we create a sentence id, unnest the data to words, join the POS data, then create counts/proportions for each POS.
```
load('data/barth_sentences.RData')
barthelme_pos = barth %>%
mutate(work = str_replace(work, '.txt', '')) %>% # remove file extension
group_by(work) %>%
mutate(sentence_id = 1:n()) %>% # create a sentence id
unnest_tokens(word, sentence, drop=F) %>% # get words
inner_join(parts_of_speech) %>% # join POS
count(pos) %>% # count
mutate(prop=n/sum(n))
```
Next we read in and process the Carver text in the same manner.
```
carver_pos =
data_frame(file = dir('data/texts_raw/carver/', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text) %>%
unnest_tokens(word, text, token='words') %>%
inner_join(parts_of_speech) %>%
count(pos) %>%
mutate(work='love',
prop=n/sum(n))
```
This visualization depicts the proportion of occurrence for each part of speech across the works. It would appear Barthelme is fairly consistent, and also that relative to the Barthelme texts, Carver preferred nouns and pronouns.
### More taggin’
More sophisticated POS tagging would require the context of the sentence structure. Luckily there are tools to help with that here, in particular via the openNLP package. In addition, it will require a certain language model to be installed (English is only one of many available). I don’t recommend doing so unless you are really interested in this (the openNLPmodels.en package is fairly large).
We’ll reexamine the Barthelme texts above with this more involved approach. Initially we’ll need to get the English\-based tagger we need and load the libraries.
```
# install.packages("openNLPmodels.en", repos = "http://datacube.wu.ac.at/", type = "source")
library(NLP)
library(tm) # make sure to load this prior to openNLP
library(openNLP)
library(openNLPmodels.en)
```
Next comes the processing. This more or less follows the help file example for `?Maxent_POS_Tag_Annotator`. Given the several steps involved I show only the processing for one text for clarity. Ideally you’d write a function, and use a group\_by approach, to process each of the texts of interest.
```
load('data/barthelme_start.RData')
baby_string0 = barth0 %>%
filter(id=='baby.txt')
baby_string = unlist(baby_string0$text) %>%
paste(collapse=' ') %>%
as.String
init_s_w = annotate(baby_string, list(Maxent_Sent_Token_Annotator(),
Maxent_Word_Token_Annotator()))
pos_res = annotate(baby_string, Maxent_POS_Tag_Annotator(), init_s_w)
word_subset = subset(pos_res, type=='word')
tags = sapply(word_subset$features , '[[', "POS")
baby_pos = data_frame(word=baby_string[word_subset], pos=tags) %>%
filter(!str_detect(pos, pattern='[[:punct:]]'))
```
Let’s take a look. I’ve also done the other Barthelme texts as well for comparison.
| word | pos | text |
| --- | --- | --- |
| The | DT | baby |
| first | JJ | baby |
| thing | NN | baby |
| the | DT | baby |
| baby | NN | baby |
| did | VBD | baby |
| wrong | JJ | baby |
| was | VBD | baby |
| to | TO | baby |
| tear | VB | baby |
| pages | NNS | baby |
| out | IN | baby |
| of | IN | baby |
| her | PRP$ | baby |
| books | NNS | baby |
As we can see, we have quite a few more POS to deal with here. They come from the [Penn Treebank](https://en.wikipedia.org/wiki/Treebank). The following table notes what the acronyms stand for. I don’t pretend to know all the facets to this.
Plotting the differences, we now see a little more distinction between *The Balloon* and the other two texts. It is more likely to use the determiners, adjectives, singular nouns, and less likely to use personal pronouns and verbs (including past tense).
Tagging summary
---------------
For more information, consult the following:
* [Penn Treebank](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1603&context=cis_reports)
* [Maxent function](http://maxent.sourceforge.net/about.html)
As with the sentiment analysis demos, the above should be seen only starting point for getting a sense of what you’re dealing with. The ‘maximum entropy’ approach is just one way to go about things. Other models include hidden Markov models, conditional random fields, and more recently, deep learning techniques. Goals might include text prediction (i.e. the thing your phone always gets wrong), translation, and more.
POS Exercise
------------
As this is a more involved sort of analysis, if nothing else in terms of the tools required, as an exercise I would suggest starting with a cleaned text, and seeing if the above code in the last example can get you to the result of having parsed text. Otherwise, assuming you’ve downloaded the appropriate packages, feel free to play around with some strings of your choosing as follows.
```
string = 'Colorless green ideas sleep furiously'
initial_result = string %>%
annotate(list(Maxent_Sent_Token_Annotator(),
Maxent_Word_Token_Annotator())) %>%
annotate(string, Maxent_POS_Tag_Annotator(), .) %>%
subset(type=='word')
sapply(initial_result$features , '[[', "POS") %>% table
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/part-of-speech-tagging.html |
Part of Speech Tagging
======================
As an initial review of parts of speech, if you need a refresher, the following Schoolhouse Rocks videos should get you squared away:
* [A noun is a person, place, or thing.](https://youtu.be/h0m89e9oZko)
* [Interjections](https://youtu.be/YkAX7Vk3JEw)
* [Pronouns](https://youtu.be/Eu1ciVFbecw)
* [Verbs](https://youtu.be/US8mGU1MzYw)
* [Unpack your adjectives](https://youtu.be/NkuuZEey_bs)
* [Lolly Lolly Lolly Get Your Adverbs Here](https://youtu.be/14fXm4FOMPM)
* [Conjunction Junction](https://youtu.be/RPoBE-E8VOc) (personal fave)
Aside from those, you can also learn how bills get passed, about being a victim of gravity, a comparison of the decimal to other numeric systems used by alien species (I recommend the Chavez remix), and a host of other useful things.
Basic idea
----------
With part\-of\-speech tagging, we classify a word with its corresponding part of speech. The following provides an example.
| JJ | JJ | NNS | VBP | RB |
| --- | --- | --- | --- | --- |
| Colorless | green | ideas | sleep | furiously. |
We have two adjectives (JJ), a plural noun (NNS), a verb (VBP), and an adverb (RB).
Common analysis may then be used to predict POS given the current state of the text, comparing the grammar of different texts, human\-computer interaction, or translation from one language to another. In addition, using POS information would make for richer sentiment analysis as well.
POS Examples
------------
The following approach to POS\-tagging is very similar to what we did for sentiment analysis as depicted previously. We have a POS dictionary, and can use an inner join to attach the words to their POS. Unfortunately, this approach is unrealistically simplistic, as additional steps would need to be taken to ensure words are correctly classified. For example, without more information, we are unable to tell if some words are being used as nouns or verbs (human being vs. being a problematic part of speech). However, this example can serve as a starting point.
### Barthelme \& Carver
In the following we’ll compare three texts from Donald Barthelme:
* *The Balloon*
* *The First Thing The Baby Did Wrong*
* *Some Of Us Had Been Threatening Our Friend Colby*
As another comparison, I’ve included Raymond Carver’s *What we talk about when we talk about love*, the unedited version. First we’ll load an unnested object from the sentiment analysis, the barth object. Then for each work we create a sentence id, unnest the data to words, join the POS data, then create counts/proportions for each POS.
```
load('data/barth_sentences.RData')
barthelme_pos = barth %>%
mutate(work = str_replace(work, '.txt', '')) %>% # remove file extension
group_by(work) %>%
mutate(sentence_id = 1:n()) %>% # create a sentence id
unnest_tokens(word, sentence, drop=F) %>% # get words
inner_join(parts_of_speech) %>% # join POS
count(pos) %>% # count
mutate(prop=n/sum(n))
```
Next we read in and process the Carver text in the same manner.
```
carver_pos =
data_frame(file = dir('data/texts_raw/carver/', full.names = TRUE)) %>%
mutate(text = map(file, read_lines)) %>%
transmute(work = basename(file), text) %>%
unnest(text) %>%
unnest_tokens(word, text, token='words') %>%
inner_join(parts_of_speech) %>%
count(pos) %>%
mutate(work='love',
prop=n/sum(n))
```
This visualization depicts the proportion of occurrence for each part of speech across the works. It would appear Barthelme is fairly consistent, and also that relative to the Barthelme texts, Carver preferred nouns and pronouns.
### More taggin’
More sophisticated POS tagging would require the context of the sentence structure. Luckily there are tools to help with that here, in particular via the openNLP package. In addition, it will require a certain language model to be installed (English is only one of many available). I don’t recommend doing so unless you are really interested in this (the openNLPmodels.en package is fairly large).
We’ll reexamine the Barthelme texts above with this more involved approach. Initially we’ll need to get the English\-based tagger we need and load the libraries.
```
# install.packages("openNLPmodels.en", repos = "http://datacube.wu.ac.at/", type = "source")
library(NLP)
library(tm) # make sure to load this prior to openNLP
library(openNLP)
library(openNLPmodels.en)
```
Next comes the processing. This more or less follows the help file example for `?Maxent_POS_Tag_Annotator`. Given the several steps involved I show only the processing for one text for clarity. Ideally you’d write a function, and use a group\_by approach, to process each of the texts of interest.
```
load('data/barthelme_start.RData')
baby_string0 = barth0 %>%
filter(id=='baby.txt')
baby_string = unlist(baby_string0$text) %>%
paste(collapse=' ') %>%
as.String
init_s_w = annotate(baby_string, list(Maxent_Sent_Token_Annotator(),
Maxent_Word_Token_Annotator()))
pos_res = annotate(baby_string, Maxent_POS_Tag_Annotator(), init_s_w)
word_subset = subset(pos_res, type=='word')
tags = sapply(word_subset$features , '[[', "POS")
baby_pos = data_frame(word=baby_string[word_subset], pos=tags) %>%
filter(!str_detect(pos, pattern='[[:punct:]]'))
```
Let’s take a look. I’ve also done the other Barthelme texts as well for comparison.
| word | pos | text |
| --- | --- | --- |
| The | DT | baby |
| first | JJ | baby |
| thing | NN | baby |
| the | DT | baby |
| baby | NN | baby |
| did | VBD | baby |
| wrong | JJ | baby |
| was | VBD | baby |
| to | TO | baby |
| tear | VB | baby |
| pages | NNS | baby |
| out | IN | baby |
| of | IN | baby |
| her | PRP$ | baby |
| books | NNS | baby |
As we can see, we have quite a few more POS to deal with here. They come from the [Penn Treebank](https://en.wikipedia.org/wiki/Treebank). The following table notes what the acronyms stand for. I don’t pretend to know all the facets to this.
Plotting the differences, we now see a little more distinction between *The Balloon* and the other two texts. It is more likely to use the determiners, adjectives, singular nouns, and less likely to use personal pronouns and verbs (including past tense).
Tagging summary
---------------
For more information, consult the following:
* [Penn Treebank](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1603&context=cis_reports)
* [Maxent function](http://maxent.sourceforge.net/about.html)
As with the sentiment analysis demos, the above should be seen only starting point for getting a sense of what you’re dealing with. The ‘maximum entropy’ approach is just one way to go about things. Other models include hidden Markov models, conditional random fields, and more recently, deep learning techniques. Goals might include text prediction (i.e. the thing your phone always gets wrong), translation, and more.
POS Exercise
------------
As this is a more involved sort of analysis, if nothing else in terms of the tools required, as an exercise I would suggest starting with a cleaned text, and seeing if the above code in the last example can get you to the result of having parsed text. Otherwise, assuming you’ve downloaded the appropriate packages, feel free to play around with some strings of your choosing as follows.
```
string = 'Colorless green ideas sleep furiously'

initial_result = string %>%
  annotate(list(Maxent_Sent_Token_Annotator(),
                Maxent_Word_Token_Annotator())) %>%
  annotate(string, Maxent_POS_Tag_Annotator(), .) %>%
  subset(type == 'word')

sapply(initial_result$features, '[[', 'POS') %>% table()
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/part-of-speech-tagging.html |
Part of Speech Tagging
======================
As an initial review of parts of speech, if you need a refresher, the following Schoolhouse Rock videos should get you squared away:
* [A noun is a person, place, or thing.](https://youtu.be/h0m89e9oZko)
* [Interjections](https://youtu.be/YkAX7Vk3JEw)
* [Pronouns](https://youtu.be/Eu1ciVFbecw)
* [Verbs](https://youtu.be/US8mGU1MzYw)
* [Unpack your adjectives](https://youtu.be/NkuuZEey_bs)
* [Lolly Lolly Lolly Get Your Adverbs Here](https://youtu.be/14fXm4FOMPM)
* [Conjunction Junction](https://youtu.be/RPoBE-E8VOc) (personal fave)
Aside from those, you can also learn how bills get passed, about being a victim of gravity, a comparison of the decimal to other numeric systems used by alien species (I recommend the Chavez remix), and a host of other useful things.
Basic idea
----------
With part\-of\-speech tagging, we classify a word with its corresponding part of speech. The following provides an example.
| JJ | JJ | NNS | VBP | RB |
| --- | --- | --- | --- | --- |
| Colorless | green | ideas | sleep | furiously. |
We have two adjectives (JJ), a plural noun (NNS), a verb (VBP), and an adverb (RB).
Common analyses may then use this information to predict a part of speech given the current state of the text, to compare the grammar of different texts, or to support applications such as human\-computer interaction and translation from one language to another. In addition, POS information can make for richer sentiment analysis as well.
POS Examples
------------
The following approach to POS\-tagging is very similar to what we did for sentiment analysis as depicted previously. We have a POS dictionary, and can use an inner join to attach the words to their POS. Unfortunately, this approach is unrealistically simplistic, as additional steps would need to be taken to ensure words are correctly classified. For example, without more information, we are unable to tell if some words are being used as nouns or verbs (human being vs. being a problematic part of speech). However, this example can serve as a starting point.
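As a quick, hypothetical illustration of that ambiguity, the tidytext `parts_of_speech` dictionary (used in the join below) returns more than one tag for many common words:
```
library(dplyr)
library(tidytext)

# many words carry more than one possible tag in the dictionary,
# which is exactly why the simple inner-join approach is ambiguous
parts_of_speech %>%
  filter(word %in% c('love', 'being'))
```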
### Barthelme \& Carver
In the following we’ll compare three texts from Donald Barthelme:
* *The Balloon*
* *The First Thing The Baby Did Wrong*
* *Some Of Us Had Been Threatening Our Friend Colby*
As another comparison, I’ve included Raymond Carver’s *What we talk about when we talk about love*, the unedited version. First we’ll load an unnested object from the sentiment analysis, the barth object. Then for each work we create a sentence id, unnest the data to words, join the POS data, then create counts/proportions for each POS.
```
library(tidyverse)   # dplyr, stringr, purrr, readr, etc. used throughout
library(tidytext)    # unnest_tokens() and the parts_of_speech dictionary

load('data/barth_sentences.RData')

barthelme_pos = barth %>%
  mutate(work = str_replace(work, '.txt', '')) %>%   # remove file extension
  group_by(work) %>%
  mutate(sentence_id = 1:n()) %>%                    # create a sentence id
  unnest_tokens(word, sentence, drop = FALSE) %>%    # get words
  inner_join(parts_of_speech) %>%                    # join POS
  count(pos) %>%                                     # count within work
  mutate(prop = n / sum(n))                          # proportion within work
```
Next we read in and process the Carver text in the same manner.
```
carver_pos =
  data_frame(file = dir('data/texts_raw/carver/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%           # read each file's lines
  transmute(work = basename(file), text) %>%
  unnest(text) %>%
  unnest_tokens(word, text, token = 'words') %>%
  inner_join(parts_of_speech) %>%
  count(pos) %>%
  mutate(work = 'love',
         prop = n / sum(n))
```
A visualization of the proportion of occurrence for each part of speech across the works (a sketch follows below) suggests that Barthelme is fairly consistent, and that, relative to the Barthelme texts, Carver preferred nouns and pronouns.
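The original figure isn’t reproduced here, but a comparable picture can be drawn by binding the two summaries and plotting the proportions by POS. This is only a sketch, assuming `barthelme_pos` and `carver_pos` exist as created above:
```
library(dplyr)
library(ggplot2)

bind_rows(barthelme_pos, carver_pos) %>%
  ggplot(aes(x = pos, y = prop, color = work, group = work)) +
  geom_point() +
  coord_flip() +
  theme_minimal()
```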
### More taggin’
More sophisticated POS tagging requires the context of the sentence structure, and there are tools to help with that, in particular the openNLP package. It also requires a language\-specific model to be installed (English is only one of many available). I don’t recommend installing one unless you are really interested in this (the openNLPmodels.en package is fairly large).
We’ll reexamine the Barthelme texts above with this more involved approach. First we’ll need to install the English tagging model and load the libraries.
```
# install.packages("openNLPmodels.en", repos = "http://datacube.wu.ac.at/", type = "source")
library(NLP)
library(tm) # make sure to load this prior to openNLP
library(openNLP)
library(openNLPmodels.en)
```
Next comes the processing. This more or less follows the help file example for `?Maxent_POS_Tag_Annotator`. Given the several steps involved, I show the processing for only one text for clarity. Ideally you’d write a function and use a group\_by approach to process each of the texts of interest; a sketch of such a wrapper follows the code below.
```
load('data/barthelme_start.RData')  # assumes the tidyverse is also loaded

# collapse the text of one work into a single String object
baby_string0 = barth0 %>%
  filter(id == 'baby.txt')
baby_string = unlist(baby_string0$text) %>%
  paste(collapse = ' ') %>%
  as.String()

# annotate sentences and words, then add POS tags
init_s_w = annotate(baby_string, list(Maxent_Sent_Token_Annotator(),
                                      Maxent_Word_Token_Annotator()))
pos_res = annotate(baby_string, Maxent_POS_Tag_Annotator(), init_s_w)

# keep the word annotations, extract their POS tags, and drop punctuation tags
word_subset = subset(pos_res, type == 'word')
tags = sapply(word_subset$features, '[[', 'POS')
baby_pos = data_frame(word = baby_string[word_subset], pos = tags) %>%
  filter(!str_detect(pos, pattern = '[[:punct:]]'))
```
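As mentioned above, the cleaner way to handle several texts is to wrap these steps in a function and apply it to each work. The following is only a sketch, assuming `barth0` has `id` and `text` columns as in the single\-text example, that the packages above (plus the tidyverse) are loaded, and with `tag_text` and `all_pos` being made\-up names for illustration:
```
tag_text = function(raw_text) {
  s = raw_text %>%
    paste(collapse = ' ') %>%
    as.String()

  # sentence/word annotations, then POS tags, as in the single-text example
  sent_word = annotate(s, list(Maxent_Sent_Token_Annotator(),
                               Maxent_Word_Token_Annotator()))
  pos_res   = annotate(s, Maxent_POS_Tag_Annotator(), sent_word)
  words     = subset(pos_res, type == 'word')

  data_frame(word = s[words],
             pos  = sapply(words$features, '[[', 'POS')) %>%
    filter(!str_detect(pos, pattern = '[[:punct:]]'))
}

# apply it to each work and collect the results
all_pos = barth0 %>%
  group_by(id) %>%
  summarise(tagged = list(tag_text(unlist(text)))) %>%
  unnest(tagged)
```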
Let’s take a look. I’ve done the other Barthelme texts as well for comparison.
| word | pos | text |
| --- | --- | --- |
| The | DT | baby |
| first | JJ | baby |
| thing | NN | baby |
| the | DT | baby |
| baby | NN | baby |
| did | VBD | baby |
| wrong | JJ | baby |
| was | VBD | baby |
| to | TO | baby |
| tear | VB | baby |
| pages | NNS | baby |
| out | IN | baby |
| of | IN | baby |
| her | PRP$ | baby |
| books | NNS | baby |
As we can see, we have quite a few more POS to deal with here. The tags come from the [Penn Treebank](https://en.wikipedia.org/wiki/Treebank); the ones appearing in the example above are spelled out below. I don’t pretend to know all the facets to this.
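Since the original tag table isn’t reproduced here, the standard Penn Treebank definitions for the tags seen in the example can be written out directly:
```
# Penn Treebank definitions for the tags in the example above
penn_tags = c(
  DT     = 'determiner',
  IN     = 'preposition or subordinating conjunction',
  JJ     = 'adjective',
  NN     = 'noun, singular or mass',
  NNS    = 'noun, plural',
  `PRP$` = 'pronoun, possessive',
  TO     = 'to',
  VB     = 'verb, base form',
  VBD    = 'verb, past tense'
)
```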
Plotting the differences, we now see a little more distinction between *The Balloon* and the other two texts. It is more likely to use determiners, adjectives, and singular nouns, and less likely to use personal pronouns and verbs (including past\-tense verbs).
Tagging summary
---------------
For more information, consult the following:
* [Penn Treebank](http://repository.upenn.edu/cgi/viewcontent.cgi?article=1603&context=cis_reports)
* [Maxent function](http://maxent.sourceforge.net/about.html)
As with the sentiment analysis demos, the above should be seen as only a starting point for getting a sense of what you’re dealing with. The ‘maximum entropy’ approach is just one way to go about things; other models include hidden Markov models, conditional random fields, and, more recently, deep learning techniques. Goals might include text prediction (i.e. the thing your phone always gets wrong), translation, and more.
POS Exercise
------------
As this is a more involved sort of analysis, if nothing else in terms of the tools required, I suggest starting with a cleaned text and seeing whether the code in the last example can get you to a parsed, tagged result. Otherwise, assuming you’ve installed the appropriate packages, feel free to play around with some strings of your choosing as follows.
```
string = 'Colorless green ideas sleep furiously'

initial_result = string %>%
  annotate(list(Maxent_Sent_Token_Annotator(),
                Maxent_Word_Token_Annotator())) %>%
  annotate(string, Maxent_POS_Tag_Annotator(), .) %>%
  subset(type == 'word')

sapply(initial_result$features, '[[', 'POS') %>% table()
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/topic-modeling.html |
Topic modeling
==============
Basic idea
----------
Topic modeling as typically conducted is a tool for much more than text. The primary technique of Latent Dirichlet Allocation (LDA) should be as much a part of your toolbox as principal components and factor analysis. It can be seen merely as a dimension reduction approach, but it can also be valued for its rich interpretative quality. The basic idea is that we’ll take a whole lot of features and boil them down to a few ‘topics’. In this sense LDA is akin to discrete PCA. Another way to think about this is from the perspective of factor analysis, where we are keenly interested in interpretation of the result, and want to know both which terms are associated with which topics, and which documents are more likely to present which topics.
In the standard setting, to be able to conduct such an analysis from text one needs a document\-term matrix, where rows represent documents, and columns terms. Each cell is a count of how many times the term occurs in the document. Terms are typically words, but could be any n\-gram of interest.
Outside of text analysis terms could represent bacterial composition, genetic information, or whatever the researcher is interested in. Likewise, documents can be people, geographic regions, etc. The gist is, despite the common text\-based application, that what constitutes a document or term is dependent upon the research question, and LDA can be applied in a variety of research settings.
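As a small, self\-contained sketch of what that looks like in R, tidytext’s `cast_dtm()` turns tidy counts into a document\-term matrix; the tiny `word_counts` data frame below is made up purely for illustration:
```
library(dplyr)
library(tidytext)

# hypothetical tidy counts: one row per (document, term) pair
word_counts = tibble(
  document = c('doc1', 'doc1', 'doc2', 'doc2', 'doc2'),
  word     = c('love', 'night', 'love', 'death', 'heaven'),
  n        = c(2, 1, 1, 3, 1)
)

# rows = documents, columns = terms, cells = counts
dtm = word_counts %>%
  cast_dtm(document, word, n)
```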
Steps
-----
When it comes to text analysis, most of the time in topic modeling is spent on processing the text itself: importing or scraping it, dealing with capitalization, punctuation, and encoding issues, and removing stopwords and other miscellaneous common words. It is a highly iterative process, such that once you get to the document\-term matrix, you’re just going to find the stuff that was missed before and repeat the process with new ‘cleaning parameters’ in place. So getting to the analysis stage is the hard part; a minimal sketch of that kind of cleaning pass follows this paragraph. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish), which comprises 5 acts, of which the first four and some additional scenes represent all the processing needed to get to the final scene of topic modeling. In what follows we’ll start at the end of that journey.
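The following is only a sketch of that kind of cleaning pass, assuming a hypothetical data frame `docs` with `document` and `text` columns; a real pass will involve many more iterations and corpus\-specific decisions:
```
library(dplyr)
library(stringr)
library(tidytext)

cleaned = docs %>%
  unnest_tokens(word, text) %>%            # lowercases and strips punctuation
  anti_join(stop_words, by = 'word') %>%   # drop common stopwords
  filter(!str_detect(word, '[0-9]'))       # drop tokens containing digits

# counts ready to be cast to a document-term matrix
cleaned %>%
  count(document, word, sort = TRUE)
```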
Topic Model Example
-------------------
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly other packages for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(quanteda)      # assumed: shakes_dtm is a quanteda dfm, and convert() comes from quanteda
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for the top terms. Just looking at the top 10, *love* occurs in all of them, and *god* and *heart* as well, but we could have guessed this from how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance a term’s probability of occurrence within a document against its *exclusivity*, or how likely a term is to occur in only one particular topic; a tidy way to pull out the underlying probabilities is sketched below. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
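If you prefer a tidy workflow, the per\-topic term probabilities can also be pulled out with tidytext’s `tidy()` method for LDA objects; a sketch:
```
library(dplyr)
library(tidytext)

shakes_10 %>%
  tidy(matrix = 'beta') %>%      # one row per topic-term pair
  group_by(topic) %>%
  top_n(10, beta) %>%            # ten most probable terms per topic
  arrange(topic, desc(beta))
```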
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple of things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification probably only works well for the historical works, as they cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
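The heatmap itself isn’t reproduced here, but the underlying document\-topic probabilities are easy to get at. A rough sketch follows; note the original clustering used a cosine distance matrix, whereas base `heatmap()` defaults to Euclidean distance:
```
# documents x topics probability matrix
doc_topic = posterior(shakes_10)$topics

heatmap(doc_topic,
        scale = 'none',
        col   = hcl.colors(25, 'Blues', rev = TRUE))
```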
Extensions
----------
There are extensions of LDA used in topic modeling that will allow your analysis to go even further.
* Correlated Topic Models: the standard LDA does not estimate the topic correlation as part of the process.
* Supervised LDA: In this scenario, topics can be used for prediction, e.g. the classification of tragedy, comedy etc. (similar to PC regression)
* Structured Topic Models: Here we want to find the relevant covariates that can explain the topics (e.g. year written, author sex, etc.)
* Other: There are still other ways to examine topics.
Topic Model Exercise
--------------------
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles in the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
Basic idea
----------
Topic modeling as typically conducted is a tool for much more than text. The primary technique of Latent Dirichlet Allocation (LDA) should be as much a part of your toolbox as principal components and factor analysis. It can be seen merely as a dimension reduction approach, but it can also be used for its rich interpretative quality as well. The basic idea is that we’ll take a whole lot of features and boil them down to a few ‘topics’. In this sense LDA is akin to discrete PCA. Another way to think about this is more from the perspective of factor analysis, where we are keenly interested in interpretation of the result, and want to know both what terms are associated with which topics, and what documents are more likely to present which topics.
In the standard setting, to be able to conduct such an analysis from text one needs a document\-term matrix, where rows represent documents, and columns terms. Each cell is a count of how many times the term occurs in the document. Terms are typically words, but could be any n\-gram of interest.
Outside of text analysis terms could represent bacterial composition, genetic information, or whatever the researcher is interested in. Likewise, documents can be people, geographic regions, etc. The gist is, despite the common text\-based application, that what constitutes a document or term is dependent upon the research question, and LDA can be applied in a variety of research settings.
Steps
-----
When it comes to text analysis, most of the time in topic modeling is spent on processing the text itself. Importing/scraping it, dealing with capitalization, punctuation, removing stopwords, dealing with encoding issues, removing other miscellaneous common words. It is a highly iterative process such that once you get to the document\-term matrix, you’re just going to find the stuff that was missed before and repeat the process with new ‘cleaning parameters’ in place. So getting to the analysis stage is the hard part. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish), which comprises 5 acts, of which the first four and some additional scenes represent all the processing needed to get to the final scene of topic modeling. In what follows we’ll start at the end of that journey.
Topic Model Example
-------------------
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly other packages for post-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
# the stemmed Shakespeare document-term matrix created in the Shakespeare section
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
library(quanteda)   # convert() comes from quanteda
# convert the document-term matrix to the format topicmodels expects, then fit a 10-topic LDA
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
# top 20 most probable terms for each topic
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in the top terms across these topics. Just looking at the top 10, *love* occurs in all of them, and *god* and *heart* as well, but we could have guessed this just by looking at how often those words occur in general. Other measures can be used to assess term importance, such as those that seek to balance a term’s probability of occurrence within a topic against its *exclusivity*, or how likely the term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
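One simple way to poke at this (an ad hoc illustration, not the exclusivity measures referenced above) is to pull the per-topic term probabilities out of the fitted model with `posterior()` and scale each term by its average probability across topics:

```
# per-topic term probabilities (rows = topics, columns = terms)
beta = posterior(shakes_10)$terms

# a rough 'distinctiveness' score: a term's probability within a topic relative
# to its average probability across all topics (a lift-style measure)
lift = sweep(beta, 2, colMeans(beta), "/")

# the 10 most distinctive terms for topic 1 under this ad hoc score
head(sort(lift[1, ], decreasing = TRUE), 10)
```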
#### Examine Document-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
# the two most likely topics for each document (transposed so documents are rows)
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1. That topic is affiliated with the (stemmed) words love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
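A rough sketch of how such a display could be built from the fitted model (not the author’s actual plotting code) is:

```
# document-topic probabilities (rows = documents, columns = topics)
doc_topics = posterior(shakes_10)$topics

# cosine distance between documents, computed by hand as 1 minus cosine similarity
norms = sqrt(rowSums(doc_topics^2))
doc_sim = (doc_topics %*% t(doc_topics)) / (norms %o% norms)
doc_dist = as.dist(1 - doc_sim)

# hierarchical clustering on that distance, then a base-R heatmap with the dendrogram;
# white-to-black colors so darker cells mean higher topic probability
hc = hclust(doc_dist)
heatmap(doc_topics, Rowv = as.dendrogram(hc), Colv = NA, scale = "none",
        col = gray.colors(50, start = 1, end = 0))
```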
A couple of things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, the traditional classification probably only works for the historical works, as they cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love-oriented works and the remaining poems, and then everything else.
Alternatively, one could simply classify the works based on their most probable topics, which would make more sense if grouping the works is in fact the goal. The following visualization orders the works by their most probable topic, with the topics themselves ordered by how likely they are across all documents.
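A minimal sketch of that kind of grouping, assigning each work to its single most probable topic, might be:

```
# the single most likely topic per document, as a named vector
primary_topic = topics(shakes_10, 1)

# group the work titles by their primary topic
split(names(primary_topic), primary_topic)
```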
So we can see that topic modeling can also be used to group the documents themselves according to the sorts of topics they are most likely to express.
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
Extensions
----------
There are extensions of LDA used in topic modeling that will allow your analysis to go even further.
* Correlated Topic Models: the standard LDA does not estimate the topic correlation as part of the process.
* Supervised LDA: In this scenario, topics can be used for prediction, e.g. the classification of tragedy, comedy etc. (similar to PC regression)
* Structured Topic Models: Here we want to find the relevant covariates that can explain the topics (e.g. year written, author sex, etc.)
* Other: There are still other ways to examine topics.
Topic Model Exercise
--------------------
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/topic-modeling.html |
Topic modeling
==============
Basic idea
----------
Topic modeling as typically conducted is a tool for much more than text. The primary technique of Latent Dirichlet Allocation (LDA) should be as much a part of your toolbox as principal components and factor analysis. It can be seen merely as a dimension reduction approach, but it can also be used for its rich interpretative quality as well. The basic idea is that we’ll take a whole lot of features and boil them down to a few ‘topics’. In this sense LDA is akin to discrete PCA. Another way to think about this is more from the perspective of factor analysis, where we are keenly interested in interpretation of the result, and want to know both what terms are associated with which topics, and what documents are more likely to present which topics.
In the standard setting, to be able to conduct such an analysis from text one needs a document\-term matrix, where rows represent documents, and columns terms. Each cell is a count of how many times the term occurs in the document. Terms are typically words, but could be any n\-gram of interest.
Outside of text analysis terms could represent bacterial composition, genetic information, or whatever the researcher is interested in. Likewise, documents can be people, geographic regions, etc. The gist is, despite the common text\-based application, that what constitutes a document or term is dependent upon the research question, and LDA can be applied in a variety of research settings.
Steps
-----
When it comes to text analysis, most of the time in topic modeling is spent on processing the text itself. Importing/scraping it, dealing with capitalization, punctuation, removing stopwords, dealing with encoding issues, removing other miscellaneous common words. It is a highly iterative process such that once you get to the document\-term matrix, you’re just going to find the stuff that was missed before and repeat the process with new ‘cleaning parameters’ in place. So getting to the analysis stage is the hard part. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish), which comprises 5 acts, of which the first four and some additional scenes represent all the processing needed to get to the final scene of topic modeling. In what follows we’ll start at the end of that journey.
Topic Model Example
-------------------
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
Extensions
----------
There are extensions of LDA used in topic modeling that will allow your analysis to go even further.
* Correlated Topic Models: the standard LDA does not estimate the topic correlation as part of the process.
* Supervised LDA: In this scenario, topics can be used for prediction, e.g. the classification of tragedy, comedy etc. (similar to PC regression)
* Structured Topic Models: Here we want to find the relevant covariates that can explain the topics (e.g. year written, author sex, etc.)
* Other: There are still other ways to examine topics.
Topic Model Exercise
--------------------
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
Basic idea
----------
Topic modeling as typically conducted is a tool for much more than text. The primary technique of Latent Dirichlet Allocation (LDA) should be as much a part of your toolbox as principal components and factor analysis. It can be seen merely as a dimension reduction approach, but it can also be used for its rich interpretative quality as well. The basic idea is that we’ll take a whole lot of features and boil them down to a few ‘topics’. In this sense LDA is akin to discrete PCA. Another way to think about this is more from the perspective of factor analysis, where we are keenly interested in interpretation of the result, and want to know both what terms are associated with which topics, and what documents are more likely to present which topics.
In the standard setting, to be able to conduct such an analysis from text one needs a document\-term matrix, where rows represent documents, and columns terms. Each cell is a count of how many times the term occurs in the document. Terms are typically words, but could be any n\-gram of interest.
Outside of text analysis terms could represent bacterial composition, genetic information, or whatever the researcher is interested in. Likewise, documents can be people, geographic regions, etc. The gist is, despite the common text\-based application, that what constitutes a document or term is dependent upon the research question, and LDA can be applied in a variety of research settings.
Steps
-----
When it comes to text analysis, most of the time in topic modeling is spent on processing the text itself. Importing/scraping it, dealing with capitalization, punctuation, removing stopwords, dealing with encoding issues, removing other miscellaneous common words. It is a highly iterative process such that once you get to the document\-term matrix, you’re just going to find the stuff that was missed before and repeat the process with new ‘cleaning parameters’ in place. So getting to the analysis stage is the hard part. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish), which comprises 5 acts, of which the first four and some additional scenes represent all the processing needed to get to the final scene of topic modeling. In what follows we’ll start at the end of that journey.
Topic Model Example
-------------------
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
Extensions
----------
There are extensions of LDA used in topic modeling that will allow your analysis to go even further.
* Correlated Topic Models: the standard LDA does not estimate the topic correlation as part of the process.
* Supervised LDA: In this scenario, topics can be used for prediction, e.g. the classification of tragedy, comedy etc. (similar to PC regression)
* Structured Topic Models: Here we want to find the relevant covariates that can explain the topics (e.g. year written, author sex, etc.)
* Other: There are still other ways to examine topics.
Topic Model Exercise
--------------------
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/topic-modeling.html |
Topic modeling
==============
Basic idea
----------
Topic modeling as typically conducted is a tool for much more than text. The primary technique of Latent Dirichlet Allocation (LDA) should be as much a part of your toolbox as principal components and factor analysis. It can be seen merely as a dimension reduction approach, but it can also be used for its rich interpretative quality as well. The basic idea is that we’ll take a whole lot of features and boil them down to a few ‘topics’. In this sense LDA is akin to discrete PCA. Another way to think about this is more from the perspective of factor analysis, where we are keenly interested in interpretation of the result, and want to know both what terms are associated with which topics, and what documents are more likely to present which topics.
In the standard setting, to be able to conduct such an analysis from text one needs a document\-term matrix, where rows represent documents, and columns terms. Each cell is a count of how many times the term occurs in the document. Terms are typically words, but could be any n\-gram of interest.
Outside of text analysis terms could represent bacterial composition, genetic information, or whatever the researcher is interested in. Likewise, documents can be people, geographic regions, etc. The gist is, despite the common text\-based application, that what constitutes a document or term is dependent upon the research question, and LDA can be applied in a variety of research settings.
Steps
-----
When it comes to text analysis, most of the time in topic modeling is spent on processing the text itself. Importing/scraping it, dealing with capitalization, punctuation, removing stopwords, dealing with encoding issues, removing other miscellaneous common words. It is a highly iterative process such that once you get to the document\-term matrix, you’re just going to find the stuff that was missed before and repeat the process with new ‘cleaning parameters’ in place. So getting to the analysis stage is the hard part. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish), which comprises 5 acts, of which the first four and some additional scenes represent all the processing needed to get to the final scene of topic modeling. In what follows we’ll start at the end of that journey.
Topic Model Example
-------------------
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
Extensions
----------
There are extensions of LDA used in topic modeling that will allow your analysis to go even further.
* Correlated Topic Models: the standard LDA does not estimate the topic correlation as part of the process.
* Supervised LDA: In this scenario, topics can be used for prediction, e.g. the classification of tragedy, comedy etc. (similar to PC regression)
* Structured Topic Models: Here we want to find the relevant covariates that can explain the topics (e.g. year written, author sex, etc.)
* Other: There are still other ways to examine topics.
Topic Model Exercise
--------------------
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
Basic idea
----------
Topic modeling as typically conducted is a tool for much more than text. The primary technique of Latent Dirichlet Allocation (LDA) should be as much a part of your toolbox as principal components and factor analysis. It can be seen merely as a dimension reduction approach, but it can also be used for its rich interpretative quality as well. The basic idea is that we’ll take a whole lot of features and boil them down to a few ‘topics’. In this sense LDA is akin to discrete PCA. Another way to think about this is more from the perspective of factor analysis, where we are keenly interested in interpretation of the result, and want to know both what terms are associated with which topics, and what documents are more likely to present which topics.
In the standard setting, to be able to conduct such an analysis from text one needs a document\-term matrix, where rows represent documents, and columns terms. Each cell is a count of how many times the term occurs in the document. Terms are typically words, but could be any n\-gram of interest.
Outside of text analysis terms could represent bacterial composition, genetic information, or whatever the researcher is interested in. Likewise, documents can be people, geographic regions, etc. The gist is, despite the common text\-based application, that what constitutes a document or term is dependent upon the research question, and LDA can be applied in a variety of research settings.
Steps
-----
When it comes to text analysis, most of the time in topic modeling is spent on processing the text itself. Importing/scraping it, dealing with capitalization, punctuation, removing stopwords, dealing with encoding issues, removing other miscellaneous common words. It is a highly iterative process such that once you get to the document\-term matrix, you’re just going to find the stuff that was missed before and repeat the process with new ‘cleaning parameters’ in place. So getting to the analysis stage is the hard part. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish), which comprises 5 acts, of which the first four and some additional scenes represent all the processing needed to get to the final scene of topic modeling. In what follows we’ll start at the end of that journey.
Topic Model Example
-------------------
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
### Shakespeare
In this example, we’ll look at Shakespeare’s plays and poems, using a topic model with 10 topics. For our needs, we’ll use the topicmodels package for the analysis, and mostly others for post\-processing. Due to the large number of terms, this could take a while to run depending on your machine (maybe a minute or two). We can also see how things compare with the academic classifications for the texts.
```
load('Data/shakes_dtm_stemmed.RData')
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10)
```
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
#### Examine Terms within Topics
One of the first things to do is attempt to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the [Shakespeare section](shakespeare.html#shakespeare-start-to-finish) for some examples of those.
#### Examine Document\-Topic Expression
Next we can look at which documents are more likely to express each topic.
```
t(topics(shakes_10, 2))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram. The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[9](#fn9). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix* (on its own), standard poems, a mixed bag of more love\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
So we can see that topic modeling can be used to classify the documents themselves into groups of documents most likely to express the same sorts of topics.
Extensions
----------
There are extensions of LDA used in topic modeling that will allow your analysis to go even further.
* Correlated Topic Models: the standard LDA does not estimate the topic correlation as part of the process.
* Supervised LDA: In this scenario, topics can be used for prediction, e.g. the classification of tragedy, comedy etc. (similar to PC regression)
* Structured Topic Models: Here we want to find the relevant covariates that can explain the topics (e.g. year written, author sex, etc.)
* Other: There are still other ways to examine topics.
Topic Model Exercise
--------------------
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
### Movie reviews
Perform a topic model on the [Cornell Movie review data](http://www.cs.cornell.edu/people/pabo/movie-review-data/). I’ve done some initial cleaning (e.g. removing stopwords, punctuation, etc.), and have both a tidy data frame and document term matrix for you to use. The former is provided if you want to do additional processing. But otherwise, just use the topicmodels package and perform your own analysis on the DTM. You can compare to [this result](https://ldavis.cpsievert.me/reviews/reviews.html).
```
load('data/movie_reviews.RData')
library(topicmodels)
```
### Associated Press articles
Do some topic modeling on articles from the Associated Press data from the First Text Retrieval Conference in 1992\. The following will load the DTM, so you are ready to go. See how your result compares with that of [Dave Blei](http://www.cs.columbia.edu/~blei/lda-c/ap-topics.pdf), based on 100 topics.
```
library(topicmodels)
data("AssociatedPress")
```
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/topic-modeling.html |
Word Embeddings
===============
A key idea in the examination of text concerns representing words as numeric quantities. There are a number of ways to go about this, and we’ve actually already done so. In the sentiment analysis section words were given a sentiment score. In topic modeling, words were represented as frequencies across documents. Once we get to a numeric representation, we can then run statistical models.
Consider topic modeling again. We take the document-term matrix and reduce its dimensionality to just a few topics. Now consider a co-occurrence matrix: if there are \(k\) words, it is a \(k \times k\) matrix whose cell \((i, j)\) tells us how frequently word \(i\) occurs with word \(j\). Just like in topic modeling, we could now perform some matrix factorization technique to reduce the dimensionality of the matrix[10](#fn10). Each word is then represented by a vector of numeric values (one per factor). Indeed, this is how some earlier approaches worked, for example, using principal components analysis on the co-occurrence matrix.
Newer techniques such as word2vec and GloVe use neural net approaches to construct word vectors. Applied users do not need to know the details in order to benefit from them. Furthermore, these approaches have been extended to create sentence and other vector representations[11](#fn11). In any case, with vector representations of words we can see how similar they are to each other, and perform other tasks based on that information.
A tired example from the literature is as follows:
\[\mathrm{king} - \mathrm{man} + \mathrm{woman} = \mathrm{queen}\]
So a woman-king is a queen.
Here is another example:
\[\mathrm{Paris} - \mathrm{France} + \mathrm{Germany} = \mathrm{Berlin}\]
Berlin is the Paris of Germany.
The idea is that with vectors created just based on co\-occurrence we can recover things like analogies. Subtracting the **man** vector from the **king** vector and adding **woman**, the most similar word to this would be **queen**. For more on why this works, take a look [here](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html).
Shakespeare example
-------------------
We start with some already tokenized data from the works of Shakespeare. We’ll treat the words as if they just come from one big Shakespeare document, and only consider the words as tokens, as opposed to using n-grams. We create an iterator object for text2vec functions to use, and with that in hand, create the vocabulary, keeping only those that occur at least 5 times. This example generally follows that of the package [vignette](http://text2vec.org/glove.html), which you’ll definitely want to spend some time with.
```
load('data/shakes_words_df_4text2vec.RData')
library(text2vec)
## shakes_words
```
```
shakes_words_ls = list(shakes_words$word)
it = itoken(shakes_words_ls, progressbar = FALSE)
shakes_vocab = create_vocabulary(it)
shakes_vocab = prune_vocabulary(shakes_vocab, term_count_min = 5)
```
Let’s take a look at what we have at this point. We’ve just created word counts; that’s all the vocabulary object is.
```
shakes_vocab
```
```
Number of docs: 1
0 stopwords: ...
ngram_min = 1; ngram_max = 1
Vocabulary:
term term_count doc_count
1: bounties 5 1
2: rag 5 1
3: merchant's 5 1
4: ungovern'd 5 1
5: cozening 5 1
---
9090: of 17784 1
9091: to 20693 1
9092: i 21097 1
9093: and 26032 1
9094: the 28831 1
```
The next step is to create the token co-occurrence matrix (TCM). The definition of whether two words occur together is arbitrary. Should we just look at the previous and next word? Five behind and forward? This choice will definitely affect results, so you will want to play around with it.
```
# maps words to indices
vectorizer = vocab_vectorizer(shakes_vocab)
# use window of 10 for context words
shakes_tcm = create_tcm(it, vectorizer, skip_grams_window = 10)
```
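If you want to gauge how sensitive the results are to that choice, rebuilding the matrix with a different window is a one-line change; this sketch simply reuses the iterator and vectorizer created above.

```
# e.g. a tighter window of 3 words on either side of the target word
shakes_tcm_narrow = create_tcm(it, vectorizer, skip_grams_window = 3)
```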
Note that such a matrix will be extremely sparse. Most words do not go with other words in the grand scheme of things. So when they do, it usually matters.
Now we are ready to create the word vectors based on the GloVe model. Various options exist, so you’ll want to dive into the associated help files and perhaps [the original articles](http://nlp.stanford.edu/projects/glove/) to see how you might play around with it. The following takes roughly a minute or two on my machine. I suggest you start with `n_iter = 10` and/or `convergence_tol = 0.001` to gauge how long you might have to wait.
In this setting, we can think of our word of interest as the target, and any/all other words (within the window) as the context. Word vectors are learned for both.
```
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = shakes_vocab, x_max = 10)
shakes_wv_main = glove$fit_transform(shakes_tcm, n_iter = 1000, convergence_tol = 0.00001)
# dim(shakes_wv_main)
shakes_wv_context = glove$components
# dim(shakes_wv_context)
# Either of the word-vector matrices could work, but the developers of the technique
# suggest the sum/mean may work better
shakes_word_vectors = shakes_wv_main + t(shakes_wv_context)
```
Now we can start to play. The measure of interest in comparing two vectors will be cosine similarity, which, if you’re not familiar, you can think of it similarly to the standard correlation[12](#fn12). Let’s see what is similar to Romeo.
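To be concrete about the measure: for two word vectors \(x\) and \(y\), cosine similarity is just the normalized dot product,

\[\mathrm{cosine}(x, y) = \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert},\]

so it ranges from \(-1\) to \(1\) and equals the Pearson correlation when the vectors are mean-centered.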
```
rom = shakes_word_vectors["romeo", , drop = F]
# ham = shakes_word_vectors["hamlet", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = rom, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
```
| romeo | juliet | tybalt | benvolio | nurse | iago | friar | mercutio | aaron | roderigo |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.78 | 0.72 | 0.65 | 0.64 | 0.63 | 0.61 | 0.6 | 0.6 | 0.59 |
Obviously Romeo is most similar to Romeo, but after that comes the rest of the cast of his play. As this text is somewhat raw, this is likely driven by character names attached to lines of dialogue, so one may want to narrow the window[13](#fn13). Let’s try **love**.
```
love = shakes_word_vectors["love", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = love, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
```
| term | cosine similarity |
| --- | --- |
| love | 1.00 |
| that | 0.80 |
| did | 0.72 |
| not | 0.72 |
| in | 0.72 |
| her | 0.72 |
| but | 0.71 |
| so | 0.71 |
| know | 0.71 |
| do | 0.70 |
The issue here is that love is used so commonly in Shakespeare that it’s most similar to other very common words. What if we take Romeo, subtract his friend Mercutio, and add Nurse? This is similar to the analogy example we had at the start.
```
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["mercutio", , drop = F] +
shakes_word_vectors["nurse", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 10)
```
| term | cosine similarity |
| --- | --- |
| nurse | 0.87 |
| juliet | 0.72 |
| romeo | 0.70 |
It looks like we get Juliet as the most likely word (after the ones we actually used), just as we might have expected. Again, we can think of this as Romeo is to Mercutio as Juliet is to the Nurse. Let’s try another like that.
```
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["juliet", , drop = F] +
shakes_word_vectors["cleopatra", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 3)
```
| term | cosine similarity |
| --- | --- |
| cleopatra | 0.81 |
| romeo | 0.70 |
| antony | 0.70 |
One can play with stuff like this all day. For example, you may find that a Romeo without love is a Tybalt!
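That last claim is easy to check yourself; the following sketch reuses the word vectors from above, though the exact ranking will vary from run to run since GloVe starts from a random initialization.

```
# Romeo with the love vector removed; see which words come out most similar
test = shakes_word_vectors["romeo", , drop = FALSE] -
  shakes_word_vectors["love", , drop = FALSE]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
head(sort(cos_sim_test[, 1], decreasing = TRUE), 5)
```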
Wikipedia
---------
The following shows the code for analyzing text from Wikipedia, and comes directly from the text2vec vignette. Note that this is a relatively large amount of text (100MB), and so will take notably longer to process.
```
text8_file = "data/texts_raw/text8"
# download and unzip the text8 Wikipedia dump (~100MB) if it isn't already present
if (!file.exists(text8_file)) {
  download.file("http://mattmahoney.net/dc/text8.zip", "data/text8.zip")
  unzip("data/text8.zip", files = "text8", exdir = "data/texts_raw/")
}
# the file is one long line of space-separated tokens, so read a single line
wiki = readLines(text8_file, n = 1, warn = FALSE)
tokens = space_tokenizer(wiki)
# same pipeline as before: iterator -> vocabulary -> vectorizer -> TCM -> GloVe
it = itoken(tokens, progressbar = FALSE)
vocab = create_vocabulary(it)
vocab = prune_vocabulary(vocab, term_count_min = 5L)
vectorizer = vocab_vectorizer(vocab)
tcm = create_tcm(it, vectorizer, skip_grams_window = 5L)
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10)
wv_main = glove$fit_transform(tcm, n_iter = 100, convergence_tol = 0.001)
wv_context = glove$components
word_vectors = wv_main + t(wv_context)
```
Let’s try our Berlin example.
```
berlin = word_vectors["paris", , drop = FALSE] -
word_vectors["france", , drop = FALSE] +
word_vectors["germany", , drop = FALSE]
berlin_cos_sim = sim2(x = word_vectors, y = berlin, method = "cosine", norm = "l2")
head(sort(berlin_cos_sim[,1], decreasing = TRUE), 5)
```
```
paris berlin munich germany at
0.7575511 0.7560328 0.6721202 0.6559778 0.6519383
```
Success! Now let’s try the queen example.
```
queen = word_vectors["king", , drop = FALSE] -
word_vectors["man", , drop = FALSE] +
word_vectors["woman", , drop = FALSE]
queen_cos_sim = sim2(x = word_vectors, y = queen, method = "cosine", norm = "l2")
head(sort(queen_cos_sim[,1], decreasing = TRUE), 5)
```
```
king son alexander henry queen
0.8831932 0.7575572 0.7042561 0.6769456 0.6755054
```
Not so much, though it is still a top result. Results are of course highly dependent upon the data and settings you choose, so keep the context in mind when trying this out.
Now that words are vectors, we can use them in any model we want, for example, to predict sentiment. Furthermore, extensions have been made to deal with sentences, paragraphs, and even [lda2vec](https://multithreaded.stitchfix.com/blog/2016/05/27/lda2vec/#topic=38&lambda=1&term=)! In any event, hopefully you now have some idea of what word embeddings are and what they can do for you, and have added another tool to your text analysis toolbox.
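As a rough illustration of that last point, one simple way to get document-level features is to average the word vectors of the words in each document and hand those averages to whatever model you like. The `doc_words` data frame below (one row per word occurrence, with `doc_id` and `word` columns) is a hypothetical stand-in, not an object created earlier in this chapter.

```
# keep only words we actually have vectors for (doc_words is hypothetical)
doc_words = subset(doc_words, word %in% rownames(word_vectors))
# average the 50-dimensional word vectors within each document
doc_vecs = sapply(split(doc_words$word, doc_words$doc_id), function(w) {
  colMeans(word_vectors[w, , drop = FALSE])
})
doc_features = t(doc_vecs)   # one row per document, one column per dimension
# doc_features can now serve as predictors in lm/glm, a classifier, etc.
```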
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/word-embeddings.html |
Word Embeddings
===============
A key idea in the examination of text concerns representing words as numeric quantities. There are a number of ways to go about this, and we’ve actually already done so. In the sentiment analysis section words were given a sentiment score. In topic modeling, words were represented as frequencies across documents. Once we get to a numeric representation, we can then run statistical models.
Consider topic modeling again. We take the document\-term matrix, and reduce the dimensionality of it to just a few topics. Now consider a co\-occurrence matrix, where if there are \\(k\\) words, it is a \\(k\\) x \\(k\\) matrix, where the diagonal values tell us how frequently wordi occurs with wordj. Just like in topic modeling, we could now perform some matrix factorization technique to reduce the dimensionality of the matrix[10](#fn10). Now for each word we have a vector of numeric values (across factors) to represent them. Indeed, this is how some earlier approaches were done, for example, using principal components analysis on the co\-occurrence matrix.
Newer techniques such as word2vec and GloVe use neural net approaches to construct word vectors. The details are not important for applied users to benefit from them. Furthermore, applications have been made to create sentence and other vector representations[11](#fn11). In any case, with vector representations of words we can see how similar they are to each other, and perform other tasks based on that information.
A tired example from the literature is as follows:
\\\[\\mathrm{king \- man \+ woman \= queen}\\]
So a woman\-king is a queen.
Here is another example:
\\\[\\mathrm{Paris \- France \+ Germany \= Berlin}\\]
Berlin is the Paris of Germany.
The idea is that with vectors created just based on co\-occurrence we can recover things like analogies. Subtracting the **man** vector from the **king** vector and adding **woman**, the most similar word to this would be **queen**. For more on why this works, take a look [here](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html).
Shakespeare example
-------------------
We start with some already tokenized data from the works of Shakespeare. We’ll treat the words as if they just come from one big Shakespeare document, and only consider the words as tokens, as opposed to using n\-grams. We create an iterator object for text2vec functions to use, and with that in hand, create the vocabulary, keeping only those that occur at least 5 times. This example generally follows that of the package [vignette](http://text2vec.org/glove.html), which you’ll definitely want to spend some time with.
```
load('data/shakes_words_df_4text2vec.RData')
library(text2vec)
## shakes_words
```
```
shakes_words_ls = list(shakes_words$word)
it = itoken(shakes_words_ls, progressbar = FALSE)
shakes_vocab = create_vocabulary(it)
shakes_vocab = prune_vocabulary(shakes_vocab, term_count_min = 5)
```
Let’s take a look at what we have at this point. We’ve just created word counts, that’s all the vocabulary object is.
```
shakes_vocab
```
```
Number of docs: 1
0 stopwords: ...
ngram_min = 1; ngram_max = 1
Vocabulary:
term term_count doc_count
1: bounties 5 1
2: rag 5 1
3: merchant's 5 1
4: ungovern'd 5 1
5: cozening 5 1
---
9090: of 17784 1
9091: to 20693 1
9092: i 21097 1
9093: and 26032 1
9094: the 28831 1
```
The next step is to create the token co\-occurrence matrix (TCM). The definition of whether two words occur together is arbitrary. Should we just look at previous and next word? Five behind and forward? This will definitely affect results so you will want to play around with it.
```
# maps words to indices
vectorizer = vocab_vectorizer(shakes_vocab)
# use window of 10 for context words
shakes_tcm = create_tcm(it, vectorizer, skip_grams_window = 10)
```
Note that such a matrix will be extremely sparse. Most words do not go with other words in the grand scheme of things. So when they do, it usually matters.
Now we are ready to create the word vectors based on the GloVe model. Various options exist, so you’ll want to dive into the associated help files and perhaps [the original articles](http://nlp.stanford.edu/projects/glove/) to see how you might play around with it. The following takes roughly a minute or two on my machine. I suggest you start with `n_iter = 10` and/or `convergence_tol = 0.001` to gauge how long you might have to wait.
In this setting, we can think of our word of interest as the target, and any/all other words (within the window) as the context. Word vectors are learned for both.
```
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = shakes_vocab, x_max = 10)
shakes_wv_main = glove$fit_transform(shakes_tcm, n_iter = 1000, convergence_tol = 0.00001)
# dim(shakes_wv_main)
shakes_wv_context = glove$components
# dim(shakes_wv_context)
# Either word-vectors matrices could work, but the developers of the technique
# suggest the sum/mean may work better
shakes_word_vectors = shakes_wv_main + t(shakes_wv_context)
```
Now we can start to play. The measure of interest in comparing two vectors will be cosine similarity, which, if you’re not familiar, you can think of it similarly to the standard correlation[12](#fn12). Let’s see what is similar to Romeo.
```
rom = shakes_word_vectors["romeo", , drop = F]
# ham = shakes_word_vectors["hamlet", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = rom, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
```
| romeo | juliet | tybalt | benvolio | nurse | iago | friar | mercutio | aaron | roderigo |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0\.78 | 0\.72 | 0\.65 | 0\.64 | 0\.63 | 0\.61 | 0\.6 | 0\.6 | 0\.59 |
Obviously Romeo is most like Romeo, but after that comes the rest of the crew in the play. As this text is somewhat raw, it is likely due to names associated with lines in the play. As such, one may want to narrow the window[13](#fn13). Let’s try **love**.
```
love = shakes_word_vectors["love", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = love, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
```
| | x |
| --- | --- |
| love | 1\.00 |
| that | 0\.80 |
| did | 0\.72 |
| not | 0\.72 |
| in | 0\.72 |
| her | 0\.72 |
| but | 0\.71 |
| so | 0\.71 |
| know | 0\.71 |
| do | 0\.70 |
The issue here is that love is so commonly used in Shakespeare, it’s most like other very common words. What if we take Romeo, subtract his friend Mercutio, and add Nurse? This is similar to the analogy example we had at the start.
```
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["mercutio", , drop = F] +
shakes_word_vectors["nurse", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 10)
```
| | x |
| --- | --- |
| nurse | 0\.87 |
| juliet | 0\.72 |
| romeo | 0\.70 |
It looks like we get Juliet as the most likely word (after the ones we actually used), just as we might have expected. Again, we can think of this as Romeo is to Mercutio as Juliet is to the Nurse. Let’s try another like that.
```
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["juliet", , drop = F] +
shakes_word_vectors["cleopatra", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 3)
```
| | x |
| --- | --- |
| cleopatra | 0\.81 |
| romeo | 0\.70 |
| antony | 0\.70 |
One can play with stuff like this all day. For example, you may find that a Romeo without love is a Tybalt!
Wikipedia
---------
The following shows the code for analyzing text from Wikipedia, and comes directly from the text2vec vignette. Note that this is a relatively large amount of text (100MB), and so will take notably longer to process.
```
text8_file = "data/texts_raw/text8"
if (!file.exists(text8_file)) {
download.file("http://mattmahoney.net/dc/text8.zip", "data/text8.zip")
unzip("data/text8.zip", files = "text8", exdir = "data/texts_raw/")
}
wiki = readLines(text8_file, n = 1, warn = FALSE)
tokens = space_tokenizer(wiki)
it = itoken(tokens, progressbar = FALSE)
vocab = create_vocabulary(it)
vocab = prune_vocabulary(vocab, term_count_min = 5L)
vectorizer = vocab_vectorizer(vocab)
tcm = create_tcm(it, vectorizer, skip_grams_window = 5L)
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10)
wv_main = glove$fit_transform(tcm, n_iter = 100, convergence_tol = 0.001)
wv_context = glove$components
word_vectors = wv_main + t(wv_context)
```
Let’s try our Berlin example.
```
berlin = word_vectors["paris", , drop = FALSE] -
word_vectors["france", , drop = FALSE] +
word_vectors["germany", , drop = FALSE]
berlin_cos_sim = sim2(x = word_vectors, y = berlin, method = "cosine", norm = "l2")
head(sort(berlin_cos_sim[,1], decreasing = TRUE), 5)
```
```
paris berlin munich germany at
0.7575511 0.7560328 0.6721202 0.6559778 0.6519383
```
Success! Now let’s try the queen example.
```
queen = word_vectors["king", , drop = FALSE] -
word_vectors["man", , drop = FALSE] +
word_vectors["woman", , drop = FALSE]
queen_cos_sim = sim2(x = word_vectors, y = queen, method = "cosine", norm = "l2")
head(sort(queen_cos_sim[,1], decreasing = TRUE), 5)
```
```
king son alexander henry queen
0.8831932 0.7575572 0.7042561 0.6769456 0.6755054
```
Not so much, though it is still a top result. Results are of course highly dependent upon the data and settings you choose, so keep the context in mind when trying this out.
Now that words are vectors, we can use them in any model we want, for example, to predict sentimentality. Furthermore, extensions have been made to deal with sentences, paragraphs, and even [lda2vec](https://multithreaded.stitchfix.com/blog/2016/05/27/lda2vec/#topic=38&lambda=1&term=)! In any event, hopefully you have some idea of what word embeddings are and can do for you, and have added another tool to your text analysis toolbox.
Shakespeare example
-------------------
We start with some already tokenized data from the works of Shakespeare. We’ll treat the words as if they just come from one big Shakespeare document, and only consider the words as tokens, as opposed to using n\-grams. We create an iterator object for text2vec functions to use, and with that in hand, create the vocabulary, keeping only those that occur at least 5 times. This example generally follows that of the package [vignette](http://text2vec.org/glove.html), which you’ll definitely want to spend some time with.
```
load('data/shakes_words_df_4text2vec.RData')
library(text2vec)
## shakes_words
```
```
shakes_words_ls = list(shakes_words$word)
it = itoken(shakes_words_ls, progressbar = FALSE)
shakes_vocab = create_vocabulary(it)
shakes_vocab = prune_vocabulary(shakes_vocab, term_count_min = 5)
```
Let’s take a look at what we have at this point. We’ve just created word counts, that’s all the vocabulary object is.
```
shakes_vocab
```
```
Number of docs: 1
0 stopwords: ...
ngram_min = 1; ngram_max = 1
Vocabulary:
term term_count doc_count
1: bounties 5 1
2: rag 5 1
3: merchant's 5 1
4: ungovern'd 5 1
5: cozening 5 1
---
9090: of 17784 1
9091: to 20693 1
9092: i 21097 1
9093: and 26032 1
9094: the 28831 1
```
The next step is to create the token co\-occurrence matrix (TCM). The definition of whether two words occur together is arbitrary. Should we just look at previous and next word? Five behind and forward? This will definitely affect results so you will want to play around with it.
```
# maps words to indices
vectorizer = vocab_vectorizer(shakes_vocab)
# use window of 10 for context words
shakes_tcm = create_tcm(it, vectorizer, skip_grams_window = 10)
```
Note that such a matrix will be extremely sparse. Most words do not go with other words in the grand scheme of things. So when they do, it usually matters.
Now we are ready to create the word vectors based on the GloVe model. Various options exist, so you’ll want to dive into the associated help files and perhaps [the original articles](http://nlp.stanford.edu/projects/glove/) to see how you might play around with it. The following takes roughly a minute or two on my machine. I suggest you start with `n_iter = 10` and/or `convergence_tol = 0.001` to gauge how long you might have to wait.
In this setting, we can think of our word of interest as the target, and any/all other words (within the window) as the context. Word vectors are learned for both.
```
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = shakes_vocab, x_max = 10)
shakes_wv_main = glove$fit_transform(shakes_tcm, n_iter = 1000, convergence_tol = 0.00001)
# dim(shakes_wv_main)
shakes_wv_context = glove$components
# dim(shakes_wv_context)
# Either word-vectors matrices could work, but the developers of the technique
# suggest the sum/mean may work better
shakes_word_vectors = shakes_wv_main + t(shakes_wv_context)
```
Now we can start to play. The measure of interest in comparing two vectors will be cosine similarity, which, if you’re not familiar, you can think of it similarly to the standard correlation[12](#fn12). Let’s see what is similar to Romeo.
```
rom = shakes_word_vectors["romeo", , drop = F]
# ham = shakes_word_vectors["hamlet", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = rom, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
```
| romeo | juliet | tybalt | benvolio | nurse | iago | friar | mercutio | aaron | roderigo |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0\.78 | 0\.72 | 0\.65 | 0\.64 | 0\.63 | 0\.61 | 0\.6 | 0\.6 | 0\.59 |
Obviously Romeo is most like Romeo, but after that comes the rest of the crew in the play. As this text is somewhat raw, it is likely due to names associated with lines in the play. As such, one may want to narrow the window[13](#fn13). Let’s try **love**.
```
love = shakes_word_vectors["love", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = love, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
```
| | x |
| --- | --- |
| love | 1\.00 |
| that | 0\.80 |
| did | 0\.72 |
| not | 0\.72 |
| in | 0\.72 |
| her | 0\.72 |
| but | 0\.71 |
| so | 0\.71 |
| know | 0\.71 |
| do | 0\.70 |
The issue here is that love is so commonly used in Shakespeare, it’s most like other very common words. What if we take Romeo, subtract his friend Mercutio, and add Nurse? This is similar to the analogy example we had at the start.
```
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["mercutio", , drop = F] +
shakes_word_vectors["nurse", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 10)
```
| | x |
| --- | --- |
| nurse | 0\.87 |
| juliet | 0\.72 |
| romeo | 0\.70 |
It looks like we get Juliet as the most likely word (after the ones we actually used), just as we might have expected. Again, we can think of this as Romeo is to Mercutio as Juliet is to the Nurse. Let’s try another like that.
```
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["juliet", , drop = F] +
shakes_word_vectors["cleopatra", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 3)
```
| | x |
| --- | --- |
| cleopatra | 0\.81 |
| romeo | 0\.70 |
| antony | 0\.70 |
One can play with stuff like this all day. For example, you may find that a Romeo without love is a Tybalt!
Wikipedia
---------
The following shows the code for analyzing text from Wikipedia, and comes directly from the text2vec vignette. Note that this is a relatively large amount of text (100MB), and so will take notably longer to process.
```
text8_file = "data/texts_raw/text8"
if (!file.exists(text8_file)) {
download.file("http://mattmahoney.net/dc/text8.zip", "data/text8.zip")
unzip("data/text8.zip", files = "text8", exdir = "data/texts_raw/")
}
wiki = readLines(text8_file, n = 1, warn = FALSE)
tokens = space_tokenizer(wiki)
it = itoken(tokens, progressbar = FALSE)
vocab = create_vocabulary(it)
vocab = prune_vocabulary(vocab, term_count_min = 5L)
vectorizer = vocab_vectorizer(vocab)
tcm = create_tcm(it, vectorizer, skip_grams_window = 5L)
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10)
wv_main = glove$fit_transform(tcm, n_iter = 100, convergence_tol = 0.001)
wv_context = glove$components
word_vectors = wv_main + t(wv_context)
```
Let’s try our Berlin example.
```
berlin = word_vectors["paris", , drop = FALSE] -
word_vectors["france", , drop = FALSE] +
word_vectors["germany", , drop = FALSE]
berlin_cos_sim = sim2(x = word_vectors, y = berlin, method = "cosine", norm = "l2")
head(sort(berlin_cos_sim[,1], decreasing = TRUE), 5)
```
```
paris berlin munich germany at
0.7575511 0.7560328 0.6721202 0.6559778 0.6519383
```
Success! Now let’s try the queen example.
```
queen = word_vectors["king", , drop = FALSE] -
word_vectors["man", , drop = FALSE] +
word_vectors["woman", , drop = FALSE]
queen_cos_sim = sim2(x = word_vectors, y = queen, method = "cosine", norm = "l2")
head(sort(queen_cos_sim[,1], decreasing = TRUE), 5)
```
```
king son alexander henry queen
0.8831932 0.7575572 0.7042561 0.6769456 0.6755054
```
Not so much, though it is still a top result. Results are of course highly dependent upon the data and settings you choose, so keep the context in mind when trying this out.
Now that words are vectors, we can use them in any model we want, for example, to predict sentimentality. Furthermore, extensions have been made to deal with sentences, paragraphs, and even [lda2vec](https://multithreaded.stitchfix.com/blog/2016/05/27/lda2vec/#topic=38&lambda=1&term=)! In any event, hopefully you have some idea of what word embeddings are and can do for you, and have added another tool to your text analysis toolbox.
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/summary.html |
Summary
=======
It should be clear at this point that text is as amenable to analysis as anything else in statistics. Depending on the goals, the exploration of text can take on one of many forms. In most situations, at least some preprocessing will be required, and often it will be quite an undertaking to make the text ready for analysis. However, this is often rewarded with interesting insights and a better understanding of the data at hand, and it makes possible analyses that human\-powered reading alone could not accomplish.
For more natural language processing tools in R, one should consult the corresponding [task view](https://www.r-pkg.org/ctv/NaturalLanguageProcessing). However, one should be aware that it doesn’t take much to strain one’s computing resources with R’s tools and standard approaches. As an example, the Shakespeare corpus is very small by any standard, and even then certain statistics or topic models will take some time to compute. As such, one should be prepared to also spend time learning ways to make computing more efficient. Luckily, many aspects of the process may be easily distributed or parallelized.
Much natural language processing is actually done with deep learning techniques, which generally require a lot of data, notable computing resources, and copious amounts of fine\-tuning, and often involve optimization toward a specific task. Most of the cutting\-edge work there is done in Python, and as a starting point for more common text\-analytic approaches, you can check out the [Natural Language Toolkit](http://www.nltk.org/book/).
Dealing with text is not always easy, but it’s definitely easier than it ever has been. The number of tools at your disposal is vast, and more are being added all the time. One of the main take home messages is that text analysis can be a lot of fun, so enjoy the process!
Best of luck with your data! \\(\\qquad\\sim\\mathbb{M}\\)
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/shakespeare.html |
Shakespeare Start to Finish
===========================
The following attempts to demonstrate the usual difficulties one encounters dealing with text by procuring and processing the works of Shakespeare. The source is [MIT](http://shakespeare.mit.edu/), which has made the ‘complete’ works available on the web since 1993, plus one additional work from Project Gutenberg. The initial issue is simply getting the works from the web. Subsequently there are metadata, character names, stopwords, etc. to be removed. At that point, we can stem and count the words in each work, which, once complete, leaves us ready for analysis.
The primary packages used are tidytext, stringr, and when things are ready for analysis, quanteda.
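A typical setup chunk for the chapter might look something like the following; it simply loads the packages named above plus rvest, which handles the scraping, and assumes nothing beyond what the chapter itself uses.
```
library(tidyverse)   # general data manipulation
library(stringr)     # string handling
library(rvest)       # scraping the works from the web
library(tidytext)    # tidy text processing
library(quanteda)    # analysis once the text is prepped
```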
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)

# grab every link on the main page
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
  html_nodes('a') %>%
  html_attr('href')

# the plays link to an index page; replacing 'index' with 'full' gives the full-text page
main = works_urls0 %>%
  grep(pattern='index', value=T) %>%
  str_replace_all(pattern='index', replacement='full')

# the poems and other works link directly
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)

works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
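The sketch below is only meant to convey the general idea of such a fix, not the contents of `r/fix_read_html.R`: collect the text nodes yourself and join them with newlines, so text separated only by `<br>` tags doesn’t get run together.
```
# illustrative only -- the chapter sources the actual fix from r/fix_read_html.R
html_text_collapse_sketch = function(x, collapse = "\n") {
  # gather every text node and join with newlines instead of pasting them together
  paste(xml2::xml_text(xml2::xml_find_all(x, ".//text()")), collapse = collapse)
}
```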
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))

source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) # works

works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets, which have a somewhat different structure than the plays: all of their links sit on a single page, the urls take a different form, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
  read_html() %>%
  html_nodes('a') %>%
  html_attr('href')

sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T)  # remove amazon link

# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))

# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)

works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write each text out to its own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
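The write-out step mentioned above isn’t shown in the original chunk; a minimal sketch, assuming the `works` list created earlier and the folder used in the next scene (the file names are illustrative):
```
# write each work to its own text file (file names are illustrative)
for (nm in names(works)) {
  writeLines(works[[nm]], file.path('data/texts_raw/shakes/moby', paste0(nm, '.txt')))
}
```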
### Scene IV. Read text from files
Once the above is done, it doesn’t need to be redone, and we can always retrieve what we need from the saved files. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text, so the column itself is a ‘list\-column’. In other words, we have a 42 x 2 tibble. But to do what we need, we’ll want access to each individual line, and the unnest function unpacks the lines within each title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)

shakes0 =
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # reconstructed step implied by the text: read each file into a list-column of lines
  transmute(id = basename(file), text) %>%
  unnest(text)

save(shakes0, file='data/initial_shakes_dt.RData')

# Alternate that provides for more options
# library(readtext)
# shakes0 =
#   data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
#   mutate(text = map(file, readtext, encoding='UTF8')) %>%
#   unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that will allow us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from “The Two Noble Kinsmen” because, like many of the other Shakespeare versions on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes reproduced printing errors purposely, which would have compounded typical problems. I thought this could have been solved by using the *Complete Works of Shakespeare*, but that download comes as a single file under that one title, meaning one would have to hunt for and delineate each separate work. This might not have been too big of an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)

works_not_included = c("The Phoenix and the Turtle")  # add others if desired

gute0 = gutenberg_works(title %in% works_not_included)

gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
              x = gute,
              y = works_not_included,
              SIMPLIFY = F)

shakes = shakes0 %>%
  bind_rows(gute) %>%
  mutate(id = str_replace_all(id, " |'", '_')) %>%
  mutate(id = str_replace(id, '.txt', '')) %>%
  arrange(id)

# shakes %>% split(.$id)   # inspect

save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
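To make the idea concrete, a stripped-down version of such a function might look like the sketch below; it is not the function in `r/detect_first_act.R`, which handles the poems and a few other cases more carefully.
```
# sketch only: drop everything before the first structural marker of the text proper
detect_first_act_sketch = function(work) {
  start = which(str_detect(work$text, '^ACT I\\b|^PROLOGUE|^Sonnet'))[1]
  if (is.na(start)) return(work)   # leave untouched if no marker is found
  work[start:nrow(work), ]
}
```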
```
source('r/detect_first_act.R')

shakes_trim = shakes %>%
  split(.$id) %>%
  lapply(detect_first_act) %>%
  bind_rows

shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the words *prologue* and *epilogue* as stopwords later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`), which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
  filter(text != '',          # remove empties
         !text %in% titles,   # remove titles
         !str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet')  # remove acts etc.
         )

shakes_trim %>% filter(id=='Romeo_and_Juliet')  # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
  mutate(class = 'Comedy',
         class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
         class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
         class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
         problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
         late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))

save(shakes_types, file='data/shakespeare_classification.RData')  # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that characters with multiple names may be referred to by the first or last name, or, for those with three names, possibly the middle one, e.g. Sir Toby Belch. Others have unwieldy names, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list, like “thou’ldst”, that I found lingering after initial passes. I first started using the works from Gutenberg, and there the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applying the Old English vocabulary to these texts only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now, we have lines not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We’ll also do a little stemming here. I’m getting rid of the suffixes that follow an apostrophe (e.g. ’d, ’st). Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti\_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to any words that have more than two characters, as my inspection suggested these are left\-over suffixes, or otherwise would be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analyses. The function cast\_dfm will create a dfm, or ‘document\-feature matrix’, class object (from quanteda), which is the same thing but acknowledges that this sort of analysis is not specific to words. With word counts in hand, it would be good to save them at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
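If you did decide to drop place names or similar terms, quanteda’s dfm\_remove makes that a one\-liner. The particular terms below are just illustrative, not part of the original analysis.
```
# illustrative only: drop a few hand-picked terms from the DTM
shakes_dtm_reduced = dfm_remove(shakes_dtm, c('rome', 'madam', 'majesty'))
```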
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here mostly to demonstrate how to use quanteda for it, as quanteda can also be used to remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you would only have the stemmed result to work with for the document\-term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will actually stem the DTM directly (i.e. work on the column names) and collapse the word counts accordingly. I show the number of features before and after stemming.
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that, depending on the stemmer, you may see stems such as dai and prai for day and pray. Love occurs 2\.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. Word clouds are among the most useless visual displays imaginable; just because you can make one doesn’t mean you should. If you want to display relative frequency, do so directly.
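A simple bar chart of the top terms does the job; the following is just a minimal sketch using the top10 counts computed above, not a plot from the original document.
```
# a minimal sketch: bar chart of the most frequent terms
library(ggplot2)
data_frame(word = names(top10), count = top10) %>%
  ggplot(aes(x = reorder(word, count), y = count)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = 'Count')
```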
#### Similarity
The quanteda package has some built\-in similarity measures, such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of as similar to the standard correlation (itself available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
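To actually compute it and peek at a corner of the resulting matrix, something like the following works; this is just a sketch, and the heatmap styling used for the visual is omitted.
```
# compute document cosine similarities and inspect a corner of the matrix
sim = textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
round(as.matrix(sim)[1:4, 1:4], 2)
```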
We can already begin to see clusters of documents. For example, the more historical works are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar to standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and the Turtledove, for which there is a funeral. It is actually considered by scholars to be in stark contrast to his other output. The [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself is actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought by some critics to be an inferior work, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With raw texts in hand, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list column (or a list at all), so we’ll convert via the tm package. This more or less defeats the purpose of using quanteda, except that its textstat\_readability function gives us what we want. But I digress.
Unfortunately, the concept of readability is ill\-defined, and as such, there are dozens of measures available, dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade \= more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
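By default textstat\_readability returns the Flesch score; to get the Coleman\-Liau grade discussed here, you request that measure by name. Treat the following as a sketch, and check the exact measure name in your quanteda version.
```
# sketch: request the Coleman-Liau grade explicitly (measure name assumed)
shakes_read_cl = textstat_readability(raw_text_corpus, measure = "Coleman.Liau.grade")
head(shakes_read_cl[order(shakes_read_cl$Coleman.Liau.grade, decreasing = TRUE), ])
```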
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
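As a point of reference for what these measures build on, the raw type\-token ratio can be computed directly from the DTM with quanteda’s ntype and ntoken helpers. This is a sketch, not part of the original analysis.
```
# raw type-token ratio per document: unique terms / total terms
ttr = ntype(shakes_dtm) / ntoken(shakes_dtm)
sort(ttr, decreasing = TRUE)[1:5]
```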
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures, in which longer works are seen as less diverse because tokens accumulate faster than new types as a text goes on.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). A faster implementation can be found in text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* are common as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity, it is a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents (a hand\-computed version is sketched after this list). Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
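As a rough illustration of the lift idea, it can be computed by hand from the fitted model by dividing each topic’s term probabilities by the overall term probabilities. This is only a sketch, assuming the shakes\_10 model and the stemmed shakes\_dtm from above.
```
# lift = p(term | topic) / p(term overall), computed by hand
phi = posterior(shakes_10)$terms               # topics x terms probability matrix
p_w = colSums(shakes_dtm) / sum(shakes_dtm)    # overall term probabilities
lift = sweep(phi, 2, p_w[colnames(phi)], '/')  # align terms by name, then divide
head(sort(lift[1, ], decreasing = TRUE), 10)   # top terms by lift for topic 1
```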
As another approach, consider the saliency and relevance of terms via the LDAvis package. While you can play with the interactive version in the rendered document, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have topic numbers in a different order.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
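The plotting code isn’t shown in the original, but the pieces are straightforward. A minimal base\-R sketch of the same idea, using the document\-topic probabilities and a dendrogram from the cosine distance, might look like the following; it is not the styled visual itself.
```
# sketch: document-topic probabilities and a cosine-distance dendrogram
doc_topic = posterior(shakes_10)$topics       # documents x topics probabilities
heatmap(doc_topic, scale = 'none')            # heatmap of topic probabilities by document
cos_sim = as.matrix(textstat_simil(shakes_dtm, margin = 'documents', method = 'cosine'))
plot(hclust(as.dist(1 - cos_sim)))            # dendrogram from cosine distance
```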
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, traditional classification probably only works for the historical plays, as they cluster together as expected (except for Henry VIII, possibly because it was a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, the standard poems, a mixed bag of more romance\-oriented works and the remaining poems, and then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets, which have a somewhat different structure than the plays: all the links are on a single page, the url takes a different form, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own files. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as you will often need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
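The write\-out of the individual files isn’t shown in the chunk above; the following is a sketch of one way to do it, assuming the same directory and underscore\-based file naming used when the files are read back in later.
```
# sketch: write each work to its own .txt file (directory and naming assumed)
dir.create('data/texts_raw/shakes/moby', recursive = TRUE, showWarnings = FALSE)
purrr::iwalk(works, function(txt, nm) {
  out = file.path('data/texts_raw/shakes/moby', paste0(str_replace_all(nm, " |'", '_'), '.txt'))
  writeLines(txt, out)
})
```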
### Scene IV. Read text from files
Once the above is done, it doesn’t need to be repeated, and we can always read back in what we need. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text, while the column itself is thus a ‘list\-column’. In other words, we have a 42 x 2 tibble. But to do what we need, we’ll want access to each line, and the unnest function unpacks each line within the title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 = 
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # read each file's lines into a list-column
  transmute(id = basename(file), text) %>%
  unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that will allow us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from “The Two Noble Kinsmen” because like many other of Shakespeare’s versions on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes purposely reproduced printing errors, which would have compounded typical problems. I thought it could be solved by using the *Complete Works of Shakespeare*, but that download only comes with the single title, meaning one would have to hunt for and delineate each separate work. This might not have been too big of an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
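The function itself lives in `r/detect_first_act.R`; as a rough, hypothetical sketch of the idea for the plays, it amounts to finding the first line that marks the start of the work proper and keeping everything from there on. The real function also handles the poems and the *Elegy* via the markers described above.
```
# hypothetical sketch of the idea; not the actual detect_first_act()
detect_first_act_sketch = function(work) {
  start = which(str_detect(work$text, '^ACT I\\b|^Sonnet'))[1]
  if (is.na(start)) return(work)   # poems/Elegy use different markers in the real function
  slice(work, start:n())
}
```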
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the word *prologue* and *epilogue* as a stopword later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`) and which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea it to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each works tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the word *prologue* and *epilogue* as a stopword later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`) and which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that some characters with multiple names may be represented (typically) by the first or last name, or in the case of three, the middle, e.g. Sir Toby Belch. Others are still difficultly named e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after intial processsing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list like “thou’ldst” that I found lingering after initial passes. I first started using the works from Gutenberg, and there, the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, the Old English vocabulary applied to these texts it only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now, we have lines not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here. I’m getting rid of suffixes that end with the suffix after an apostrophe. Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use the anti\_join remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that some characters with multiple names may be represented (typically) by the first or last name, or in the case of three, the middle, e.g. Sir Toby Belch. Others are still difficultly named e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after intial processsing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list like “thou’ldst” that I found lingering after initial passes. I first started using the works from Gutenberg, and there, the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, the Old English vocabulary applied to these texts it only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now, we have lines not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here. I’m getting rid of suffixes that end with the suffix after an apostrophe. Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use the anti\_join remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to any words that have more than two characters, as my inspection suggested these are left\-over suffixes, or otherwise would be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analysis. The function cast\_dfm will create a dfm class object, or ‘document\-feature’ matrix class object (from quanteda), which is the same thing but recognizes this sort of stuff is not specific to words. With word counts in hand, would be good save to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here to mostly demonstrate how to use quanteda to do it, as it can also be used to remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you only have the stemmed result to work with for the document term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will actually stem with the DTM (i.e. work on the column names) and collapse the word counts accordingly. I note the difference in words before and after stemming.
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs 2.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. Word clouds are among the most useless visual displays imaginable. Just because you can doesn’t mean you should.
If you want to display relative frequency, do so directly.
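As a minimal sketch of that alternative, here is a plain bar chart of the top features computed above (just base ggplot2; styling is left to taste):
```
library(ggplot2)

top10_df = data.frame(word = names(top10), count = top10)

ggplot(top10_df, aes(x = reorder(word, count), y = count)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = 'Count')
```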
#### Similarity
The quanteda package has some built-in similarity measures, such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of as analogous to the standard correlation (itself available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
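The computation above is left commented out; as a quick sketch (not the author’s plotting code), one could run it and get a rough visual with base R:
```
shakes_sim = textstat_simil(shakes_dtm, margin = 'documents', method = 'cosine')

# darker cells = more similar documents
heatmap(as.matrix(shakes_sim), symm = TRUE,
        col = gray.colors(25, start = .95, end = .1))
```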
We can already begin to see the clusters of documents. For example, the more historical are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar than standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and Turtledove, for which there is a funeral. It actually is considered by scholars to be in stark contrast to his other output. [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself is actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought to be an inferior work by the Bard by some critics, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With raw texts, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list column, or a list at all, so we’ll convert via the tm package. This more or less defeats the purpose of using quanteda, except that its textstat_readability function gives us what we want, but I digress.
Unfortunately, the concept of readability is ill-defined, and as such, there are dozens of measures available dating back nearly 75 years. The following is based on the Coleman-Liau grade score (higher grade = more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures, namely that longer works are scored as less diverse, since tokens accumulate faster than new types as a text gets longer.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). A faster implementation can be found in text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* are common as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity; a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
As another approach, consider the saliency and relevance of terms via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may present the topics in a different order.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
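A rough sketch of the underlying pieces (not the original plotting code): pull the estimated document-topic probabilities from the model and pass them to a base heatmap. Note that heatmap() clusters with Euclidean distance by default, whereas the figure described here uses cosine distance.
```
doc_topics = posterior(shakes_10)$topics   # documents x topics probability matrix

heatmap(doc_topics, scale = 'none',
        col = gray.colors(25, start = .95, end = .1))
```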
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, the traditional classification really only seems to hold for the historical works, which cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
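A sketch of how such class-level averages might be computed (this assumes the shakes_types classification saved earlier is loaded, and is not necessarily how the author did it):
```
library(tibble); library(tidyr); library(dplyr)

doc_topic_probs = posterior(shakes_10)$topics %>%
  as.data.frame() %>%
  rownames_to_column('title') %>%
  gather(key = 'topic', value = 'prob', -title)

doc_topic_probs %>%
  left_join(shakes_types, by = 'title') %>%
  group_by(class, topic) %>%
  summarise(mean_prob = mean(prob)) %>%
  arrange(class, desc(mean_prob))
```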
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/shakespeare.html |
Shakespeare Start to Finish
===========================
The following attempts to demonstrate the usual difficulties one encounters dealing with text by procuring and processing the works of Shakespeare. The source is [MIT](http://shakespeare.mit.edu/), which has made the ‘complete’ works available on the web since 1993, plus one other work from Gutenberg. The initial issue is simply getting the works from the web. Subsequently there is metadata, character names, stopwords, etc. to be removed. At that point, we can stem and count the words in each work, which, once complete, puts us at the point where we are ready for analysis.
The primary packages used are tidytext, stringr, and when things are ready for analysis, quanteda.
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
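The helper itself lives in `r/fix_read_html.R` in the repository. As a minimal sketch of the idea (the actual function may differ), one can grab every text node separately and rejoin them with explicit line breaks:
```
# illustrative only; named _sketch to distinguish it from the sourced helper
html_text_collapse_sketch = function(x, collapse = '\n') {
  nodes = xml2::xml_find_all(x, './/text()')                  # every text node, before or after a <br>
  paste(xml2::xml_text(nodes, trim = TRUE), collapse = collapse)
}
```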
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets. The Sonnets have a bit of a different structure than the plays. All links are in a single page, with a different form for the url, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
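The write-out step isn’t shown above. A sketch of one way to do it, with the directory and file naming assumed to match what gets read back in later:
```
dir.create('data/texts_raw/shakes/moby', recursive = TRUE, showWarnings = FALSE)

purrr::iwalk(works, function(text, title) {
  fname = paste0('data/texts_raw/shakes/moby/',
                 str_replace_all(title, " |'", '_'), '.txt')   # e.g. Romeo_and_Juliet.txt
  write_lines(text, fname)
})
```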
### Scene IV. Read text from files
After the above is done, it’s not required to redo, so we can always get what we need. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text of a work, and the column itself is thus a ‘list-column’. In other words, we have a 42 x 2 data frame. But to do what we need, we’ll want to have access to each line, and the unnest function unpacks each line within each title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 =
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # read each file's lines as a list-column (see the readtext alternative below)
  transmute(id = basename(file), text) %>%
  unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that will allow us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from "The Two Noble Kinsmen" because, like many of the other Shakespeare versions on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes purposely reproduced printing errors, which would have compounded the typical problems. I thought this could be solved by using the *Complete Works of Shakespeare*, but that download only came with the single title, meaning one would have to hunt for and delineate each separate work. This might not have been too big an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, took a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
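As a rough sketch of the idea (the real `r/detect_first_act.R` handles more cases, including the poems, where the marker is the author’s name or initials; the markers below are my assumptions):
```
detect_first_act_sketch = function(work) {
  start = which(str_detect(work$text, '^ACT I\\b|^PROLOGUE|^Sonnet'))[1]
  if (is.na(start)) return(work)        # no marker found: leave the work untouched
  slice(work, start:n())                # drop everything before the first marker
}
```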
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the words *prologue* and *epilogue* as stopwords later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`), which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that some characters with multiple names may be represented (typically) by the first or last name, or in the case of three names, the middle, e.g. Sir Toby Belch. Others have unwieldy names, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some of the text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list, like “thou’ldst”, that I found lingering after initial passes. I first started using the works from Gutenberg, and there the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applying the Old English vocabulary to these texts only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now we have lines, not words. We can use the tidytext function unnest_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here, stripping the endings that follow an apostrophe (e.g. ’d, ’st). Many of the remaining words will either be stopwords or will need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me_st_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
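For reference, a purely illustrative sketch of the kind of thing me_st_stem might do; the real `r/st_stem.R` is likely more careful (e.g. checking the stripped form against a word list), so treat this only as the shape of the idea:
```
# illustrative heuristic only, not the sourced helper
me_st_stem_sketch = function(word) {
  # strip an archaic -st/-est ending when a reasonable stem remains
  if (str_detect(word, '[a-z]{3,}(e)?st$')) {
    str_replace(word, '(e)?st$', '')
  } else {
    word
  }
}

# vapply(c('mayst', 'criedst', 'best'), me_st_stem_sketch, 'a')
```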
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to words that have more than two characters, as my inspection suggested the shorter ones are left-over suffixes or would be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document-term matrix (DTM) that will be used in other analyses. The cast_dfm function will create a dfm, or ‘document-feature matrix’, class object (from quanteda), which is the same thing but recognizes that this sort of approach is not specific to words. With word counts in hand, it would be good to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here to mostly demonstrate how to use quanteda to do it, as it can also be used to remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you only have the stemmed result to work with for the document term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will actually stem with the DTM (i.e. work on the column names) and collapse the word counts accordingly. I note the difference in words before and after stemming.
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs 2.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
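That ratio is easy enough to verify from the counts above:
```
top10[1] / top10[2]   # love vs. heart, roughly 2.15
```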
The following is a word cloud. Word clouds are among the most useless visual displays imaginable. Just because you can doesn’t mean you should.
If you want to display relative frequency, do so directly.
#### Similarity
The quanteda package has some built-in similarity measures, such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of as analogous to the standard correlation (itself available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
We can already begin to see the clusters of documents. For example, the more historical are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar than standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and Turtledove, for which there is a funeral. It actually is considered by scholars to be in stark contrast to his other output. [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself is actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought to be an inferior work by the Bard by some critics, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With raw texts, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list\-column, or a list at all, so we’ll convert via the tm package. This more or less defeats the purpose of using quanteda, except that its textstat\_readability function gives us what we want. But I digress.
Unfortunately, the concept of readability is ill\-defined, and as such, there are dozens of measures available dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade \= more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
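Since the discussion focuses on the Coleman\-Liau grade, here is a sketch of restricting the call to that measure; the exact measure name (e.g. `Coleman.Liau.grade`) varies a bit across quanteda versions, so treat it as illustrative.
```
# Sketch: request only the Coleman-Liau grade measure
shakes_read_cl = textstat_readability(raw_text_corpus, measure = "Coleman.Liau.grade")
head(shakes_read_cl)
```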
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
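As a quick sanity check on what such measures are based on, the plain type\-token ratio can be computed directly from the DTM with quanteda’s ntype and ntoken helpers (a sketch, using the stemmed shakes\_dtm).
```
# Sketch: the raw type-token ratio per document, for comparison with textstat_lexdiv
ttr = ntype(shakes_dtm) / ntoken(shakes_dtm)
head(sort(ttr, decreasing = TRUE))
```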
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures, whereby longer works are seen as less diverse, since tokens accumulate faster than new types as a text gets longer.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). Faster implementation can be found with text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
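For the text2vec route mentioned above, the following is only a rough sketch rather than code from the original analysis; the class and fit\_transform arguments reflect text2vec’s LDA implementation and may differ across versions.
```
# Sketch: a faster LDA via text2vec (WarpLDA); the dfm is a sparse Matrix,
# so it can be coerced and passed directly
library(text2vec)
lda_t2v = text2vec::LDA$new(n_topics = 10)
doc_topic = lda_t2v$fit_transform(as(shakes_dtm, "dgCMatrix"), n_iter = 1000)
```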
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* are common as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity; a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
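A sketch of what those stm\-based labels might look like in practice (not the original code; stm fits its own model rather than reusing the topicmodels fit, and the conversion step assumes quanteda’s convert):
```
# Sketch: stm's labelTopics() reports prob, FREX, lift, and score labels,
# but it needs a model fit with stm() rather than topicmodels::LDA()
library(stm)
shakes_stm_in = convert(shakes_dtm, to = "stm")
shakes_stm = stm(shakes_stm_in$documents, shakes_stm_in$vocab, K = 10,
                 seed = 1234, verbose = FALSE)
labelTopics(shakes_stm, n = 10)
```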
As another approach, consider the saliency and relevance of terms via the LDAvis package. While you can play with the interactive version, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have topic numbers in a different order.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
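A rough sketch of the ingredients behind a figure like that (not the exact code used for the original visual): document\-topic probabilities from the fitted model, a cosine\-based distance, and a dendrogram from hierarchical clustering.
```
# Sketch: document-topic probabilities plus a cosine-distance dendrogram
doc_topics = topicmodels::posterior(shakes_10)$topics   # documents x topics
cos_sim = textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
doc_clust = hclust(as.dist(1 - as.matrix(cos_sim)))     # cosine distance = 1 - similarity
plot(doc_clust)                                         # dendrogram of the works
heatmap(doc_topics, Rowv = as.dendrogram(doc_clust), Colv = NA, scale = "none")
```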
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, the traditional classification probably only holds for the historical works, as they cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets. The Sonnets have a bit of a different structure than the plays. All links are in a single page, with a different form for the url, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
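The write\-out step itself isn’t shown above, so here is a minimal sketch of one way to do it; the output directory and the underscore naming are assumptions chosen to match the file names used later.
```
# Sketch: write each work to its own text file, one element per line
dir.create('data/texts_raw/shakes/moby', recursive = TRUE, showWarnings = FALSE)
for (nm in names(works)) {
  out = paste0('data/texts_raw/shakes/moby/',
               str_replace_all(nm, " |'", '_'), '.txt')
  write_lines(unlist(works[[nm]]), out)
}
```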
### Scene IV. Read text from files
Once the above is done, it doesn’t need to be redone, so we can always get what we need from the saved files. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text, while the column itself is thus a ‘list\-column’. In other words, we have a 42 x 2 matrix. But to do what we need, we’ll want to have access to each line, and the unnest function unpacks each line within the title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 =
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # read each file's lines into a list-column
  transmute(id = basename(file), text) %>%
  unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that will allow us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from *The Two Noble Kinsmen* because, like many of the other Shakespeare versions on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes purposely reproduced printing errors, which would have compounded typical problems. I thought this could be solved by using the *Complete Works of Shakespeare*, but the download only came with that title, meaning one would have to hunt for and delineate each separate work. This might not have been too big of an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the words *prologue* and *epilogue* as stopwords later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`), which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that some characters with multiple names may be represented (typically) by the first or last name, or, in the case of three\-part names, the middle, e.g. Sir Toby Belch. Other names are more unwieldy, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list like “thou’ldst” that I found lingering after initial passes. I first started using the works from Gutenberg, and there, the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applying the Old English vocabulary to these texts only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now, we have lines not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We will also be doing a little stemming here. I’m getting rid of the contracted endings that come after an apostrophe (e.g. ’d, ’st). Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti\_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to words that have more than two characters, as my inspection suggested the shorter ones are left\-over suffixes or would be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analyses. The function cast\_dfm will create a dfm, or ‘document\-feature matrix’, class object (from quanteda), which is the same thing but recognizes that this sort of thing is not specific to words. With word counts in hand, it would be good to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
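If you did decide to drop such terms, here is a small sketch of two options (the term list is purely illustrative): treat them as extra stopwords at the count stage, or remove them from the DTM directly with dfm\_remove.
```
# Sketch: drop place names and similar terms, from the counts or from the dfm
extra_stops = c('rome', 'madam', 'woman', 'man', 'majesty')  # illustrative only
term_counts_trimmed = term_counts %>% filter(!word %in% extra_stops)
# or, on the document-feature matrix:
# shakes_dtm = dfm_remove(shakes_dtm, extra_stops)
```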
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also be a reflection of the limitation of several of the measures, such that longer works are seen as less diverse, as tokens are added more so than types the longer the text goes.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs 2\.15 times as much as the most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. They are among the most useless visual displays imaginable. Just because you can, doesn’t mean you should.
If you want to display relative frequency do so.
#### Similarity
The quanteda package has some built in similarity measures such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of similarly to the standard correlation (also available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
We can already begin to see the clusters of documents. For example, the more historical are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar than standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and Turtledove, for which there is a funeral. It actually is considered by scholars to be in stark contrast to his other output. [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself is actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought to be an inferior work by the Bard by some critics, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before, I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With raw texts, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list column or a list at all, so we’ll convert it via the tm package, which more or less defeats the purpose of using the quanteda package, except that the textstat\_readability function gives us what we want, but I digress.
Unfortunately, the concept of readability is ill\-defined, and as such, there are dozens of measures available dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade \= more difficult). The conclusion here is first, Shakespeare isn’t exactly a difficult read, and two, the poems may be more so relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also be a reflection of the limitation of several of the measures, such that longer works are seen as less diverse, as tokens are added more so than types the longer the text goes.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for topic model. That didn’t take too much did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). Faster implementation can be found with text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* are common as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity, it is a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
As another approach, consider the saliency and relevance of term via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have topic numbers in a different order.
Your browser does not support iframes.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). Faster implementation can be found with text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* are common as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity, it is a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
As another approach, consider the saliency and relevance of term via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have topic numbers in a different order.
Your browser does not support iframes.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
| Data Visualization |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/shakespeare.html |
Shakespeare Start to Finish
===========================
The following attempts to demonstrate the usual difficulties one encounters dealing with text by procuring and processing the works of Shakespeare. The source is [MIT](http://shakespeare.mit.edu/), which has made the ‘complete’ works available on the web since 1993, plus one other work from Gutenberg. The initial issue is simply getting the works from the web. Subsequently there is metadata, character names, stopwords, etc. to be removed. At that point, we can stem and count the words in each work, which, when complete, puts us at the point where we are ready for analysis.
The primary packages used are tidytext, stringr, and when things are ready for analysis, quanteda.
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets. The Sonnets have a bit of a different structure than the plays. All links are in a single page, with a different form for the url, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
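The write-out step itself isn’t shown above. A minimal sketch, assuming each element of `works` is a character vector of lines and writing to the same directory that gets read back in later, might look like the following (the exact file naming on disk is an assumption):

```
# sketch: write each work to its own text file, named after the list element
dir.create('data/texts_raw/shakes/moby', recursive = TRUE, showWarnings = FALSE)
purrr::iwalk(works, function(text, title) {
  readr::write_lines(text, file.path('data/texts_raw/shakes/moby', paste0(title, '.txt')))
})
```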
### Scene IV. Read text from files
Once the above is done, it doesn’t need to be redone; we can always reload what we need. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text, so the column itself is a ‘list\-column’. In other words, we have a 42 x 2 tibble. But to do what we need, we’ll want access to each line, and the unnest function unpacks each line within the title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 =
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # read each file into a list-column of lines
  transmute(id = basename(file), text) %>%
  unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that allows us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from “The Two Noble Kinsmen” because, like many other versions of Shakespeare on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes reproduced printing errors on purpose, which would have compounded typical problems. I thought this could be solved by using the *Complete Works of Shakespeare*, but that download only comes with that one title, meaning one would have to hunt for and delineate each separate work. This might not have been too big of an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
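The actual function handles the various cases; purely as a rough sketch of the general idea, and not the function itself, one might write something like the following, where the marker patterns are assumptions:

```
# rough sketch only: drop everything before the first marker line
# (the real detect_first_act also handles the poems/Elegy via a name or initials)
drop_preamble = function(work) {
  first = which(str_detect(work$text, '^ACT I|^Sonnet'))[1]
  if (is.na(first)) return(work)   # no marker found; leave untouched
  slice(work, first:n())           # keep everything from the marker on
}
```

In any case, the actual function is sourced and applied below.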
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the words *prologue* and *epilogue* as stopwords later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`), which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that some characters with multiple names may be represented (typically) by the first or last name, or, in the case of three names, the middle one, e.g. Sir Toby Belch. Others have unwieldy names, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, & Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list like “thou’ldst” that I found lingering after initial passes. I first started using the works from Gutenberg, and there, the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applying the Old English vocabulary to these texts only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now, we have lines not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here. I’m getting rid of the suffixes that follow an apostrophe (e.g. ’d, ’st). Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti\_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to words that have more than two characters, as my inspection suggested the shorter ones are mostly left\-over suffixes, or would otherwise be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dost', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analyses. The function cast\_dfm will create a dfm class object, or ‘document\-feature’ matrix class object (from quanteda), which is the same thing but acknowledges that this sort of analysis is not specific to words. With word counts in hand, it would be good to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here mostly to demonstrate how to use quanteda for it, as quanteda can also be used to remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you would only have the stemmed result to work with for the document\-term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will actually stem the DTM directly (i.e. work on the column names) and collapse the word counts accordingly. I note the difference in the number of words before and after stemming.
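quanteda’s word\-stemming functions use the Snowball (Porter) stemmer via the SnowballC package, so you can get a quick feel for what will happen to individual words before committing; a small check (the expected result is shown as a comment):

```
# what the Porter stemmer does to a few of the words mentioned above
SnowballC::wordStem(c('eye', 'eyes', 'war', 'wars', 'warring'))
# should return "ey" "ey" "war" "war" "war"
```

Applying it to the whole DTM: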
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs 2\.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. They are among the most useless visual displays imaginable. Just because you can, doesn’t mean you should.
If you want to display relative frequency, do so directly.
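For example, a bare\-bones alternative that shows relative frequency directly, using the counts we already have (just a sketch):

```
# relative frequency of the top 10 terms, as a proportion of all tokens in the DTM
rel_freq = topfeatures(shakes_dtm, 10) / sum(shakes_dtm)
barplot(sort(rel_freq), horiz = TRUE, las = 1)
```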
#### Similarity
The quanteda package has some built\-in similarity measures such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of much like a standard correlation (correlation is also available as an option). I display it visually to get a better sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
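If you want to reproduce something like the visual yourself, the similarity object converts to a plain matrix easily enough; a quick sketch (the actual figure was presumably a nicer heatmap):

```
# document-by-document cosine similarity as a matrix, with a quick base-R heatmap
shakes_sim = textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
shakes_sim_mat = as.matrix(shakes_sim)
heatmap(shakes_sim_mat, symm = TRUE)
```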
We can already begin to see the clusters of documents. For example, the more historical works are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar to standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and the Turtledove, for which there is a funeral. It is considered by scholars to be in stark contrast to his other output. The [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself was actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought by some critics to be an inferior work by the Bard, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With raw texts in hand, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list\-column (or a list at all), so we’ll convert via the tm package. This more or less defeats the purpose of using the quanteda package, except that the textstat\_readability function gives us what we want, but I digress.
Unfortunately, the concept of readability is ill\-defined, and as such, there are dozens of measures available dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade \= more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
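To request the Coleman\-Liau grade specifically rather than the default measure, something like the following should work (the exact measure name here is an assumption about quanteda’s naming):

```
# just the Coleman-Liau grade, sorted from most to least 'difficult'
shakes_cl = textstat_readability(raw_text_corpus, measure = "Coleman.Liau.grade")
head(shakes_cl[order(shakes_cl$Coleman.Liau.grade, decreasing = TRUE), ])
```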
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
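For intuition, the simplest version of these measures, the raw type\-token ratio, can be computed directly from the DTM; a sketch:

```
# unique terms per document over total terms per document
ttr = ntype(shakes_dtm) / ntoken(shakes_dtm)
head(sort(ttr, decreasing = TRUE))
```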
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures, which see longer works as less diverse because tokens accumulate faster than new types the longer a text runs.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). A faster implementation can be found in the text2vec package.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, and *god* and *heart* are common as well, but we could have guessed this just by looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance a term’s probability of occurrence within a topic against its *exclusivity*, or how likely the term is to occur in only one particular topic. See the stm package and the corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20) (a sketch of the stm route follows the list):
* FREX: **FR**equency and **EX**clusivity; a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
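As a sketch of how you might get those labels yourself, the stm route requires refitting with stm rather than reusing the topicmodels result, so the topics will not line up exactly; something like the following, with the settings here being assumptions rather than what was actually used:

```
# refit with stm just to get labelTopics output (FREX, lift, score, prob)
library(stm)
shakes_stm_in = convert(shakes_dtm, to = "stm")
shakes_stm = stm(shakes_stm_in$documents, shakes_stm_in$vocab, K = 10,
                 init.type = "LDA", seed = 1234, verbose = FALSE)
labelTopics(shakes_stm, n = 10)
```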
As another approach, consider the saliency and relevance of terms via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have the topic numbers in a different order.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
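A rough sketch of the clustering component, using the document\-topic probabilities from the fitted model and a cosine\-based distance (the linkage and plotting details here are assumptions, not necessarily what produced the figure):

```
# cosine distance between documents in topic-probability space, then hierarchical clustering
doc_topic_probs = posterior(shakes_10)$topics
normed = doc_topic_probs / sqrt(rowSums(doc_topic_probs^2))   # unit-length rows
cos_dist = as.dist(1 - tcrossprod(normed))                    # 1 - cosine similarity
plot(hclust(cos_dist), cex = .6)
```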
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, traditional classification probably only works for the historical works, as they cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, the standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
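A sketch of how those class averages could be computed, assuming the document names from the model match the `title` column in the `shakes_types` classification saved earlier:

```
# average topic probability within each traditional class
doc_topic_probs = posterior(shakes_10)$topics
class_topic_means = data.frame(title = rownames(doc_topic_probs), doc_topic_probs,
                               check.names = FALSE) %>%
  left_join(shakes_types, by = 'title') %>%
  group_by(class) %>%
  summarise_if(is.numeric, mean)
```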
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
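To give a sense of what the fix does, here is a minimal sketch of the idea (an assumption on my part; the actual html\_text\_collapse sourced below from `r/fix_read_html.R` is rentrop’s version): replace each `<br>` with a literal newline before extracting the text.
```
# rough sketch only (assumed), using xml2, which rvest builds on
html_text_collapse_sketch = function(doc) {
  brs = xml2::xml_find_all(doc, './/br')
  xml2::xml_add_sibling(brs, 'p', '\n')   # put a newline-bearing node next to each <br>
  xml2::xml_remove(brs)                   # drop the original <br> nodes
  xml2::xml_text(doc)
}
```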
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets. The Sonnets have a bit of a different structure than the plays. All links are in a single page, with a different form for the url, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
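The write\-out step itself isn’t shown above; one simple way to do it (a sketch, assuming the same directory the next scene reads from) is to loop over the works and write each to its own text file:
```
# sketch: write each work to its own file in the directory read back in Scene IV
dir.create('data/texts_raw/shakes/moby', recursive = TRUE, showWarnings = FALSE)
for (title in names(works)) {
  write_lines(works[[title]], file.path('data/texts_raw/shakes/moby', paste0(title, '.txt')))
}
```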
### Scene IV. Read text from files
Once the above is done, it doesn’t need to be repeated, and we can always reload what we need. I’ll start with the raw text as files, since that is one of the more common ways one deals with documents. When the text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column holds an entire work, so the column is a ‘list\-column’; in other words, we have a 42 x 2 tibble. To do what we need, though, we’ll want access to each line, and the unnest function unpacks the lines within each title. You can inspect the first few lines of the result with head(shakes0).
```
library(tidyverse); library(stringr)
shakes0 =
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # read each file into a list-column of lines
  transmute(id = basename(file), text) %>%
  unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to gather texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that allows us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but passed on *The Two Noble Kinsmen* because, like many of the other Shakespeare versions on Gutenberg, it’s basically written in a different language, and on *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes purposely reproduced printing errors, which would have compounded the typical problems. I thought this could be solved by using the *Complete Works of Shakespeare*, but that download only comes under the one title, meaning one would have to hunt for and delineate each separate work. This might not have been too big an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct for dealing with a single text, but I was initially dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward once you realize the text we want will follow something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
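To give a sense of what such a function involves, here is a rough sketch of the idea (an assumption on my part; the author’s actual detect\_first\_act, linked above, also handles the poems and the Elegy):
```
# rough sketch only (assumed): find the first line marking the start of the
# work proper and drop everything before it
detect_first_act_sketch = function(work) {
  start = which(str_detect(work$text, '^ACT I\\b|^Sonnet|^SCENE I\\b'))[1]
  if (is.na(start)) return(work)   # no marker found; leave the work as is
  work[start:nrow(work), ]
}
```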
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the words *prologue* and *epilogue* as stopwords later. While some texts have a line that says only that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`), which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
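A quick sanity check on the result might just tabulate the classes (a sketch; output not shown):
```
# how many works ended up in each class
shakes_types %>% count(class)
```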
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that characters with multiple names may be referred to (typically) by the first or last name, or, for a three-part name, the middle one, e.g. Sir Toby Belch. Others are awkwardly named, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping the unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords as well, since they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list, like “thou’ldst”, that I found lingering after initial passes. I first started using the works from Gutenberg, and there the Old English might have had some utility; as the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applying the Old English vocabulary to these texts only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now we have lines, not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here, getting rid of the endings that come after an apostrophe (along with the apostrophe itself). Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti\_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to words that have more than two characters, as my inspection suggested the shorter strings are mostly left-over suffixes, or would otherwise be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analyses. The function cast\_dfm will create a dfm, or ‘document\-feature matrix’, class object (from quanteda), which is the same thing but acknowledges that this sort of structure is not specific to words. With word counts in hand, it would be good to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here mostly to demonstrate how to use quanteda for it, as quanteda can also remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you would only have the stemmed result to work with for the document\-term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will stem the DTM directly (i.e. work on the column names) and collapse the word counts accordingly. I note the difference in the number of words before and after stemming.
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs 2.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. Word clouds are among the most useless visual displays imaginable. Just because you can, doesn’t mean you should.
If you want to display relative frequency, do so directly.
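For instance, a plain bar chart of the top terms shows the counts themselves rather than encoding them in word sizes. A minimal sketch using the top10 object from above (the plotting choices are mine, not the author’s; ggplot2 comes along with the already loaded tidyverse):
```
# show the counts directly instead of a word cloud
data.frame(word = names(top10), count = as.numeric(top10)) %>%
  ggplot(aes(x = reorder(word, count), y = count)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = 'Count')
```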
#### Similarity
The quanteda package has some built\-in similarity measures, such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of as analogous to a standard correlation (also available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
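One quick way to get such a view is to convert the similarity object to a matrix and feed it to a heatmap; a minimal sketch (the base R heatmap here is my choice, not necessarily how the author’s figure was made):
```
# cosine similarity between documents, shown as a quick-and-dirty heatmap
sim = textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
heatmap(as.matrix(sim), symm = TRUE, margins = c(10, 10))
```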
We can already begin to see the clusters of documents. For example, the more historical are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar than standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and Turtledove, for which there is a funeral. It actually is considered by scholars to be in stark contrast to his other output. [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself is actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought to be an inferior work by the Bard by some critics, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already have them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With the raw texts in hand, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list\-column, or a list at all, so we’ll convert via the tm package. That more or less defeats the purpose of using the quanteda package, except that its textstat\_readability function gives us what we want. But I digress.
Unfortunately, the concept of readability is ill\-defined, and as such there are dozens of measures available, dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade = more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
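Since the discussion centers on the Coleman\-Liau grade, you would likely request that measure explicitly rather than the default; a sketch along those lines (the exact measure name is an assumption on my part and should be checked against textstat\_readability’s documented options):
```
# sketch: ask for the Coleman-Liau grade specifically and peek at the results
shakes_read = textstat_readability(raw_text_corpus, measure = "Coleman.Liau.grade")
head(shakes_read)
```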
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
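To make the type\-token idea concrete, the raw ratio can be computed directly from the DTM with quanteda’s ntype and ntoken helpers; a quick sketch (textstat\_lexdiv reports this same quantity as TTR among its measures):
```
# raw type-token ratio per document: unique terms over total terms
ttr = ntype(shakes_dtm) / ntoken(shakes_dtm)
head(sort(ttr, decreasing = TRUE))
```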
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures, which tend to score longer works as less diverse, since tokens accumulate faster than new types as a text gets longer.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). A faster implementation can be found in text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, *god* and *heart* are common as well, but we could have guessed this just looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document, and term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and corresponding labelTopics function as a way to get several alternatives. As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity; a weighted harmonic mean of a term’s within\-topic rank in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
As another approach, consider the saliency and relevance of terms via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this is created separately from the model, and may have the topic numbers in a different order.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
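The heatmap is built from the estimated document\-topic probabilities, which you can pull from the fitted model with posterior. A rough sketch of a comparable, if less polished, view (base R’s heatmap clusters with Euclidean rather than cosine distance, so the dendrogram will differ from the author’s):
```
doc_topic_probs = posterior(shakes_10)$topics   # documents x topics probability matrix
heatmap(doc_topic_probs, margins = c(5, 12))    # quick heatmap with default clustering
```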
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, the traditional classification probably only holds for the historical works, as they cluster together as expected (except for Henry VIII, possibly because it was a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, the standard poems, a mixed bag of more romance-oriented works and the remaining poems, and then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
As another approach, consider the saliency and relevance of term via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have topic numbers in a different order.
Your browser does not support iframes.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
A couple things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, traditional classification really probably only works for the historical works, as they cluster together as expected (except for Henry the VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
| Text Analysis |
m-clark.github.io | https://m-clark.github.io/text-analysis-with-R/shakespeare.html |
Shakespeare Start to Finish
===========================
The following attempts to demonstrate the usual difficulties one encounters dealing with text by procuring and processing the works of Shakespeare. The primary source is [MIT](http://shakespeare.mit.edu/), which has made the ‘complete’ works available on the web since 1993; one additional work comes from Project Gutenberg. The initial issue is simply getting the works from the web. Subsequently there is metadata, character names, stopwords, etc. to be removed. At that point, we can stem and count the words in each work, which leaves us ready for analysis.
The primary packages used are tidytext, stringr, and when things are ready for analysis, quanteda.
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
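For reference, the workaround is along these lines (a minimal sketch of the idea, not the exact contents of `r/fix_read_html.R`): collect the individual text nodes and paste them back together with newlines, instead of letting html_text run them together.
```
library(xml2)

# collapse an html document to text without losing <br> line breaks:
# grab each text node individually, then join them with newlines
html_text_collapse = function(x, collapse = '\n', trim = TRUE) {
  text_nodes = xml_find_all(x, './/text()')
  paste(xml_text(text_nodes, trim = trim), collapse = collapse)
}
```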
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors",
                 "Cymbeline", "Love's Labour's Lost", "Measure for Measure",
                 "The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
                 "Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
                 "The Tempest", "Troilus and Cressida", "Twelfth Night",
                 "The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1",
                 "King Henry IV Part 2", "Henry V", "Henry VI Part 1",
                 "Henry VI Part 2", "Henry VI Part 3", "Henry VIII",
                 "King John", "Richard II", "Richard III",
                 "Antony and Cleopatra", "Coriolanus", "Hamlet",
                 "Julius Caesar", "King Lear", "Macbeth",
                 "Othello", "Romeo and Juliet", "Timon of Athens",
                 "Titus Andronicus", "Sonnets", "A Lover's Complaint",
                 "The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets, which have a somewhat different structure than the plays. All the links sit on a single page, the url takes a different form, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out each text to its own file. When dealing with text, you’ll regularly want to save intermediate stages, as you will often need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
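The write-out step isn’t shown above. A minimal sketch of it might look like the following, assuming the target directory is the one read from in the next scene (iwalk and write_lines are my choices here, not code from the original script):
```
library(tidyverse); library(stringr)

# write each work to its own plain-text file, named after the work
dir.create('data/texts_raw/shakes/moby/', recursive = TRUE, showWarnings = FALSE)

iwalk(works, function(text, title) {
  file_name = paste0(str_replace_all(title, " |'", '_'), '.txt')
  write_lines(text, paste0('data/texts_raw/shakes/moby/', file_name))
})
```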
### Scene IV. Read text from files
Once the above is done, it doesn’t need to be redone, and we can always get what we need from the saved files. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element of the text column holds the entire text of a work, so the column itself is a ‘list\-column’. In other words, we have a 42 x 2 tibble. But to do what we need, we’ll want access to each line, and the unnest function unpacks the lines within each title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 =
  data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
  mutate(text = map(file, read_lines)) %>%   # read each file into a list-column of lines
  transmute(id = basename(file), text) %>%
  unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that allows us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from *The Two Noble Kinsmen* because, like many of the Shakespeare texts on Gutenberg, it’s essentially written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes purposely reproduced printing errors, which would have compounded the typical problems. I thought this could be solved by using the *Complete Works of Shakespeare*, but that download comes only under the single title, meaning one would have to hunt for and delineate each separate work. This might not have been too big an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources: a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea is to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each work’s tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
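That function isn’t reproduced in the text, but the gist is to find a marker line and drop everything before it. A rough sketch of the idea (my own illustration, not the actual contents of `r/detect_first_act.R`, which is evidently a bit smarter, for example keeping the prologue that precedes `ACT I` in Romeo and Juliet):
```
# find the first line that marks the start of the actual text and drop
# everything before it; if no marker is found, return the work unchanged
detect_first_act = function(work) {
  start = which(str_detect(work$text, '^ACT I\\b|^Sonnet|^SONNET'))[1]
  if (is.na(start)) return(work)
  slice(work, start:n())
}
```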
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the words *prologue* and *epilogue* as stopwords later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`), which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that some characters with multiple names may be represented (typically) by the first or last name, or, in the case of three names, the middle one, e.g. Sir Toby Belch. Others are just awkwardly named, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list like “thou’ldst” that I found lingering after initial passes. I first started using the works from Gutenberg, and there the Old English might have had some utility. As the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applying the Old English vocabulary to these texts only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now, we have lines not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr, but works on different tokens, e.g. words, sentences, or paragraphs. Note that by default, the function will make all words lower case to make matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here. I’m stripping the endings that follow an apostrophe (e.g. the ’d and ’st endings). Many of the remaining words will either be stopwords or need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti\_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
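The custom stemmer itself isn’t shown in the text. As a toy version of the sort of thing `r/st_stem.R` might do (my illustration only, deliberately minimal), consider:
```
# crude middle/modern English stemmer: strip an -st ending left over after
# the apostrophe endings were removed, e.g. 'criedst' -> 'cried'
me_st_stem = function(word) {
  str_replace(word, 'edst$', 'ed')
}

me_st_stem('criedst')  # "cried"
```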
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to words that have more than two characters, as inspection suggested the shorter ones are left\-over suffixes, or would otherwise be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could still maybe add things to this list of additional fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that and also do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analyses. The function cast\_dfm will create a dfm, or ‘document\-feature matrix’, class object (from quanteda), which is the same thing but acknowledges that this sort of approach is not specific to words. With word counts in hand, it would be good to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
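If you did want to treat such words as stopwords too, it’s just another anti_join against a small custom list. A minimal sketch (the particular words are only examples):
```
# optional: drop place names and forms of address as well
extra_stops = data_frame(word = c('rome', 'madam', 'woman', 'man', 'majesty'))

shakes_words_no_places = shakes_words %>%
  anti_join(extra_stops, by = 'word')
```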
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here mostly to demonstrate how to use quanteda for it, as quanteda can also be used to remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you only have the stemmed result to work with for the document\-term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will actually stem the DTM directly (i.e. work on the column names) and collapse the word counts accordingly. Note the difference in the number of features before and after stemming.
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs 2\.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. They are among the most useless visual displays imaginable. Just because you can, doesn’t mean you should.
If you want to display relative frequency, do so directly.
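For example, a plain bar chart of the top features conveys the relative frequencies at a glance. A minimal sketch using the top10 object from above:
```
library(ggplot2)

# bar chart of the most frequent (stemmed) terms
data_frame(term = names(top10), count = top10) %>%
  ggplot(aes(x = reorder(term, count), y = count)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = 'Count')
```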
#### Similarity
The quanteda package has some built in similarity measures such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of similarly to the standard correlation (also available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
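To work with the similarities directly, the result can be coerced to a matrix, which is also a convenient form for plotting. A quick sketch (the figure in the text may have been produced differently):
```
shakes_sim = textstat_simil(shakes_dtm, margin = 'documents', method = 'cosine')
sim_mat = as.matrix(shakes_sim)

# a quick base-R look at the document-by-document cosine similarities
heatmap(sim_mat, symm = TRUE)
```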
We can already begin to see the clusters of documents. For example, the more historical are the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar than standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and Turtledove, for which there is a funeral. It actually is considered by scholars to be in stark contrast to his other output. [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) itself is actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought to be an inferior work by the Bard by some critics, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With raw texts, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list\-column, or a list at all, so we’ll convert via the tm package. This more or less defeats the purpose of using quanteda, except that its textstat\_readability function gives us what we want. But I digress.
Unfortunately, the concept of readability is ill\-defined, and as such, there are dozens of measures available dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade \= more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
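textstat_readability computes a batch of measures by default. Since the discussion centers on the Coleman-Liau grade, you can also request just that one; 'Coleman.Liau.grade' is the measure name I believe quanteda uses, so adjust if your version differs:
```
# restrict the output to the Coleman-Liau grade score
shakes_read_cl = textstat_readability(raw_text_corpus, measure = 'Coleman.Liau.grade')
head(shakes_read_cl)
```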
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
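As a point of reference, the raw type-token ratio described above can be computed directly from the dfm with quanteda’s ntype and ntoken helpers:
```
# unique terms per document relative to total terms: the plain type-token ratio
ttr = ntype(shakes_dtm) / ntoken(shakes_dtm)
sort(ttr, decreasing = TRUE)[1:5]
```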
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures: longer works tend to look less diverse, because tokens accumulate faster than types as a text goes on.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). A faster implementation can be found with text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, and *god* and *heart* are common as well, but we could have guessed this just from how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance a term’s probability of occurrence within a document against its *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and its corresponding labelTopics function as a way to get several alternatives (a sketch of that route follows the list below). As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity, a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
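For reference, the stm route looks roughly like the following. This is a sketch rather than the code behind the results shown (stm fits its own model, so its topics will not match the LDA topics exactly):
```
library(stm)

# convert the quanteda dfm to stm's input format and fit a 10-topic model
shakes_stm_input = convert(shakes_dtm, to = 'stm')

shakes_stm = stm(documents = shakes_stm_input$documents,
                 vocab = shakes_stm_input$vocab,
                 K = 10,
                 verbose = FALSE)

# prob, frex, lift, and score rankings for each topic
labelTopics(shakes_stm, n = 10)
```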
As another approach, consider the saliency and relevance of terms via the LDAvis package; it’s probably easiest to [open it separately](vis/index.html). Note that this has to be done separately from the model, and may have topic numbers in a different order.
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
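The quantities behind a plot like this come straight from the fitted model. A minimal sketch of pulling them out (the actual figure also involves the clustering and class annotation, which are not shown here):
```
# per-document topic probabilities (documents x topics)
doc_topic_probs = posterior(shakes_10)$topics
round(doc_topic_probs['Hamlet', ], 2)

# a quick-and-dirty heatmap of the same matrix
heatmap(doc_topic_probs, scale = 'none')
```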
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, the traditional classification probably only works for the historical works, as they cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**
ACT I. Scrape MIT and Gutenberg Shakespeare
-------------------------------------------
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors"
"Cymbeline", "Love's Labour's Lost", "Measure for Measure"
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream"
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew"
"The Tempest", "Troilus and Cressida", "Twelfth Night"
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1"
"King Henry IV Part 2", "Henry V", "Henry VI Part 1"
"Henry VI Part 2", "Henry VI Part 3", "Henry VIII"
"King John", "Richard II", "Richard III"
"Antony and Cleopatra", "Coriolanus", "Hamlet"
"Julius Caesar", "King Lear", "Macbeth"
"Othello", "Romeo and Juliet", "Timon of Athens"
"Titus Andronicus", "Sonnets", "A Lover's Complaint"
"The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets. The Sonnets have a bit of a different structure than the plays. All links are in a single page, with a different form for the url, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
### Scene IV. Read text from files
After the above is done, it’s not required to redo, so we can always get what we need. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text, while the column itself is thus a ‘list\-column’. In other words, we have a 42 x 2 matrix. But to do what we need, we’ll want to have access to each line, and the unnest function unpacks each line within the title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 =
data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
transmute(id = basename(file), text) %>%
unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that will allow us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from “The Two Noble Kinsmen” because like many other of Shakespeare’s versions on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes reproduced printing errors purposely, which would have compounded typical problems. I thought it could have been solved by using the *Complete Works of Shakespeare* but the download only came with that title, meaning one would have to hunt for and delineate each separate work. This might not have been too big of an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources, a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
### Scene I. Scrape main works
Initially we must scrape the web to get the documents we need. The rvest package will be used as follows.
* Start with the url of the site
* Get the links off that page to serve as base urls for the works
* Scrape the document for each url
* Deal with the collection of Sonnets separately
* Write out results
```
library(rvest); library(tidyverse); library(stringr)
page0 = read_html('http://shakespeare.mit.edu/')
works_urls0 = page0 %>%
html_nodes('a') %>%
html_attr('href')
main = works_urls0 %>%
grep(pattern='index', value=T) %>%
str_replace_all(pattern='index', replacement='full')
other = works_urls0[!grepl(works_urls0, pattern='index|edu|org|news')]
works_urls = c(main, other)
works_urls[1:3]
```
Now we just paste the main site url to the work urls and download them. Here is where we come across our first snag. The html\_text function has what I would call a bug but what the author feels is a feature. [Basically, it ignores line breaks of the form `<br>` in certain situations](https://github.com/hadley/rvest/issues/175). This means it will smash text together that shouldn’t be, thereby making *any* analysis of it fairly useless[14](#fn14). Luckily, [@rentrop](https://github.com/rentrop) provided a solution, which is in `r/fix_read_html.R`.
```
works0 = lapply(works_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/', x)))
source('r/fix_read_html.R')
html_text_collapse(works0[[1]]) #works
works = lapply(works0, html_text_collapse)
names(works) = c("All's Well That Ends Well", "As You Like It", "Comedy of Errors"
"Cymbeline", "Love's Labour's Lost", "Measure for Measure"
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream"
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew"
"The Tempest", "Troilus and Cressida", "Twelfth Night"
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV Part 1"
"King Henry IV Part 2", "Henry V", "Henry VI Part 1"
"Henry VI Part 2", "Henry VI Part 3", "Henry VIII"
"King John", "Richard II", "Richard III"
"Antony and Cleopatra", "Coriolanus", "Hamlet"
"Julius Caesar", "King Lear", "Macbeth"
"Othello", "Romeo and Juliet", "Timon of Athens"
"Titus Andronicus", "Sonnets", "A Lover's Complaint"
"The Rape of Lucrece", "Venus and Adonis", "Elegy")
```
### Scene II. Sonnets
We now hit a slight nuisance with the Sonnets. The Sonnets have a bit of a different structure than the plays. All links are in a single page, with a different form for the url, and each sonnet has its own page.
```
sonnet_urls = paste0('http://shakespeare.mit.edu/', grep(works_urls0, pattern='sonnet', value=T)) %>%
read_html() %>%
html_nodes('a') %>%
html_attr('href')
sonnet_urls = grep(sonnet_urls, pattern = 'sonnet', value=T) # remove amazon link
# read the texts
sonnet0 = purrr::map(sonnet_urls, function(x) read_html(paste0('http://shakespeare.mit.edu/Poetry/', x)))
# collapse to one 'Sonnets' work
sonnet = sapply(sonnet0, html_text_collapse)
works$Sonnets = sonnet
```
### Scene III. Save and write out
Now we can save our results so we won’t have to repeat any of the previous scraping. We want to save the main text object as an RData file, and write out the texts to their own file. When dealing with text, you’ll regularly want to save stages so you can avoid repeating what you don’t have to, as often you will need to go back after discovering new issues further down the line.
```
save(works, file='data/texts_raw/shakes/moby_from_web.RData')
```
### Scene IV. Read text from files
After the above is done, it’s not required to redo, so we can always get what we need. I’ll start with the raw text as files, as that is one of the more common ways one deals with documents. When text is nice and clean, this can be fairly straightforward.
The function at the end comes from the tidyr package. Up to that line, each element in the text column is the entire text, while the column itself is thus a ‘list\-column’. In other words, we have a 42 x 2 matrix. But to do what we need, we’ll want to have access to each line, and the unnest function unpacks each line within the title. The first few lines of the result are shown after.
```
library(tidyverse); library(stringr)
shakes0 =
data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
transmute(id = basename(file), text) %>%
unnest(text)
save(shakes0, file='data/initial_shakes_dt.RData')
# Alternate that provides for more options
# library(readtext)
# shakes0 =
# data_frame(file = dir('data/texts_raw/shakes/moby/', full.names = TRUE)) %>%
# mutate(text = map(file, readtext, encoding='UTF8')) %>%
# unnest(text)
```
### Scene V. Add additional works
It is typical to be gathering texts from multiple sources. In this case, we’ll get *The Phoenix and the Turtle* from the Project Gutenberg website. There is an R package that will allow us to work directly with the site, making the process straightforward[15](#fn15). I also considered two other works, but I refrained from “The Two Noble Kinsmen” because like many other of Shakespeare’s versions on Gutenberg, it’s basically written in a different language. I also refrained from *The Passionate Pilgrim* because it’s mostly not Shakespeare.
When first doing this project, I actually started with Gutenberg, but it became a notable PITA. The texts were inconsistent in source, and sometimes reproduced printing errors purposely, which would have compounded typical problems. I thought it could have been solved by using the *Complete Works of Shakespeare* but the download only came with that title, meaning one would have to hunt for and delineate each separate work. This might not have been too big of an issue, except that there is no table of contents, nor consistent naming of titles across different printings. The MIT approach, on the other hand, was a few lines of code. This represents a common issue in text analysis when dealing with sources, a different option may save a lot of time in the end.
The following code could be more succinct to deal with one text, but I initially was dealing with multiple works, so I’ve left it in that mode. In the end, we’ll have a tibble with an id column for the file/work name, and another column that contains the lines of text.
```
library(gutenbergr)
works_not_included = c("The Phoenix and the Turtle") # add others if desired
gute0 = gutenberg_works(title %in% works_not_included)
gute = lapply(gute0$gutenberg_id, gutenberg_download)
gute = mapply(function(x, y) mutate(x, id=y) %>% select(-gutenberg_id),
x=gute,
y=works_not_included,
SIMPLIFY=F)
shakes = shakes0 %>%
bind_rows(gute) %>%
mutate(id = str_replace_all(id, " |'", '_')) %>%
mutate(id = str_replace(id, '.txt', '')) %>%
arrange(id)
# shakes %>% split(.$id) # inspect
save(shakes, file='data/texts_raw/shakes/shakes_df.RData')
```
ACT II. Preliminary Cleaning
----------------------------
If you think we’re even remotely getting close to being ready for analysis, I say Ha! to you. Our journey has only just begun (cue the Carpenters).
Now we can start thinking about prepping the data for eventual analysis. One of the nice things about having the data in a tidy format is that we can use string functionality over the column of text in a simple fashion.
### Scene I. Remove initial text/metadata
First on our to\-do list is to get rid of all the preliminary text of titles, authorship, and similar. This is fairly straightforward when you realize the text we want will be associated with something like `ACT I`, or in the case of the Sonnets, the word `Sonnet`. So, the idea it to drop all text up to those points. I’ve created a [function](https://github.com/m-clark/text-analysis-with-R/blob/master/r/detect_first_act.R) that will do that, and then just apply it to each works tibble[16](#fn16). For the poems and *A Funeral Elegy for Master William Peter*, we look instead for the line where his name or initials start the line.
```
source('r/detect_first_act.R')
shakes_trim = shakes %>%
split(.$id) %>%
lapply(detect_first_act) %>%
bind_rows
shakes %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet Romeo and Juliet: Entire Play
2 Romeo_and_Juliet " "
3 Romeo_and_Juliet ""
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet Romeo and Juliet
```
```
shakes_trim %>% filter(id=='Romeo_and_Juliet') %>% head
```
```
# A tibble: 6 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet ""
2 Romeo_and_Juliet ""
3 Romeo_and_Juliet PROLOGUE
4 Romeo_and_Juliet ""
5 Romeo_and_Juliet ""
6 Romeo_and_Juliet ""
```
### Scene II. Miscellaneous removal
Next, we’ll want to remove empty rows, any remaining titles, lines that denote the act or scene, and other stuff. I’m going to remove the word *prologue* and *epilogue* as a stopword later. While some texts have a line that just says that (`PROLOGUE`), others have text that describes the scene (`Prologue. Blah blah`) and which I’ve decided to keep. As such, we just need the word itself gone.
```
titles = c("A Lover's Complaint", "All's Well That Ends Well", "As You Like It", "The Comedy of Errors",
"Cymbeline", "Love's Labour's Lost", "Measure for Measure",
"The Merry Wives of Windsor", "The Merchant of Venice", "A Midsummer Night's Dream",
"Much Ado about Nothing", "Pericles Prince of Tyre", "The Taming of the Shrew",
"The Tempest", "Troilus and Cressida", "Twelfth Night",
"The Two Gentlemen of Verona", "The Winter's Tale", "King Henry IV, Part 1",
"King Henry IV, Part 2", "Henry V", "Henry VI, Part 1",
"Henry VI, Part 2", "Henry VI, Part 3", "Henry VIII",
"King John", "Richard II", "Richard III",
"Antony and Cleopatra", "Coriolanus", "Hamlet",
"Julius Caesar", "King Lear", "Macbeth",
"Othello", "Romeo and Juliet", "Timon of Athens",
"Titus Andronicus", "Sonnets",
"The Rape of Lucrece", "Venus and Adonis", "A Funeral Elegy", "The Phoenix and the Turtle")
shakes_trim = shakes_trim %>%
filter(text != '', # remove empties
!text %in% titles, # remove titles
!str_detect(text, '^ACT|^SCENE|^Enter|^Exit|^Exeunt|^Sonnet') # remove acts etc.
)
shakes_trim %>% filter(id=='Romeo_and_Juliet') # we'll get prologue later
```
```
# A tibble: 3,992 x 2
id text
<chr> <chr>
1 Romeo_and_Juliet PROLOGUE
2 Romeo_and_Juliet Two households, both alike in dignity,
3 Romeo_and_Juliet In fair Verona, where we lay our scene,
4 Romeo_and_Juliet From ancient grudge break to new mutiny,
5 Romeo_and_Juliet Where civil blood makes civil hands unclean.
6 Romeo_and_Juliet From forth the fatal loins of these two foes
7 Romeo_and_Juliet A pair of star-cross'd lovers take their life;
8 Romeo_and_Juliet Whose misadventured piteous overthrows
9 Romeo_and_Juliet Do with their death bury their parents' strife.
10 Romeo_and_Juliet The fearful passage of their death-mark'd love,
# ... with 3,982 more rows
```
### Scene III. Classification of works
While we’re at it, we can save the classical (sometimes arbitrary) classifications of Shakespeare’s works for later comparison to what we’ll get in our analyses. We’ll save them to call as needed.
```
shakes_types = data_frame(title=unique(shakes_trim$id)) %>%
mutate(class = 'Comedy',
class = if_else(str_detect(title, pattern='Adonis|Lucrece|Complaint|Turtle|Pilgrim|Sonnet|Elegy'), 'Poem', class),
class = if_else(str_detect(title, pattern='Henry|Richard|John'), 'History', class),
class = if_else(str_detect(title, pattern='Troilus|Coriolanus|Titus|Romeo|Timon|Julius|Macbeth|Hamlet|Othello|Antony|Cymbeline|Lear'), 'Tragedy', class),
problem = if_else(str_detect(title, pattern='Measure|Merchant|^All|Troilus|Timon|Passion'), 'Problem', 'Not'),
late_romance = if_else(str_detect(title, pattern='Cymbeline|Kinsmen|Pericles|Winter|Tempest'), 'Late', 'Other'))
save(shakes_types, file='data/shakespeare_classification.RData') # save for later
```
ACT III. Stop words
-------------------
As we’ve noted before, we’ll want to get rid of stop words, things like articles, possessive pronouns, and other very common words. In this case, we also want to include character names. However, the big wrinkle here is that this is not English as currently spoken, so we need to remove ‘ye’, ‘thee’, ‘thine’ etc. In addition, there are things that need to be replaced, like o’er to over, which may then also be removed. In short, this is not so straightforward.
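To make the replacement idea concrete, here is a minimal sketch, not the actual processing used later in the chapter, of how a small lookup of archaic forms could be swapped out with stringr before any removal; the specific pairs are illustrative only.
```
library(stringr)

# hypothetical lookup of archaic forms to modern equivalents (illustrative only)
archaic_fixes = c("o'er" = "over", "ne'er" = "never", "'tis" = "it is")

# each named pattern is replaced in turn
str_replace_all("'tis ne'er o'er till it is o'er", archaic_fixes)
```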
### Scene I. Character names
We’ll get the list of character names from [opensourceshakespeare.org](http://opensourceshakespeare.org/) via rvest, but I added some from the poems and others that still came through the processing one way or another, e.g. abbreviated names.
```
shakes_char_url = 'https://www.opensourceshakespeare.org/views/plays/characters/chardisplay.php'
page0 = read_html(shakes_char_url)
tabs = page0 %>% html_table()
shakes_char = tabs[[2]][-(1:2), c(1,3,5)] # remove header and phantom columns
colnames(shakes_char) = c('Nspeeches', 'Character', 'Play')
shakes_char = shakes_char %>%
distinct(Character,.keep_all=T)
save(shakes_char, file='data/shakespeare_characters.RData')
```
A new snag is that characters with multiple names may be represented by (typically) the first or last name, or, in the case of three names, the middle one, e.g. Sir Toby Belch. Others are awkwardly named, e.g. RICHARD PLANTAGENET (DUKE OF GLOUCESTER). The following should capture everything by splitting the names on spaces, removing parentheses, and keeping the unique terms.
```
# remove paren and split
chars = shakes_char$Character
chars = str_replace_all(chars, '\\(|\\)', '')
chars = str_split(chars, ' ') %>%
unlist
# these were found after initial processing
chars_other = c('enobarbus', 'marcius', 'katharina', 'clarence','pyramus',
'andrew', 'arcite', 'perithous', 'hippolita', 'schoolmaster',
'cressid', 'diomed', 'kate', 'titinius', 'Palamon', 'Tarquin',
'lucrece', 'isidore', 'tom', 'thisbe', 'paul',
'aemelia', 'sycorax', 'montague', 'capulet', 'collatinus')
chars = unique(c(chars, chars_other))
chars = chars[chars != '']
sample(chars)[1:3]
```
```
[1] "Children" "Dionyza" "Aaron"
```
### Scene II. Old, Middle, \& Modern English
While Shakespeare is considered [Early Modern English](https://en.wikipedia.org/wiki/Early_Modern_English), some text may be more historical, so I include Middle and Old English stopwords, as they were readily available from the cltk Python module ([link](https://github.com/cltk/cltk)). I also added some things to the modern English list, like “thou’ldst”, that I found lingering after initial passes. I first started using the works from Gutenberg, and there the Old English might have had some utility; as the texts there were inconsistently translated and otherwise problematic, I abandoned using them. Here, applied to these texts, the Old English vocabulary only removes ‘wit’, so I refrain from using it.
```
# old and me from python cltk module;
# em from http://earlymodernconversions.com/wp-content/uploads/2013/12/stopwords.txt;
# I also added some to me
old_stops0 = read_lines('data/old_english_stop_words.txt')
# sort(old_stops0)
old_stops = data_frame(word=str_conv(old_stops0, 'UTF8'),
lexicon = 'cltk')
me_stops0 = read_lines('data/middle_english_stop_words')
# sort(me_stops0)
me_stops = data_frame(word=str_conv(me_stops0, 'UTF8'),
lexicon = 'cltk')
em_stops0 = read_lines('data/early_modern_english_stop_words.txt')
# sort(em_stops0)
em_stops = data_frame(word=str_conv(em_stops0, 'UTF8'),
lexicon = 'emc')
```
### Scene III. Remove stopwords
We’re now ready to start removing words. However, right now we have lines, not words. We can use the tidytext function unnest\_tokens, which is like unnest from tidyr but works on text tokens, e.g. words, sentences, or paragraphs. Note that by default the function will make all words lower case, which makes matching more efficient.
```
library(tidytext)
shakes_words = shakes_trim %>%
unnest_tokens(word, text, token='words')
save(shakes_words, file='data/shakes_words_df_4text2vec.RData')
```
We also will be doing a little stemming here. I’m getting rid of the suffixes that follow an apostrophe (e.g. ’d, ’st). Many of the remaining words will either be stopwords or will need to be further stemmed later. I also created a middle/modern English stemmer for words that are not caught otherwise (me\_st\_stem). Again, this is the sort of thing you discover after initial passes (e.g. ‘criedst’). After that, we can use anti\_join to remove the stopwords.
```
source('r/st_stem.R')
shakes_words = shakes_words %>%
mutate(word = str_trim(word), # remove possible whitespace
word = str_replace(word, "'er$|'d$|'t$|'ld$|'rt$|'st$|'dst$", ''), # remove me style endings
word = str_replace_all(word, "[0-9]", ''), # remove sonnet numbers
word = vapply(word, me_st_stem, 'a')) %>%
anti_join(em_stops) %>%
anti_join(me_stops) %>%
anti_join(data_frame(word=str_to_lower(c(chars, 'prologue', 'epilogue')))) %>%
anti_join(data_frame(word=str_to_lower(paste0(chars, "'s")))) %>% # remove possessive names
anti_join(stop_words)
```
As before, you should do a couple spot checks.
```
any(shakes_words$word == 'romeo')
any(shakes_words$word == 'prologue')
any(shakes_words$word == 'mayst')
```
```
[1] FALSE
[1] FALSE
[1] FALSE
```
ACT IV. Other fixes
-------------------
Now we’re ready to finally do the word counts. Just kidding! There is *still* work to do for the remainder, and you’ll continue to spot things after runs. One remaining issue is the words that end in ‘st’ and ‘est’, and others that are not consistently spelled or otherwise need to be dealt with. For example, ‘crost’ will not be stemmed to ‘cross’, as ‘crossed’ would be. Finally, I limit the result to any words that have more than two characters, as my inspection suggested these are left\-over suffixes, or otherwise would be considered stopwords anyway.
```
# porter should catch remaining 'est'
add_a = c('mongst', 'gainst') # words to add a to
shakes_words = shakes_words %>%
mutate(word = if_else(word=='honour', 'honor', word),
word = if_else(word=='durst', 'dare', word),
word = if_else(word=='wast', 'was', word),
word = if_else(word=='dust', 'does', word),
word = if_else(word=='curst', 'cursed', word),
word = if_else(word=='blest', 'blessed', word),
word = if_else(word=='crost', 'crossed', word),
word = if_else(word=='accurst', 'accursed', word),
word = if_else(word %in% add_a,
paste0('a', word),
word),
word = str_replace(word, "'s$", ''), # strip remaining possessives
word = if_else(str_detect(word, pattern="o'er"), # change o'er over
str_replace(word, "'", 'v'),
word)) %>%
filter(!(id=='Antony_and_Cleopatra' & word == 'mark')) %>% # mark here is almost exclusively the character name
filter(str_count(word)>2)
```
At this point we could probably still add things to this list of fixes, but I think it’s time to actually start playing with the data.
ACT V. Fun stuff
----------------
We are finally ready to get to the fun stuff. Finally! And now things get easy.
### Scene I. Count the terms
We can get term counts with standard dplyr approaches, and packages like tidytext will take that further and do some other things we might want. Specifically, we can use the latter to create the document\-term matrix (DTM) that will be used in other analyses. The function cast\_dfm will create a dfm, or ‘document\-feature matrix’, class object (from quanteda), which is essentially the same thing but acknowledges that such matrices are not specific to words. With word counts in hand, it would be good to save at this point, since they’ll serve as the basis for other processing.
```
term_counts = shakes_words %>%
group_by(id, word) %>%
count
term_counts %>%
arrange(desc(n))
library(quanteda)
shakes_dtm = term_counts %>%
cast_dfm(document=id, term=word, value=n)
## save(shakes_words, term_counts, shakes_dtm, file='data/shakes_words_df.RData')
```
```
# A tibble: 115,954 x 3
# Groups: id, word [115,954]
id word n
<chr> <chr> <int>
1 Sonnets love 195
2 The_Two_Gentlemen_of_Verona love 171
3 Romeo_and_Juliet love 150
4 As_You_Like_It love 118
5 Love_s_Labour_s_Lost love 118
6 A_Midsummer_Night_s_Dream love 114
7 Richard_III god 111
8 Titus_Andronicus rome 103
9 Much_Ado_about_Nothing love 92
10 Coriolanus rome 90
# ... with 115,944 more rows
```
Now things are looking like Shakespeare, with love for everyone[17](#fn17). You’ll notice I’ve kept place names such as Rome, but this might be something you’d prefer to remove. Other candidates would be madam, woman, man, majesty (as in ‘his/her’) etc. This sort of thing is up to the researcher.
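If you did want to drop such terms, a minimal sketch along the lines of the earlier stopword removal might look like the following; the extra words listed here are purely illustrative, not a recommendation, and dplyr is assumed to be loaded as before.
```
# hypothetical additional removals; adjust to taste
extra_stops = c('rome', 'madam', 'woman', 'man', 'majesty')

term_counts_reduced = term_counts %>%
  filter(!word %in% extra_stops)
```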
### Scene II. Stemming
Now we’ll stem the words. This is actually more of a pre\-processing step, one that we’d do along with (and typically after) stopword removal. I do it here mostly to demonstrate how to use quanteda, which can also be used to remove stopwords and do many of the other things we did with tidytext.
Stemming will make words like eye and eyes just *ey*, or convert war, wars and warring to *war*. In other words, it will reduce variations of a word to a common root form, or ‘word stem’. We could have done this in a step prior to counting the terms, but then you only have the stemmed result to work with for the document term matrix from then on. Depending on your situation, you may or may not want to stem, or maybe you’d want to compare results. The quanteda package will actually stem with the DTM (i.e. work on the column names) and collapse the word counts accordingly. I note the difference in words before and after stemming.
```
shakes_dtm
ncol(shakes_dtm)
shakes_dtm = shakes_dtm %>%
dfm_wordstem()
shakes_dtm
ncol(shakes_dtm)
```
```
Document-feature matrix of: 43 documents, 22,052 features (87.8% sparse).
[1] 22052
Document-feature matrix of: 43 documents, 13,325 features (83.8% sparse).
[1] 13325
```
The result is notably fewer columns, which will speed up any analysis, as well as produce a slightly more dense matrix.
### Scene III. Exploration
#### Top features
Let’s start looking at the data more intently. The following shows the 10 most common words and their respective counts. This is also an easy way to find candidates to add to the stopword list. Note that dai and prai are stems for day and pray. Love occurs roughly 2\.15 times as often as the next most frequent word!
```
top10 = topfeatures(shakes_dtm, 10)
top10
```
```
love heart eye god day hand hear live death night
2918 1359 1300 1284 1229 1226 1043 1015 1010 1001
```
The following is a word cloud. Word clouds are among the most useless visual displays imaginable; just because you can make one doesn’t mean you should.
If you want to display relative frequency, do so directly.
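For instance, a minimal sketch of a relative frequency display with ggplot2, using the top10 object computed above; the proportion here is taken relative to all tokens in the stemmed DTM.
```
library(ggplot2)

# relative frequency of the top 10 terms, as a proportion of all tokens in the stemmed DTM
top10_df = data.frame(word = names(top10),
                      prop = as.numeric(top10) / sum(shakes_dtm))

ggplot(top10_df, aes(x = reorder(word, prop), y = prop)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = 'relative frequency')
```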
#### Similarity
The quanteda package has some built\-in similarity measures such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which you can think of as similar to the standard correlation (also available as an option). I display it visually to better get a sense of things.
```
## textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
```
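A minimal sketch of one way to compute and display it follows; the styled figure in the text was produced separately, and in recent quanteda versions textstat\_simil lives in the quanteda.textstats package.
```
# document cosine similarities, coerced to a matrix and shown as a simple heatmap
sims = textstat_simil(shakes_dtm, margin = "documents", method = "cosine")
sim_mat = as.matrix(sims)

heatmap(sim_mat, symm = TRUE, scale = 'none', margins = c(10, 10))
```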
We can already begin to see the clusters of documents. For example, the more historical works form the clump in the upper left. The oddball is [*The Phoenix and the Turtle*](https://en.wikipedia.org/wiki/The_Phoenix_and_the_Turtle), though *Lover’s Complaint* and the *Elegy* are also less similar to standard Shakespeare. The Phoenix and the Turtle is about the death of ideal love, represented by the Phoenix and the Turtledove, for which there is a funeral, and it is considered by scholars to be in stark contrast to his other output. The [Elegy](https://en.wikipedia.org/wiki/Shakespeare_apocrypha#A_Funeral_Elegy) was itself actually written for a funeral, but probably not by Shakespeare. [*A Lover’s Complaint*](https://en.wikipedia.org/wiki/A_Lover%27s_Complaint) is thought by some critics to be an inferior work by the Bard, and maybe not even authored by him, so perhaps what we’re seeing is a reflection of that lack of quality. In general, we’re seeing things that we might expect.
#### Readability
We can examine readability scores for the texts, but for this we’ll need them in raw form. We already had them from before; I just added *Phoenix* from the Gutenberg download.
```
raw_texts
```
```
# A tibble: 43 x 2
id text
<chr> <list>
1 A_Lover_s_Complaint.txt <chr [813]>
2 A_Midsummer_Night_s_Dream.txt <chr [6,630]>
3 All_s_Well_That_Ends_Well.txt <chr [10,993]>
4 Antony_and_Cleopatra.txt <chr [14,064]>
5 As_You_Like_It.txt <chr [9,706]>
6 Coriolanus.txt <chr [13,440]>
7 Cymbeline.txt <chr [11,388]>
8 Elegy.txt <chr [1,316]>
9 Hamlet.txt <chr [13,950]>
10 Henry_V.txt <chr [9,777]>
# ... with 33 more rows
```
With the raw texts, we need to convert them to a corpus object to proceed more easily. The corpus function from quanteda won’t read directly from a list column (or a list at all), so we’ll convert via the tm package. This more or less defeats the purpose of using the quanteda package, except that its textstat\_readability function gives us what we want. But I digress.
Unfortunately, the concept of readability is ill\-defined, and as such, there are dozens of measures available dating back nearly 75 years. The following is based on the Coleman\-Liau grade score (higher grade \= more difficult). The conclusion here is, first, that Shakespeare isn’t exactly a difficult read, and second, that the poems may be more difficult relative to the other works.
```
library(tm)
raw_text_corpus = corpus(VCorpus(VectorSource(raw_texts$text)))
shakes_read = textstat_readability(raw_text_corpus)
```
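If you want that score explicitly rather than the default output, a sketch like the following should work; the measure name follows quanteda’s naming conventions and is worth verifying against your installed version.
```
# request the Coleman-Liau grade specifically (higher = more difficult)
shakes_cl = textstat_readability(raw_text_corpus, measure = "Coleman.Liau.grade")
head(shakes_cl[order(shakes_cl$Coleman.Liau.grade, decreasing = TRUE), ])
```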
#### Lexical diversity
There are also metrics of lexical diversity. As with readability, there is no one way to measure ‘diversity’. Here we’ll go back to using the standard DTM, as the focus is on the terms, whereas readability is more at the sentence level. Most standard measures of lexical diversity are variants on what is called the type\-token ratio, which in our setting is the number of unique terms (types) relative to the total terms (tokens). We can use textstat\_lexdiv for our purposes here, which will provide several measures of diversity by default.
```
ld = textstat_lexdiv(shakes_dtm)
```
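For reference, the basic type\-token ratio itself is easy to compute directly from the DTM with quanteda’s helpers, as in this minimal sketch:
```
# types = distinct terms per document, tokens = total term count per document
ttr = ntype(shakes_dtm) / ntoken(shakes_dtm)
sort(ttr, decreasing = TRUE)[1:5]
```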
This visual is based on the (absolute) scaled values of those several metrics, and might suggest that the poems are relatively more diverse. This certainly might be the case for *Phoenix*, but it could also reflect a limitation of several of the measures, whereby longer works are seen as less diverse because tokens accumulate faster than types as a text goes on.
As a comparison, the following shows the results of the ‘Measure of Textual Diversity’ calculated using the koRpus package[18](#fn18). It is notably less affected by text length, though the conclusions are largely the same. There is notable correlation between the MTLD and readability as well[19](#fn19). In general, Shakespeare tends to be more expressive in poems, and less so with comedies.
### Scene IV. Topic model
I’d say we’re now ready for a topic model. That didn’t take too much, did it?
#### Running the model and exploring the topics
We’ll run one with 10 topics. As in the previous example in this document, we’ll use topicmodels and the LDA function. Later, we’ll also compare our results with the traditional classifications of the texts. Note that this will take a while to run depending on your machine (maybe a minute or two). A faster implementation can be found in text2vec.
```
library(topicmodels)
shakes_10 = LDA(convert(shakes_dtm, to = "topicmodels"), k = 10, control=list(seed=1234))
```
One of the first things to do is to interpret the topics, and we can start by seeing which terms are most probable for each topic.
```
get_terms(shakes_10, 20)
```
We can see there is a lot of overlap in these topics for top terms. Just looking at the top 10, *love* occurs in all of them, and *god* and *heart* are common as well, but we could have guessed this just by looking at how often they occur in general. Other measures can be used to assess term importance, such as those that seek to balance the term’s probability of occurrence within a document against term *exclusivity*, or how likely a term is to occur in only one particular topic. See the stm package and its corresponding labelTopics function as a way to get several alternatives (a sketch of the call follows the list). As an example, I show the results of their version of the following[20](#fn20):
* FREX: **FR**equency and **EX**clusivity; a weighted harmonic mean of a term’s rank within a topic in terms of frequency and exclusivity.
* lift: Ratio of the term’s probability within a topic to its probability of occurrence across all documents. Overly sensitive to rare words.
* score: Another approach that will give more weight to more exclusive terms.
* prob: This is just the raw probability of the term within a given topic.
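A sketch of the stm route, under the assumption that you’re willing to refit a comparable model with stm itself, since labelTopics works on stm fits rather than the topicmodels object above:
```
library(stm)

# refit a 10-topic model on the same DTM, then inspect the alternative term rankings
shakes_stm_dat = convert(shakes_dtm, to = "stm")
shakes_stm = stm(documents = shakes_stm_dat$documents,
                 vocab = shakes_stm_dat$vocab,
                 K = 10, seed = 1234, verbose = FALSE)

labelTopics(shakes_stm, n = 10)  # prob, frex, lift, and score terms for each topic
```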
As another approach, consider the saliency and relevance of terms via the LDAvis package. While you can play with it here, it’s probably easier to [open it separately](vis/index.html). Note that this has to be built separately from the model, and may have topic numbers in a different order.
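A sketch of how the LDAvis input could be assembled from the topicmodels fit and the DTM already in hand; the vocabulary and document ordering are assumed to line up, which is worth checking, and the interactive view shown in the original was prepared separately.
```
library(LDAvis)

post = posterior(shakes_10)          # topic-term (phi) and document-topic (theta) probabilities
vocab = colnames(post$terms)

json = createJSON(phi = post$terms,
                  theta = post$topics,
                  doc.length = rowSums(shakes_dtm),            # tokens per document
                  vocab = vocab,
                  term.frequency = colSums(shakes_dtm)[vocab]) # counts aligned to vocab

## serVis(json)  # opens the interactive visualization in a browser
```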
Given all these measures, one can assess how well they match what topics the documents would be most associated with.
```
t(topics(shakes_10, 3))
```
For example, based just on term frequency, Hamlet is most likely to be associated with Topic 1\. That topic is affiliated with the (stemmed words) love, night, heaven, heart, natur, ey, hear, hand, life, fear, death, prai, poor, friend, soul, hold, word, live, stand, head. The other measures pick up on words like Dane and Denmark. Sounds about right for Hamlet.
The following visualization shows a heatmap for the topic probabilities of each document. Darker values mean higher probability for a document expressing that topic. I’ve also added a cluster analysis based on the cosine distance matrix, and the resulting dendrogram[21](#fn21). The colored bar on the right represents the given classification of a work as history, tragedy, comedy, or poem.
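A sketch of the computations behind such a figure; the actual heatmap and dendrogram in the text were styled separately.
```
# document-topic probabilities from the fitted model
gamma = posterior(shakes_10)$topics

# cosine distance between documents based on their topic profiles
gamma_norm = gamma / sqrt(rowSums(gamma^2))
cos_dist = as.dist(1 - tcrossprod(gamma_norm))

hc = hclust(cos_dist, method = 'ward.D2')

# quick base-R version: rows ordered by the dendrogram, darker cells = higher probability
heatmap(gamma, Rowv = as.dendrogram(hc), Colv = NA, scale = 'none',
        col = gray.colors(32, start = 0.95, end = 0), margins = c(4, 12))
```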
A couple of things stand out. To begin with, most works are associated with one topic[22](#fn22). In terms of the discovered topics, traditional classification probably only works for the historical works, as they cluster together as expected (except for Henry VIII, possibly due to it being a collaborative work). Furthermore, tragedies and comedies might hit on the same topics, albeit from different perspectives. In addition, at least some works are very poetical, or at least have topics in common with the poems (love, beauty). If we take four clusters from the cluster analysis, the result boils down to *Phoenix*, *Complaint*, standard poems, a mixed bag of more romance\-oriented works and the remaining poems, then everything else.
Alternatively, one could merely classify the works based on their probable topics, which would make more sense if clustering of the works is in fact the goal. The following visualization attempts to order them based on their most probable topic. The order is based on the most likely topics across all documents.
The following shows the average topic probability for each of the traditional classes. Topics are represented by their first five most probable terms.
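A sketch of that summary, joining the document\-topic probabilities to the classifications saved earlier; the column names follow the objects created above, and the top terms per topic would come from get\_terms as before.
```
library(dplyr)
library(tidyr)

load('data/shakespeare_classification.RData')   # shakes_types, saved earlier

class_topic_means = posterior(shakes_10)$topics %>%
  as.data.frame() %>%
  tibble::rownames_to_column('id') %>%
  left_join(shakes_types, by = c('id' = 'title')) %>%
  gather(key = 'topic', value = 'prob', -id, -class, -problem, -late_romance) %>%
  group_by(class, topic) %>%
  summarise(mean_prob = mean(prob)) %>%
  arrange(class, desc(mean_prob))

class_topic_means
```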
Aside from the poems, the classes are a good mix of topics, and appear to have some overlap. Tragedies are perhaps most diverse.
#### Summary of Topic Models
This is where the summary would go, but I grow weary…
**FIN**