## ----chunk-options, include=FALSE---------------------------------------------
library("knitr")
opts_chunk$set(eval = FALSE)
## ----getting-started----------------------------------------------------------
# vignette("wflow-01-getting-started", "workflowr")
## -----------------------------------------------------------------------------
# library("workflowr")
# wflow_quickstart("~/projects/misc/*Rmd", username = "<github-username>",
# directory = "~/projects/new-project/")
## -----------------------------------------------------------------------------
# library("workflowr")
# # Create project directory and change working directory to this location
# wflow_start("~/projects/new-project")
# # Copy the files to the analysis subdirectory of the workflowr project
# file.copy(from = Sys.glob("~/projects/misc/*Rmd"), to = "analysis")
## -----------------------------------------------------------------------------
# wflow_publish("analysis/*Rmd", "Publish analysis files")
## -----------------------------------------------------------------------------
# library("workflowr")
# wflow_start("~/projects/mature-project", existing = TRUE)
## -----------------------------------------------------------------------------
# wflow_publish("analysis/*Rmd", "Publish analysis files")
## -----------------------------------------------------------------------------
# library("workflowr")
# wflow_start("~/projects/my-package", existing = TRUE)
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-03-migrating.R
---
title: "Migrating an existing project to use workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Migrating an existing project to use workflowr}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r chunk-options, include=FALSE}
library("knitr")
opts_chunk$set(eval = FALSE)
```
## Introduction
This vignette is for those users who already have an existing project and wish
to incorporate workflowr to create a research website. Migrating an existing
project to use workflowr varies from straightforward to difficult depending on
the scenario and your comfort level with Git. This vignette assumes that you
have the background knowledge of workflowr explained in the [Getting started][vig-start]
vignette. Even if you have no need for a new workflowr project, please run
through that vignette first as an exercise to familiarize yourself with the
workflowr philosophy and functions.
```{r getting-started}
vignette("wflow-01-getting-started", "workflowr")
```
[vig-start]: wflow-01-getting-started.html
## Scenario: I have a collection of R Markdown files
If you have a collection of R Markdown files, but no version control or other
files, the quickest solution is to use the function `wflow_quickstart()`. The
code below 1) starts a new workflowr project in `~/projects/new-project/`,
2) copies the existing Rmd files in `~/projects/misc/` to the `analysis/`
subdirectory of the new project, 3) builds and commits the website, and 4)
configures the project to use GitHub (which is why the GitHub username is
required).
```{r}
library("workflowr")
wflow_quickstart("~/projects/misc/*Rmd", username = "<github-username>",
directory = "~/projects/new-project/")
```
Alternatively, you can manually perform each step to migrate your existing
analysis by starting a workflowr project in a new directory and then moving the
R Markdown files to the `analysis/` subdirectory. In the hypothetical example
below, the original R Markdown files are located in the directory
`~/projects/misc/` and the workflowr project will be created in the new
directory `~/projects/new-project/`.
```{r}
library("workflowr")
# Create project directory and change working directory to this location
wflow_start("~/projects/new-project")
# Copy the files to the analysis subdirectory of the workflowr project
file.copy(from = Sys.glob("~/projects/misc/*Rmd"), to = "analysis")
```
Next run `wflow_build()` to see if your files run without error. Lastly, build
and commit the website using `wflow_publish()`:
```{r}
wflow_publish("analysis/*Rmd", "Publish analysis files")
```
When you are ready to share the results online, you can run `wflow_use_github()`
or `wflow_use_gitlab()`.
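For example, the GitHub route might look something like the following (the
username and repository name are placeholders for your own):
```{r}
library("workflowr")
# Configure the project to be hosted on GitHub (placeholder account/repo names)
wflow_use_github(username = "<github-username>", repository = "new-project")
# Push the committed files, including the website in docs/, to GitHub
wflow_git_push()
```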
## Scenario: I have a collection of R Markdown files and other project infrastructure
If your project already has lots of infrastructure, it is most convenient to add
the workflowr files directly to your already existing directory. This is
controlled with the argument `existing`. In the hypothetical example below, the
existing project is located at `~/projects/mature-project/`.
```{r}
library("workflowr")
wflow_start("~/projects/mature-project", existing = TRUE)
```
The above command will add the workflowr files to your existing project and also
commit them to version control (it will initialize a Git repo if it doesn't
already exist). If you'd prefer to not use version control for your project or
you'd prefer to commit the workflowr files yourself manually, you can set `git =
FALSE` (this is also useful if you want to first test to see what would happen
without committing the results).
By default `wflow_start()` will not overwrite your existing files (e.g. if
you already have a `README.md`). If you'd prefer to overwrite your files with
the default workflowr files, set `overwrite = TRUE`.
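For example, a minimal sketch that adds the workflowr files without committing
them, and overwrites any clashing files (such as an existing `README.md`) with
the workflowr defaults, would look like this:
```{r}
library("workflowr")
# Add the workflowr files without committing them, overwriting clashing files
wflow_start("~/projects/mature-project", existing = TRUE, git = FALSE,
            overwrite = TRUE)
```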
To add your R Markdown files to the research website, you can move them to the
subdirectory `analysis/` (note you can do this before or after running
`wflow_start()`).
Next run `wflow_build()` to see if your files run without error. Lastly, build
and commit the website using `wflow_publish()`:
```{r}
wflow_publish("analysis/*Rmd", "Publish analysis files")
```
## Scenario: I have an R package
If your project is organized as an R package, you can still add a website using
workflowr. In the hypothetical example below, the
existing package is located at `~/projects/my-package/`.
```{r}
library("workflowr")
wflow_start("~/projects/my-package", existing = TRUE)
```
The above command will add the workflowr files to your existing project and also
commit them to version control (it will initialize a Git repo if it doesn't
already exist). If you'd prefer to not use version control for your project or
you'd prefer to commit the workflowr files yourself manually, you can set `git =
FALSE` (this is also useful if you want to first test to see what would happen
without committing the results).
You'll want R to ignore the workflowr directories when building the R package.
Thus add the following to the `.Rbuildignore` file:
```
^analysis$
^docs$
^data$
^code$
^output$
^_workflowr.yml$
```
Furthermore, to prevent R from compressing the files in `data/` (which is
harmless but time-consuming), you can set `LazyData: false` in the file
`DESCRIPTION`. However, if you do want to distribute data files with your R
package, you'll need to instead rename the workflowr subdirectory and update the
R Markdown files to search for files in the updated directory name (and also
update `.Rbuildignore` to ignore this new directory and not `data/`). Then you
can save the data files to distribute with the package in `data/`. For more
details, see the relevant sections in the CRAN manual [Writing R
Extensions][data-in-packages] and Hadley's [R Packages][r-pkgs-data].
[data-in-packages]: https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Data-in-packages
[r-pkgs-data]: https://r-pkgs.org/data.html
If your primary purpose for creating a website to accompany your package is to
share the package documentation, please check out the package [pkgdown][]. It
creates a website from the vignettes and function documentation files (i.e. the
Rd files in `man/`). In contrast, if the purpose of the website is to
demonstrate results you obtained using the package, use workflowr.
[pkgdown]: https://github.com/r-lib/pkgdown
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-03-migrating.Rmd
---
title: "How the workflowr package works"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{How the workflowr package works}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
The workflowr package combines many powerful tools in order to produce a
research website. It is absolutely **not** necessary to understand all the
underlying tools to take advantage of workflowr, and in fact that is one of the
primary goals of workflowr: to allow researchers to focus on their analyses
without having to worry too much about the technical details. However, if you
are interested in implementing advanced customization options, contributing to
workflowr, or simply want to learn more about these tools, the sections below
provide some explanations of how workflowr works.
## Overview
[R][] is the computer programming language used to perform the analysis.
[knitr][] is an R package that executes code chunks in an R Markdown file to create a Markdown file.
[Markdown][] is a lightweight markup language that is easier to read and write than HTML.
[rmarkdown][] is an R package that combines the functionality of [knitr][] and the document converter [pandoc][].
[Pandoc][] powers the conversion of [knitr][]-produced Markdown files into HTML, Word, or PDF documents.
Additionally, newer versions of [rmarkdown][] contain functions for building websites.
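As a rough sketch of the two-step conversion that `rmarkdown::render()` performs
behind the scenes (the file names below are hypothetical):
```{r knitr-pandoc-sketch, eval=FALSE}
library("knitr")
library("rmarkdown")
# Step 1: knitr executes the code chunks and writes a Markdown file
knit("analysis/example.Rmd", output = "analysis/example.md")
# Step 2: pandoc converts the Markdown file to HTML
pandoc_convert("analysis/example.md", to = "html", output = "example.html")
# rmarkdown::render() chains these steps and applies the output format options
render("analysis/example.Rmd")
```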
The styling of the websites is performed by the web framework [Bootstrap][].
[Bootstrap][] implements the navigation bar at the top of the website, has many available themes to customize the look of the site, and dynamically adjusts the website so it can be viewed on a desktop, tablet, or mobile device.
The [rmarkdown][] website configuration file `_site.yml` allows convenient customization of the [Bootstrap][] navigation bar and theme.
[Git][] is a distributed version control system (VCS) that tracks code development.
It has many powerful features, but only a handful of the main functions are required to use workflowr.
[git2r][] is an R package that provides an interface to [libgit2][], a portable, pure C implementation of the Git core methods (this is why you don't need to install Git before using workflowr).
[GitHub][] is a website that hosts [Git][] repositories and additionally provides collaboration tools for developing software.
[GitHub Pages][] is a [GitHub][] service that offers free hosting of [static websites][static].
By placing the HTML files for the website in the subdirectory `docs/`, [GitHub Pages][] serves them online.
To aid reproducibility, workflowr provides an R Markdown output format,
`wflow_html()`, that automatically sets a seed for random number
generation, records the session information, and reports the status of the Git
repository (so you always know which version of the code produced the results
contained in that particular file). These options are controlled by the settings
in `_workflowr.yml`. It also provides a custom site generator `wflow_site()`
that enables `wflow_html()` to work with R Markdown websites. These options are
controlled in `analysis/_site.yml`.
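For illustration, a `_workflowr.yml` might contain settings along these lines
(the exact contents depend on your workflowr version and choices; the seed shown
here is only an example):
```
# workflowr options
# Execute the code in the project root
knit_root_dir: "."
# Seed for random number generation
seed: 20200101
# Function used to record the session information
sessioninfo: "sessionInfo()"
```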
[R]: https://cran.r-project.org/
[knitr]: https://yihui.org/knitr/
[Markdown]: https://daringfireball.net/projects/markdown/
[rmarkdown]: https://rmarkdown.rstudio.com/
[pandoc]: https://pandoc.org/
[Bootstrap]: https://getbootstrap.com/
[Git]: https://git-scm.com/
[SHA-1]: https://en.wikipedia.org/wiki/SHA-1
[GitHub]: https://github.com/
[GitHub Pages]: https://pages.github.com/
[static]: https://en.wikipedia.org/wiki/Static_web_page
## Where are the figures?
workflowr saves the figures into an organized, hierarchical directory structure
within `analysis/`. For example, the first figure generated by the chunk named
`plot-data` in the file `filename.Rmd` will be saved as
`analysis/figure/filename.Rmd/plot-data-1.png`. Furthermore, the figure files
are _moved_ to `docs/` when `render_site` is run (this is the rmarkdown package
function called by `wflow_build`, `wflow_publish`, and the RStudio Knit button).
The figures have to be committed to the Git repository in `docs/` in order to be
displayed properly on the website. `wflow_publish` automatically commits the
figures in `docs` corresponding to new or updated R Markdown files, and
`analysis/figure/` is in the `.gitignore` file to prevent accidentally
committing duplicate files.
Because workflowr requires the figures to be saved to a specific location in
order to function properly, it will override any custom setting of the knitr
option `fig.path` (which controls where figure files are saved) and insert a
warning into the HTML file to alert the user that their value for `fig.path` was
ignored.
## Additional tools
[Posit Software, PBC][] is a company that develops open source software for R users.
They are the principal developers of [RStudio][], an integrated development environment (IDE) for R, and the [rmarkdown][] package.
Because of this tight integration, new developments in the [rmarkdown][] package are quickly incorporated into the [RStudio][] IDE.
While not strictly required for using workflowr, using [RStudio][] provides many benefits, including:
* RStudio projects make it easier to set up your R environment, e.g. set the correct working directory, and quickly switch between different projects
* The Git pane allows you to conveniently view your changes and run the main Git functions
* The Viewer pane displays the rendered HTML results for immediate feedback
* Clicking the `Knit` button automatically uses the [Bootstrap][] options specified in `_site.yml` and moves the rendered HTML to the website subdirectory `docs/` (requires version 1.0 or greater)
* Includes an up-to-date copy of [pandoc][] so you don't have to install or update it
* Tons of other cool [features][rstudio-features] like debugging and source code inspection
Another key R package used by workflowr is [rprojroot][].
This package finds the root of the repository, so workflowr functions like `wflow_build` will work the same regardless of the current working directory.
Specifically, [rprojroot][] searches for the RStudio project `.Rproj` file at the base of the workflowr project (so don't delete it!).
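For example, here is roughly how the project root can be located with
[rprojroot][] from any subdirectory of the project:
```{r rprojroot-sketch, eval=FALSE}
library("rprojroot")
# Search upwards from the working directory for the directory containing
# the RStudio project (.Rproj) file
root <- find_root(criterion = is_rstudio_project)
root
```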
[Posit Software, PBC]: https://posit.co/
[RStudio]: https://posit.co/products/open-source/rstudio/
[rstudioapi]: https://github.com/rstudio/rstudioapi
[rprojroot]: https://cran.r-project.org/package=rprojroot
[git2r]: https://cran.r-project.org/package=git2r
[libgit2]: https://libgit2.org/
[rstudio-features]: https://posit.co/products/open-source/rstudio/
## Background and related work
There is lots of interest and development around reproducible research with R.
Projects like workflowr are possible due to two key developments. First, the R
packages [knitr][] and [rmarkdown][] have made it easy for any R programmer to
generate reports that combine text, code, output, and figures. Second, the
version control software [Git][], the Git hosting site [GitHub][], and the
static website hosting service [GitHub Pages][] have made it easy to share not
only source code but also static HTML files (i.e. no need to purchase a domain
name, set up a server, etc.).
My first attempt at sharing a reproducible project online was [singleCellSeq][].
Basically, I started by copying the documentation website of [rmarkdown][] and
added some customizations to organize the generated figures and to insert the
status of the Git repository directly into the HTML pages. The workflowr R
package is my attempt to simplify my previous workflow and provide helper
functions so that any researcher can take advantage of this workflow.
Workflowr encompasses multiple functionalities: it 1) provides a project template,
2) version controls the R Markdown and HTML files, and 3) builds a website.
Furthermore, it provides R functions to perform each of these steps. There are
many other related works that provide similar functionality. Some are templates
to be copied, some are R packages, and some involve more complex software (e.g.
static blog software). Depending on your use case, one of the related works
listed at [r-project-workflows][] may better suit your needs. Please check them
out!
[r-project-workflows]: https://github.com/jdblischak/r-project-workflows#readme
[singleCellSeq]: https://jdblischak.github.io/singleCellSeq/analysis/
## Further reading
* How the code, results, and figures are executed and displayed can be customized using [knitr chunk and package options](https://yihui.org/knitr/options/)
* How [R Markdown websites](https://bookdown.org/yihui/rmarkdown/rmarkdown-site.html) are configured
* The many [features][rstudio-features] of the [RStudio][] IDE
* [Directions](https://docs.github.com/articles/configuring-a-publishing-source-for-github-pages) to publish a [GitHub Pages][] site using the `docs/` subdirectory
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-04-how-it-works.Rmd
## ----render-single-page-pdf, eval=FALSE---------------------------------------
# library("rmarkdown")
# # Create analysis/file.pdf
# render("analysis/file.Rmd", pdf_document(), knit_root_dir = "..")
## ----render-single-page-html-1, eval=FALSE------------------------------------
# library("rmarkdown")
# # Create analysis/file.html, includes navigation bar
# render("analysis/file.Rmd", html_document(), knit_root_dir = "..")
## ----render-single-page-html-2, eval=FALSE------------------------------------
# library("rmarkdown")
# # Create analysis/file.html, no navigation bar nor advanced features
# render("analysis/file.Rmd", html_document_base(), knit_root_dir = "..")
## ----render-single-page-html-3, eval=FALSE------------------------------------
# library("rmarkdown")
#
# # Temporarily rename _site.yml
# file.rename("analysis/_site.yml", "analysis/_site.yml_tmp")
#
# # Create analysis/file.html, no navigation bar
# render("analysis/file.Rmd", html_document(), knit_root_dir = "..")
#
# # Restore _site.yml
# file.rename("analysis/_site.yml_tmp", "analysis/_site.yml")
## ----render-single-page-html-4, eval=FALSE------------------------------------
# render("analysis/file.Rmd",
# html_document(toc = TRUE, toc_float = TRUE, theme = "cosmo", highlight = "textmate"),
# knit_root_dir = "..",
# output_file = "standalone.html")
## ----render-single-page-html-5, eval=FALSE------------------------------------
# # use the workflowr::wflow_html settings in analysis/_site.yml
# render("analysis/file.Rmd", knit_root_dir = "..")
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-05-faq.R
---
title: "Frequently asked questions"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Frequently asked questions}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
## Why isn't my website displaying online?
Occasionally your website may not display (or recent updates will not
immediately appear), and you may even receive an email from GitHub with the
following message:
> The page build failed for the `master` branch with the following error:
>
> unable to build page. Please try again later.
>
> For information on troubleshooting Jekyll see:
>
> https://docs.github.com/articles/troubleshooting-jekyll-builds
>
> If you have any questions you can contact us by replying to this email.
If you've followed the setup instructions from the [Getting started
vignette](wflow-01-getting-started.html), and especially if the website
displayed in the past, it's _very unlikely_ that you caused the problem. The
hosting is provided by [GitHub Pages][gh-pages], and it sometimes is delayed or
down. Overall for a free service, it is very reliable. If you wait 5 minutes (or
30 minutes at most), your website will likely be back to normal.
If you are anxious to know if there is a problem and when it will be resolved,
you can check the Twitter account [GitHub Status][gh-status] for
the most up-to-date reports from GitHub. If you suspect that the problem may
have been caused by your recent changes to your website (again, this is
unlikely), you can view the GitHub help page [Troubleshooting GitHub Pages
builds][gh-troubleshooting].
## Can I make my workflowr website private?
Yes. While it is **not** possible to make a [GitHub Pages][gh-pages] site
private (the default setup described in the ["Getting Started"
vignette][vig-getting-started]), there are various other hosting platforms that
provide access control. Below are the currently documented options, in order of
least to most amount of technical setup required:
* You can host a private site on [GitLab Pages][gl-pages] and grant access to
collaborators. All they need is a GitLab.com account (and they can use a social
account, e.g. Twitter, to login to GitLab.com) - [Deploy your site with GitLab
Pages][vig-gitlab]
* You can use [Beaker Browser][beaker] to securely self-host your site and share
the link with collaborators - [Deploy your site with Beaker
Browser][deploy-beaker]
* You can deploy a password-protected site using [Amazon Web Services][aws]
(requires familiarity with cloud technologies) - [Deploy your site with
AWS][deploy-aws]
To see all the currently documented deployment options, see the vignette
[Alternative strategies for deploying workflowr websites][vig-deploy].
## How should I manage large data files in a workflowr project?
Tracking the changes to your project's large data files is critical for
reproducibility. Unfortunately Git, which is the version control software used
by workflowr, was designed to version small files containing code. See the
vignette [Using large data files with workflowr][vig-data] for various options
for versioning the large data files used in your workflowr project.
## How can I include external images in my website?
Image files that are generated by the code executed in the R Markdown files are
automatically handled by workflowr. If you'd like to include additional image
files to be displayed in your webpages, follow the steps below. The instructions
refer to `docs/` for the website directory since this is the default. If you are
not using GitHub Pages to host the website, you may need to change this. For
example, if you are hosting with GitLab Pages, replace `docs/` with `public/`.
1. Inside the website directory, create a subdirectory named `assets` to include
any file that should be part of the website but is not created by one of the R
Markdown files in `analysis/`:
```
dir.create("docs/assets")
```
1. Move the image file(s) to `docs/assets/`
1. In the R Markdown file, refer to the image file(s) using the relative path
from `docs/` (because this is where the HTML files are located), e.g.:
```

```
Alternatively, you could use `knitr::include_graphics()` inside of an R code
chunk, which will automatically center the image and also follow the knitr
chunk options `out.width` and `out.height`:
```
knitr::include_graphics("assets/external.png", error = FALSE)
```
Note that the image will not be previewed in the R Markdown file inside of
RStudio because it is in a different directory than the R Markdown file. You
have to set `error = FALSE` because the function throws an error if it can't
find the file, and that error would break the workflowr setup, since the file
path is only valid once the HTML file is moved to `docs/`. If you'd like to
prevent knitr from throwing this error for all the code in your project, add the
following line to the `.Rprofile` in your project:
`options(knitr.graphics.error = FALSE)`
1. Run `wflow_build()` to confirm the external image file(s) are properly
displayed
1. Use `wflow_git_commit()` to commit the file(s) to the Git repo (so that they
get pushed to the remote repository, e.g. on GitHub):
```
wflow_git_commit("docs/assets/external.png", "Add external image of ...")
# If you are adding multiple files, you could use a file glob
wflow_git_commit("docs/assets/*.png", "Add external images of ...")
```
1. Run `wflow_publish()` on the R Markdown file that contains the external
image file(s)
Another option is to first upload the image, e.g. to
[Imgur](https://imgur.com/), [Figshare](https://figshare.com/), another GitHub
repository, etc. Then you can link directly to the image in your Rmd file using
the absolute URL. This has the added advantage that the image will automatically
display in the Rmd file as you edit it in RStudio. The main disadvantage is that
the image isn't in the same location as the rest of your project files.
## How can I save a figure in a vector graphics format (e.g. PDF)?
The default file format is PNG. This is ideal for displaying figure files on a
web page. However, you might need to import a figure into a vector graphics
editor (e.g. Illustrator, Inkscape) for final editing for a manuscript. There
are multiple options for achieving this.
One option is to switch to the file format SVG. It is a vector graphics format
that is also well supported by web browsers. The code chunk below saves the
figure file as an SVG:
````
```{r plot-for-paper, dev='svg'}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
```
````
To apply this to every figure file in a particular document, you can create a
"setup" chunk at the beginning of the document that sets the [knitr chunk
option](https://yihui.org/knitr/options/) globally:
````
```{r setup, dev='svg'}`r ''`
knitr::opts_chunk$set(dev = 'svg')
```
````
Another option is to simultaneously create a PNG for display in the web page and
a PDF for further editing. The example code below saves both a PNG and PDF
version of the figure, but inserts the PNG into the web page:
````
```{r plot-for-paper, dev=c('png', 'pdf')}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
```
````
The main advantage of the above approaches is that the figure files are still
saved in an organized fashion (i.e. the file path is still something like
`docs/figure/file.Rmd/chunk-name.ext`). Furthermore, `wflow_publish()` will
automatically version the figure files regardless of the file extension.
A similar option to the one above is to have two separate code chunks. The
advantage of this more verbose option is that you can specify different chunk
names (and thus different filenames) and also set different `fig.width` and
`fig.height` for the website and paper versions. By setting `include=FALSE` for
the second chunk, neither the code nor the PDF figure file is displayed in the
web page.
````
```{r plot-for-paper}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
```
````
````
```{r figure1A, include=FALSE, dev='pdf', fig.height=3, fig.width=9}`r ''`
p
```
````
However, for the most control, you can always save the figure manually, e.g.
using `ggsave()`. For example, the example chunk below creates a 10x10 inch PNG
file that is automatically versioned by workflowr, but also uses `ggsave()` to
save a 5x5 inch PDF file in the subdirectory `paper/` (which would need to be
manually committed by the user, e.g. with `wflow_git_commit()`):
````
```{r plot-for-paper, fig.width=10, fig.height=10}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
ggsave("paper/plot-to-edit.pdf", width = 5, height = 5)
```
````
## Can I include Shiny apps in my website?
Yes, but not directly. You cannot directly embed the Shiny app into the Rmd file
using `runtime: shiny_prerendered` for two reasons. First, workflowr creates a
static website, and the free deployment options (e.g. GitHub Pages) only
provide static web hosting. Shiny apps require a dynamic website because they
need to call a server to run the R code. Second, even if you set up your own web
server, the supporting files (e.g. CSS/JS) for a Shiny app have to be in a
[different location][shiny-external-resources] than the standard for an
Rmd-based website.
[shiny-external-resources]: https://rmarkdown.rstudio.com/authoring_shiny_prerendered.html#external_resources
However, there is still a good option for embedding the Shiny app directly into
the web page. You can upload your Shiny app to
[shinyapps.io](https://www.shinyapps.io/), and then embed it directly into your
document by calling `knitr::include_app()` inside a code chunk, as shown below:
````markdown
`r ''````{r shiny-app}
knitr::include_app("https://<user-name>.shinyapps.io/<app-name>/")
```
````
Using this method, the R code for the Shiny app is executed on the servers at
shinyapps.io, but your readers are able to explore the app without leaving your
website.
## Can I change "X" on my website?
Almost certainly yes, but some things are easier to customize than others. The
vignette [Customize your research website](wflow-02-customization.html) provides
some ideas that are simple to implement. Check out the documentation for
[rmarkdown][] and [Twitter Bootstrap][Bootstrap] for inspiration.
## How can I suppress the workflowr report?
To suppress the insertion of the workflowr report for all of the files in your
project, activate the option `suppress_report` in the `_workflowr.yml` file by
adding the following line:
```
suppress_report: TRUE
```
And then republishing your project:
```
wflow_publish("_workflowr.yml", republish = TRUE)
```
To suppress the workflowr report only for a specific file, add the following
lines to its YAML header:
```
workflowr:
  suppress_report: TRUE
```
## Why am I not getting the same result with wflow_build() as with the RStudio Knit HTML button?
`wflow_build()` is designed to have the same functionality as the Knit HTML
button in RStudio, namely that it knits the HTML file in a separate R session to
avoid any clashes with variables or packages in use in the current R session.
However, the technical implementations are not identical, and thus informally we
have noticed the behavior of the two options occasionally differs. At the
moment, we believe that if the file results in an error when using
`wflow_build()`, the file needs to be fixed, regardless of whether the file is
able to be built with the RStudio button. If you have a use case that you think
should be supported by `wflow_build()`, please open an [Issue][issues] and
provide a small reproducible example.
## How should I install packages to use with a workflowr project?
When you start a new workflowr project with `wflow_start()`, it automatically
creates a local `.Rprofile` file that only affects your R session when you run R
from within your workflowr project. This is why you see the following lines each
time you open R:
```
Loading .Rprofile for the current workflowr project
This is workflowr version 1.3.0
Run ?workflowr for help getting started
>
```
This is intended to be a convenience so that you don't have to type
`library(workflowr)` every time you return to your project (or restart your R
session). However, the downside is that this has the potential to cause problems
when you install new packages. If you attempt to install one of the packages
that workflowr depends on, or if you attempt to install a package that then
updates one of these dependencies, this may cause an error. For example, here is
a typical error caused by updating git2r when the workflowr package is loaded:
```
Error: package or namespace load failed for ‘git2r’ in get(method, envir = home):
lazy-load database '/usr/local/lib/R/site-library/git2r/R/git2r.rdb' is corrupt
In addition: Warning message:
In get(method, envir = home) : internal error -3 in R_decompress1
```
The short term solution is to restart your current R session, which should fix
everything. In the long term, if you start to get this type of error often, you
can try one of the following strategies:
1. Always restart your R session after installing new packages
(Ctrl/Command+Shift+F10 in RStudio)
1. Open R from a directory that is not a workflowr project when installing new
packages
1. Delete `.Rprofile` with `wflow_remove(".Rprofile")` and manually load
workflowr with `library(workflowr)` every time you start a new R session
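For the last option, removing the `.Rprofile` might look like this (the commit
message is just an example):
```{r remove-rprofile, eval=FALSE}
library("workflowr")
# Delete .Rprofile and commit the deletion
wflow_remove(".Rprofile", message = "Stop automatically loading workflowr")
```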
## Can I create a single HTML or PDF file of one of my workflowr R Markdown files?
Yes! You can create a single HTML or PDF file to distribute an isolated analysis
from your project by directly running the [rmarkdown][] function `render()`. The
main limitation is that any links to other pages will no longer be functional.
### Working directory
You will need to be careful with the working directory in which the code is
executed. By default, the code in an R Markdown document is executed in the same
directory as the file. This is cumbersome, so the default behavior of workflowr
is to set the working directory to the root project directory for convenience.
To replicate this behavior when calling `render()` directly, you can pass
`knit_root_dir = ".."` or `knit_root_dir = normalizePath(".")`, which both have
the effect of running the code in the project root. If you have configured your
workflowr project to
execute the files in `analysis/`, then you don't have to worry about this.
### PDF
To convert a single analysis to PDF, use `pdf_document()`. Note that this
requires a functional LaTeX setup.
```{r render-single-page-pdf, eval=FALSE}
library("rmarkdown")
# Create analysis/file.pdf
render("analysis/file.Rmd", pdf_document(), knit_root_dir = "..")
```
### HTML
Rendering a single HTML page is slightly more complex because `html_document()`
always includes the navigation bar. If you don't mind the non-functional navbar
at the top of the document, you can simply use `html_document()`.
```{r render-single-page-html-1, eval=FALSE}
library("rmarkdown")
# Create analysis/file.html, includes navigation bar
render("analysis/file.Rmd", html_document(), knit_root_dir = "..")
```
The standalone file will be saved as `analysis/file.html` unless you specify a
different name via the argument `output_file`.
To create a very simple HTML file, you can instead use `html_document_base()`.
This eliminates the navbar, but it may also remove some of the advanced
stylistic features of `html_document()` that you rely on.
```{r render-single-page-html-2, eval=FALSE}
library("rmarkdown")
# Create analysis/file.html, no navigation bar nor advanced features
render("analysis/file.Rmd", html_document_base(), knit_root_dir = "..")
```
If you are determined to have a full-featured standalone HTML file
without the navigation bar, you can temporarily rename the `_site.yml` file,
which prevents `html_document()` from including the navbar.
```{r render-single-page-html-3, eval=FALSE}
library("rmarkdown")
# Temporarily rename _site.yml
file.rename("analysis/_site.yml", "analysis/_site.yml_tmp")
# Create analysis/file.html, no navigation bar
render("analysis/file.Rmd", html_document(), knit_root_dir = "..")
# Restore _site.yml
file.rename("analysis/_site.yml_tmp", "analysis/_site.yml")
```
If you'd like your standalone HTML file to have a similar appearance to your
workflowr website, you can pass the style arguments directly to
`html_document()` so that the theme is similar (copy from your project's
`analysis/_site.yml`, below are the default values for a new workflowr project):
```{r render-single-page-html-4, eval=FALSE}
render("analysis/file.Rmd",
html_document(toc = TRUE, toc_float = TRUE, theme = "cosmo", highlight = "textmate"),
knit_root_dir = "..",
output_file = "standalone.html")
```
Alternatively, if you'd prefer to keep the workflowr report and other
workflowr-specific features in the standalone document, don't use `output_file`,
as this will cause workflowr to insert the warning ```The custom `fig.path` you
set was ignored by workflowr``` if the analysis contains any figures. Instead,
omit both `html_document()` and `output_file`. The standalone HTML file will be
saved in `analysis/` (the standard, non-standalone HTML files are always moved
to `docs/`), and you can move/rename it after it has rendered.
```{r render-single-page-html-5, eval=FALSE}
# use the workflowr::wflow_html settings in analysis/_site.yml
render("analysis/file.Rmd", knit_root_dir = "..")
```
### RStudio Knit button
Ideally you should also be able to use the RStudio Knit button to conveniently
create the standalone HTML or PDF files. For example, you could update the
YAML header to have multiple output formats, and then choose the output format
that RStudio should create. The workflowr functions like `wflow_build()` will
ignore these other output formats because they use the output format defined in
`analysis/_site.yml`.
```
---
output:
  pdf_document: default
  html_document_base: default
  workflowr::wflow_html: default
---
```
However, just like when calling `render()` directly, you'll need to be careful
about the working directory. To execute the code in the project directory, you
can manually set the Knit Directory to "Project Directory" (the default is
"Document Directory" to match the default behavior of R Markdown files).
Lastly, the RStudio Knit button is somewhat finicky when custom output formats
are included (e.g. workflowr, bookdown). If you are having trouble getting it
to display the output format you want as an option, try knitting it to the current
format. That should update the menu to include all the options you've written in
the YAML header. See [Issue #261][issue-261] for more details.
[issue-261]: https://github.com/workflowr/workflowr/issues/261
## Can I use R notebooks with workflowr?
Yes! You can use RStudio's notebook features while you interactively develop
your analysis, either directly using the output format
`rmarkdown::html_notebook()` or indirectly with "inline code chunks" in your R
Markdown files. However, you need to take a few precautions to make sure your
notebook-style usage is compatible with the workflowr options.
First note that the R Markdown files created by `wflow_start()` and
`wflow_open()` include the lines below in the YAML header. These purposefully
disable inline code chunks to proactively prevent any potential
incompatibilities with workflowr. To activate inline code chunks, you can either
delete these two lines or replace `console` with `inline`.
```
editor_options:
  chunk_output_type: console
```
Second, note that the working directory of the inline code chunks can be
different than the working directory of the R console. This is very
counterintuitive, but the working directory of the inline code chunks is set by
the "Knit Directory" setting in RStudio. The setting of "Knit Directory" may be
different in your collaborator's version of RStudio, or even your own RStudio
installed on a different computer. Thus it's not a good idea to rely on this
value. Instead, you can explicitly specify the working directory to be used for
the inline code chunks by setting the knitr option `root.dir` in a chunk called
`setup`, which RStudio treats specially. Adding the code chunk below to your R
Markdown file will cause all the inline code chunks to be executed from the root
of the project directory. This is consistent with the default workflowr setting.
````markdown
`r ''````{r setup}
knitr::opts_knit$set(root.dir = "..")
```
````
If you change the value of `knit_root_dir` in `_workflowr.yml`, then you would
need to change the value of `root.dir` in the setup chunk accordingly. Be warned
that this is fragile, i.e. trying to change `root.dir` to an arbitrary
directory may result in an error. If you're going to use inline code chunks,
it's best to follow one of these two options:
1. Execute code in the root of the project directory (the default workflowr
setting). Don't change `knit_root_dir` in `_workflowr.yml`. Add the setup chunk
defined above to your R Markdown files. Note that this setup chunk will affect
RStudio but not the workflowr functions `wflow_build()` or `wflow_publish()`.
2. Execute code in the R Markdown directory (named `analysis/` by default).
Delete the `knit_root_dir` entry in `_workflowr.yml`. Don't explicitly set
`root.dir` in a setup code chunk in your R Markdown files. Ensure that the
RStudio setting "Knit Directory" is set to "Document Directory".
Third, note that if you are using `html_notebook()`, any settings you specify
for it will be ignored when you run `wflow_build()` or `wflow_publish()`. This
is because the settings in `_site.yml` override them. If you wish to change the
setting of one particular notebook file, as opposed to every file in your
project, you can set it with `workflowr::wflow_html()`. For example, if you want
to enable folding of code chunks and disable the table of contents for only this
file, you could use the following YAML header.
```
---
title: "Using R Notebook with workflowr"
output:
  html_notebook: default
  workflowr::wflow_html:
    toc: false
    code_folding: show
---
```
## Can I use a Git hosting service that uses the HTTP protocol?
Workflowr works best with Git hosting services that use the HTTPS protocol.
However, with some minimal upfront configuration, it is possible to use the HTTP
protocol.
The configuration differs depending on whether you are authenticating with SSH
keys or username/password.
**SSH keys**
1. Configure the remote with `wflow_git_remote()` and `protocol = "ssh"` (see the sketch after this list)
1. You can use `wflow_git_push()` and `wflow_git_pull()`
1. For the embedded links to past versions of the files to be correct, you need
to manually include the URL to the project in `_workflowr.yml` (for historical
reasons, this variable is named `github`)
```
github: https://custom-site.com/<username>/<reponame>
```
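For reference, the first step might look something like this (the domain, user,
and repository names are placeholders for your own hosting instance):
```{r wflow-git-remote-ssh, eval=FALSE}
library("workflowr")
# Add a remote named "origin" that uses the SSH protocol on a custom domain
wflow_git_remote(remote = "origin", user = "<username>", repo = "<reponame>",
                 protocol = "ssh", domain = "custom-site.com")
```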
**Username/Password**
1. You can't use `wflow_git_remote()`. Instead use either
a. `git2r::remote_add()` in R:
```
git2r::remote_add(name = "origin", url = "https://custom-site/<username>/<reponame>.git")
```
a. `git remote add origin` in the terminal:
```
git remote add origin https://custom-site/<username>/<reponame>.git
```
1. You cannot use `wflow_git_push()` and `wflow_git_pull()`. Instead run
`git push` and `git pull` in the terminal
1. The embedded links to past versions of the files will be correct because they
will be based off of your remote URL
## How should I pronounce and spell workflowr?
There are multiple options for pronouncing workflowr:
1. workflow + er
1. workflow + R
1. work + flower
I (John) started primarily saying "workflow + er" but have more recently
transitioned to saying "workflow + R" more often. You can choose whichever is
most natural to you.
Workflowr should be capitalized at the beginning of a sentence, but otherwise
the lowercase workflowr should be the preferred option.
[aws]: https://aws.amazon.com/s3/
[beaker]: https://beakerbrowser.com/
[Bootstrap]: https://getbootstrap.com/
[deploy-aws]: wflow-08-deploy.html#amazon-s3-password-protected
[deploy-beaker]: wflow-08-deploy.html#beaker-browser-secure-sharing
[gh-pages]: https://pages.github.com/
[gh-status]: https://twitter.com/githubstatus
[gh-troubleshooting]: https://docs.github.com/articles/troubleshooting-github-pages-builds
[gl-pages]: https://docs.gitlab.com/ce/user/project/pages/index.html
[issues]: https://github.com/workflowr/workflowr/issues
[rmarkdown]: https://rmarkdown.rstudio.com/
[vig-data]: wflow-10-data.html
[vig-deploy]: wflow-08-deploy.html
[vig-getting-started]: wflow-01-getting-started.html
[vig-gitlab]: wflow-06-gitlab.html
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-05-faq.Rmd
## ----wflow-use-gitlab, eval=FALSE---------------------------------------------
# wflow_use_gitlab(username = "myname", repository = "myproject")
## ----wflow-git-push, eval=FALSE-----------------------------------------------
# wflow_git_push()
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-06-gitlab.R
---
title: "Hosting workflowr websites using GitLab"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "Luke Zappia, John Blischak"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Hosting workflowr websites using GitLab}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
## What is in this vignette?
By default workflowr assumes that the project will be hosted on GitHub, but this
is not always the case. Users may prefer to use another service or have a
private Git repository hosting instance. This vignette details how to host a
workflowr project on GitLab. Unlike GitHub Pages, GitLab Pages offers both
public and private sites. For more details, see the documentation for [GitLab
Pages][gitlab-pages]. Similar steps will be required for other platforms but
some of the specifics will be different.
[gitlab-pages]: https://docs.gitlab.com/ee/ci/yaml/README.html#pages
## Step 0: Set up a project
The first thing we need to do is set up the project we want to host. We can do
this by following the first few steps of the instructions in the "Getting
started" vignette. When you get to the section [Deploy the
website](wflow-01-getting-started.html#deploy-the-website), follow the rest of
the steps in this vignette.
## Step 1: Create a remote repository on GitLab
**Note:** You can skip this step if you'd like because GitLab will automatically
create the new repository after you push it in Step 4 below. This
[feature][push-to-create-a-new-project] was introduced in [GitLab
10.5][gitlab-10.5], released in February 2018.
[push-to-create-a-new-project]: https://docs.gitlab.com/ee/user/project/#create-a-new-project-with-git-push
[gitlab-10.5]: https://about.gitlab.com/releases/2018/02/22/gitlab-10-5-released/
Log in to the GitLab instance you want to use and create a repository to host
your project. We recommend setting the project to be Public so that others
can inspect the code behind your results and extend your work.
## Step 2: Configure your local workflowr project to use GitLab
You will need to know your user name and the repository name for the following
steps (here we are going to use "myname" and "myproject") as well as a URL for
the hosting instance. The example below assumes you are using GitLab.com. If
instead you are using a custom instance of GitLab, you will need to change the
value for the argument `domain` accordingly ^[For example, the University of
Chicago hosts a GitLab instance for its researchers at
https://git.rcc.uchicago.edu/, which would require setting `domain =
"git.rcc.uchicago.edu"`].
```{r wflow-use-gitlab, eval=FALSE}
wflow_use_gitlab(username = "myname", repository = "myproject")
```
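If you are using a custom GitLab instance, pass the domain as well, e.g. for the
University of Chicago instance mentioned in the footnote above:
```{r wflow-use-gitlab-custom, eval=FALSE}
# Example for a self-hosted GitLab instance
wflow_use_gitlab(username = "myname", repository = "myproject",
                 domain = "git.rcc.uchicago.edu")
```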
The function `wflow_use_gitlab()` automates all the local configuration
necessary to use GitLab. It changes the website directory from `docs/` to
`public/`, it creates the GitLab-specific configuration file `.gitlab-ci.yml`
with the necessary settings, and it connects the local Git repository to
communicate with the remote repository on GitLab.
## Step 3: Republish the analyses
In order for the correct URLs to past versions to be inserted into the HTML
pages, republish the analyses with `wflow_publish()`.
```
wflow_publish(republish = TRUE)
```
## Step 4: Push to GitLab
As a final step, push the workflowr project to GitLab (you will be prompted for
your GitLab username and password):
```{r wflow-git-push, eval=FALSE}
wflow_git_push()
```
If this step has worked correctly you should be able to refresh your GitLab page
and see all the files in your workflowr project. You can view your site at
`myname.gitlab.io/myproject/`, replacing `myname` and `myproject` with your
username and repository name (note it may take a minute for the site to be
deployed).
If you skipped Step 1 above, the new repository created during the initial push
will be private by default. Unless you are working with sensitive data, you
should consider making the project public so that it is easier to share with
other researchers (e.g. collaborators, reviewers). You can change the visibility
by going to `Settings` -> `General` -> `Visibility` and changing `Project
visibility` to `Public`.
## Access control for private sites
If you need to keep your project private, you can [grant access][access-control]
to your collaborators by going to `Settings` -> `Members`. You can invite them
to the project via email, but they'll need a GitLab login to access the source
code and site. They can login to GitLab using common social sites like Google
and Twitter.
[access-control]: https://gitlab.com/help/user/project/pages/pages_access_control.md
## Compatibility with custom GitLab instances
Currently workflowr works best with the public GitLab instance hosted at
gitlab.com. If you are using a custom GitLab instance that is hosted by your
institution, it may not work as smoothly.
If you cannot view your workflowr website, this may be because your
administrators have not enabled [GitLab Pages][gitlab-pages]. You will need to
email them to activate this feature. You can include this link to the [GitLab
Pages administration][gitlab-pages-admin] instructions.
[gitlab-pages-admin]: https://git.rcc.uchicago.edu/help/administration/pages/index.md
If GitLab Pages is enabled, the links to past versions of the R Markdown files
should work correctly (open an [Issue][workflowr-issues] if you are having
problems). However, there is currently no way to conveniently view the past
versions of the HTML files. This is because workflowr uses the free service
[raw.githack.com][] to host past HTML files, and it only supports the URLs
`raw.githubusercontent.com`, `gist.githubusercontent.com`, `bitbucket.org`, and
`gitlab.com`.
[raw.githack.com]: https://raw.githack.com/
[workflowr-issues]: https://github.com/workflowr/workflowr/issues
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-06-gitlab.Rmd
## ----chunk-options, include=FALSE---------------------------------------------
library("knitr")
opts_chunk$set(eval = FALSE)
## ----knit-expand-vignette-----------------------------------------------------
# vignette("knit_expand", package = "knitr")
/scratch/gouwar.j/cran-all/cranData/workflowr/inst/doc/wflow-07-common-code.R
---
title: "Sharing common code across analyses"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "Tim Trice, John Blischak"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Sharing common code across analyses}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r chunk-options, include=FALSE}
library("knitr")
opts_chunk$set(eval = FALSE)
```
During the course of a project, you may want to repeat a similar analysis
across multiple R Markdown files. To avoid duplicated code across your files
(which is difficult to update), there are multiple strategies you can use to
share common code:
1. To share R code like function definitions, you can put this code in an R
script and import it in each file with the function `source()`
1. To share common R Markdown text and code chunks, you can use [child documents](https://yihui.org/knitr/demo/child/)
1. To share common templates, you can use the function `knitr::knit_expand()`
Each of these strategies is detailed below, with a special emphasis on how to
use them within the workflowr framework. In order to source scripts or use child
documents, it is suggested you use the [here][] package, which helps to locate
the root directory of your project regardless of the directory your script or
analysis file is in, making sourcing documents cleaner.
[here]: https://cran.r-project.org/package=here
## Overview of directories
First, a quick overview of the directories in a workflowr project. This is
critical for importing these shared files.
In a standard R Markdown file, the code is executed in the directory where the R
Markdown file is saved. Thus any paths to files in the R Markdown file should be
relative to this directory. However, the directory where the code is executed,
referred to as the "knit directory" in the workflowr documentation, can be
configured. The default for a new workflowr project is to run the code in the
root of the workflowr project (this is defined in the file `_workflowr.yml`; see
`?wflow_html` for configuration details). Thus any filepaths should be relative
to the root of the project. As an example, if you have shared R functions
defined in the file `~/Desktop/myproject/code/common.R`, the relative filepath
from the root of the project directory would be `"code/common.R"`.
## Share R code with source()
If you have R code you want to re-use across multiple R Markdown files, the most
straightforward option is to save this code in an R script, e.g.
`code/functions.R`.
Then in each R Markdown file that needs to use the code defined in that file,
you can use `source()` to load it. If the code in your workflowr project is
executed in the root of the project directory (which is the default behavior for
new workflowr projects), then you would add the following chunk:
````
`r ''````{r shared-code}
source("code/functions.R")
```
````
On the other hand, if you have changed the value of `knit_root_dir` in the file
`_workflowr.yml`, you need to ensure that the filepath to the R script is
relative to this directory. For example, if you set `knit_root_dir: "analysis"`,
you would use this code chunk:
````
`r ''````{r shared-code}
source("../code/functions.R")
```
````
To avoid having to figure out the correct relative path (or having to update it
in the future if you were to change `knit_root_dir`), you can use `here::here()`
as it is always based off the project root. Additionally, it will help
readability when using child documents as discussed below.
````
`r ''````{r shared-code}
source(here::here("code/functions.R"))
```
````
## Share child documents with chunk option
To share text and code chunks across R Markdown files, you can use [child
documents](https://yihui.org/knitr/demo/child/), a feature of the [knitr][]
package.
[knitr]: https://cran.r-project.org/package=knitr
Here is an example of a simple R Markdown file that you can use to test this
feature. Note that it contains an H2 header, some regular text, and a code
chunk.
````
## Header in child document
Text in child document.
`r ''````{r child-code-chunk}
str(mtcars)
```
````
You can save this child document anywhere in the workflowr project with one
critical exception: it cannot be saved in the R Markdown directory (`analysis/`
by default) with the file extension `.Rmd` or `.rmd`. This is because workflowr
expects every R Markdown file in this directory to be a standalone analysis that
has a 1:1 correspondence with an HTML file in the website directory (`docs/` by
default). We recommend saving child documents in a subdirectory of the R
Markdown directory, e.g. `analysis/child/ex-child.Rmd`.
To include the content of the child document, you can reference it using
`here::here()` in your chunk options.
````
`r ''````{r parent, child = here::here("analysis/child/ex-child.Rmd")}
```
````
However, this fails if you wish to include plots in the code chunks of the child
documents. It will not generate an error, but the plot will be missing ^[The
reason for this is very technical and requires more understanding of how
workflowr is implemented than is necessary to use it effectively in the majority
of cases. Whenever workflowr builds an R Markdown file, it first copies it to a
temporary directory so that it can inject extra code chunks that implement some
of its reproducibility features. The figures in the child documents end up being
saved there and then lost.]. In a situation like this, you would want to
generate the plot within the parent R Markdown file or use
`knitr::knit_expand()` as described in the next section.
## Share templates with knit_expand()
If you need to pass parameters to the code in your child document, then you can
use `knitr::knit_expand()`. Also, this strategy has the added benefit that it
can handle plots in the child document. However, this requires setting
`knit_root_dir: "analysis"` in the file `_workflowr.yml` for plots to work
properly.
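In other words, the file `_workflowr.yml` should contain the line:
```
knit_root_dir: "analysis"
```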
Below is an example child document with one variable to be expanded: `{{title}}`
refers to a species in the iris data set. The value assigned will be used to
filter the iris data set and label the section, chunk, and plot. We will refer
to this file as `analysis/child/iris.Rmd`.
````
## {{title}}
`r ''````{r plot_{{title}}}
iris %>%
filter(Species == "{{title}}") %>%
ggplot() +
aes(x = Sepal.Length, y = Sepal.Width) +
geom_point() +
labs(title = "{{title}}")
```
````
To generate a plot using the species `"setosa"`, you can expand the child
document in a hidden code chunk:
````
`r ''````{r, include = FALSE}
src <- knitr::knit_expand(file = here::here("analysis/child/iris.Rmd"),
title = "setosa")
```
````
and then later knit it using an inline code expression^[Before calling
`knitr::knit()`, you'll need to load the dplyr and ggplot2 packages to run the
code in this example child document.]:
`` `r knitr::knit(text = unlist(src))` ``
The convenience of using `knitr::knit_expand()` gives you the flexibility to
generate multiple plots along with custom headers, figure labels, and more. For
example, if you want to generate a scatter plot for each Species in the `iris`
datasets, you can call `knitr::knit_expand()` within a `lapply()` or
`purrr::map()` call:
````
`r ''````{r, include = FALSE}
src <- lapply(
sort(unique(iris$Species)),
FUN = function(x) {
knitr::knit_expand(
file = here::here("analysis/child/iris.Rmd"),
title = x
)
}
)
```
````
This example code loops through each unique `iris$Species` and sends it to the
template as the variable `title`. `title` is inserted into the header, the chunk
label, the `dplyr::filter()`, and the title of the plot. This generates three
plots with custom plot titles and labels while keeping your analysis flow clean
and simple.
Remember to insert `knitr::knit(text = unlist(src))` in an inline R expression
as noted above to knit the code in the desired location of your main document.
Read the `knitr::knit_expand()` vignette for more information.
```{r knit-expand-vignette}
vignette("knit_expand", package = "knitr")
```
---
title: "Alternative strategies for deploying workflowr websites"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Alternative strategies for deploying workflowr websites}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
## Introduction
The [Getting Started vignette][vig-getting-started] provides instructions for
deploying the workflowr website using the service [GitHub Pages][gh-pages]
because it is quick and convenient. However, the static website created by
workflowr can be deployed using any strategy you like. Below are instructions
for deploying the workflowr website contributed by other workflowr users. If you
would like to contribute instructions for another deployment strategy, please
fork the [workflowr repository][workflowr] on GitHub and add your instructions
to this file. If you need any assistance with this, please
don't hesitate to open an [Issue][wflow-issues].
[gh-pages]: https://pages.github.com/
[vig-getting-started]: wflow-01-getting-started.html
[wflow-issues]: https://github.com/workflowr/workflowr/issues
[workflowr]: https://github.com/workflowr/workflowr
## Amazon S3 (password-protected)
Another way to privately share your workflowr site is by uploading it to [Amazon
S3][s3]. S3 is an object storage service for the Amazon cloud, and can be used
to host static websites. Basic HTTP authentication can be accomplished using
[CloudFront][cloudfront], Amazon's content delivery network, and
[Lambda@Edge][lambda], which enables the execution of serverless functions to
customize content delivered by the CDN. This [blog post][hackernoon] goes into
more detail about what that all means. A more detailed guide to setting up the
bucket is [here][kynatro]. Some templates for scripting the process are
[here][dumrauf].
Contributed by E. David Aja ([edavidaja][]).
[cloudfront]: https://aws.amazon.com/cloudfront/
[edavidaja]: https://github.com/edavidaja
[dumrauf]: https://github.com/dumrauf/serverless_static_website_with_basic_auth
[hackernoon]: https://hackernoon.com/serverless-password-protecting-a-static-website-in-an-aws-s3-bucket-bfaaa01b8666
[kynatro]: https://kynatro.com/blog/2018/01/03/a-step-by-step-guide-to-creating-a-password-protected-s3-bucket/
[lambda]: https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
[s3]: https://aws.amazon.com/s3/
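While the full CloudFront and Lambda@Edge setup is beyond a quick example, the
upload step itself is simple. As a minimal sketch with the AWS CLI, assuming the
bucket already exists (the bucket name below is a placeholder), the built
website in `docs/` can be synced to S3 like this:

```
# Upload the built website to the S3 bucket (bucket name is a placeholder)
aws s3 sync docs/ s3://my-workflowr-site/ --delete
```

The `--delete` flag removes files from the bucket that no longer exist in
`docs/`, keeping the deployed copy in sync with the local build.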
## Beaker Browser (secure sharing)
If your project contains sensitive data that prevents you from publicly sharing
the results, one alternative option is to self-host your workflowr website using
[Beaker Browser][beaker].
[Beaker Browser][beaker] allows website creation, cloning, modification, and
publishing locally. After the site is ready, hitting "share" produces a unique
[Dat project dat://][dat] hyperlink, for example:
dat://adef21aa8bbac5e93b0c20a97c6f57f93150cf4e7f5eb1eb522eb88e682309bc
This dat:// link can then be shared and the site opened *all the while being
hosted locally on the site producer's machine.* The particular example above is
a site, produced in RStudio using workflowr, with placeholder content and R code
chunks, compiled as usual.
Security for your site is achieved with site encryption inherent in the Dat
protocol (see [Security][dat-security] on the [datproject docs page][dat-docs]),
as well as the obscurity of the unique link. Beaker Browser saves your
individual project sites in the folder `~/Sites`.
To create a Beaker Browser version of your workflowr site:
1. [Install][beaker-install] Beaker Browser and run it.
1. Select "New Site" in the three-bar dropdown menu found to the right of the
"omnibar" for web link entry, and enter its Title and (optional) a Description
of the site. This creates a folder in the Beaker Browser `~/Sites` directory
named for your Title, for example, "placeholder_workflowr", and populates the
folder with a `dat.json` file.
1. In the main Beaker Browser pane, use "Add Files" or "Open Folder" to copy the
entire contents of the workflowr `docs/` folder to your new Beaker Browser site
folder (see Symlink Synchronization, below).
1. Once copied, the new site is ready to go. Pressing "Share" in the main Beaker
Browser pane reveals the unique dat:// link generated for your Beaker Browser
site. Sharing this link with anyone running Beaker Browser will allow them to
access your workflowr HTML files...*directly from your computer*.
Instead of having to manually copy your workflowr `docs/` directory to your
Beaker Browser site directory, you can create a symlink from your workflowr
`docs/` directory to the Beaker Browser site directory. The line below links the
`docs/` directory of a hypothetical "workflowr-project" saved in `~/github/` to
the hypothetical Beaker `placeholder_workflowr` subdirectory:
    ln -s ~/github/workflowr-project/docs ~/Sites/placeholder_workflowr
The direct-sharing nature of the above workflow means that the host computer
needs to be running for site access. Two alternatives recommended by Beaker
Browser developer [Paul Frazee][pfrazee] are [hashbase.io][] and the Beaker
Browser subproject [dathttpd][]. While hosting Beaker Browser sites is outside
of the scope of this direct sharing paradigm, each of these options has
strengths. The former, hashbase.io (free account required), is a web-hosted
central location for dat:// -linked content, removing the need for the host
computer to be running. The latter dathttpd example is an additional
server/self-hosting option that can be used if desired.
This solution was contributed by [Josh Johnson][johnsonlab]. For more details,
please read his [blog post][johnsonlab-blog] and the discussion in Issue
[#59][].
[#59]: https://github.com/workflowr/workflowr/issues/59
[beaker]: https://beakerbrowser.com/
[beaker-install]: https://beakerbrowser.com/install/
[dat]: https://dat.foundation
[dat-docs]: https://docs.datproject.org/
[dat-security]: https://docs.datproject.org/docs/security-faq
[dathttpd]: https://github.com/beakerbrowser/dathttpd
[hashbase.io]: https://hashbase.io
[johnsonlab]: https://github.com/johnsonlab
[johnsonlab-blog]: https://johnsonlab.github.io/blog-post-22/
[pfrazee]: https://github.com/pfrazee
## GitLab Pages
To deploy your workflowr website with [GitLab Pages][gitlab], you can use the
function `wflow_use_gitlab()`. You can choose if the site is public or private.
For more details, please see the dedicated vignette [Hosting workflowr websites
using GitLab](wflow-06-gitlab.html).
[gitlab]: https://docs.gitlab.com/ee/ci/yaml/README.html#pages
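As a minimal sketch (the username below is a placeholder), the setup from the R
console looks like this; see the dedicated vignette for the full details:

```r
library("workflowr")
# Configure the project to be hosted on GitLab Pages
wflow_use_gitlab("your-gitlab-username")
# Rebuild and commit the site so the new GitLab links are included, then push
wflow_publish(republish = TRUE)
wflow_git_push()
```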
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(eval = FALSE, fig.align = "center")
## ----update-packages, eval=FALSE----------------------------------------------
# update.packages()
## ----config-------------------------------------------------------------------
# library(workflowr)
# wflow_git_config(user.name = "First Last", user.email = "[email protected]")
## ----rstudio-create-project, eval=TRUE, echo=FALSE, out.width = "50%"---------
knitr::include_graphics("img/rstudio-create-project.png")
## ----rstudio-project-type, eval=TRUE, echo=FALSE, out.width = "50%"-----------
knitr::include_graphics("img/rstudio-project-type.png")
## ----rstudio-workflowr-template, eval=TRUE, echo=FALSE, out.width = "50%"-----
knitr::include_graphics("img/rstudio-workflowr-template.png")
## ----teeth, eval=TRUE---------------------------------------------------------
data("ToothGrowth")
head(ToothGrowth)
summary(ToothGrowth)
str(ToothGrowth)
## ----teeth-write--------------------------------------------------------------
# write.csv(ToothGrowth, file = "data/teeth.csv")
## -----------------------------------------------------------------------------
# write.csv(ToothGrowth, file = "C:/Users/GraceHopper/Documents/myproject/data/teeth.csv")
## -----------------------------------------------------------------------------
# write.csv(ToothGrowth, file = "data/teeth.csv")
## ----open-teeth---------------------------------------------------------------
# wflow_open("analysis/teeth.Rmd")
## ----test-boxplots, eval=TRUE, include=FALSE----------------------------------
data("ToothGrowth")
teeth <- ToothGrowth
boxplot(len ~ dose, data = teeth)
boxplot(len ~ supp, data = teeth)
boxplot(len ~ dose + supp, data = teeth)
## ----test-permute, eval=TRUE, include=FALSE-----------------------------------
# Observed difference in teeth length due to supplement method
mean(teeth$len[teeth$supp == "OJ"]) - mean(teeth$len[teeth$supp == "VC"])
# Permute the observations
supp_perm <- sample(teeth$supp)
# Calculate mean difference in permuted data
mean(teeth$len[supp_perm == "OJ"]) - mean(teeth$len[supp_perm == "VC"])
## ----workflowr-report-checks, eval=TRUE, echo=FALSE, out.width = "75%"--------
knitr::include_graphics("img/workflowr-report-checks.png")
## ----publish-teeth-growth-----------------------------------------------------
# wflow_publish("analysis/teeth.Rmd", message = "Analyze teeth growth")
## ----workflowr-past-versions-1, eval=TRUE, echo=FALSE, out.width = "75%"------
knitr::include_graphics("img/workflowr-past-versions-1.png")
## ----publish-other-files------------------------------------------------------
# wflow_publish(c("analysis/*Rmd", "data/teeth.csv"), message = "Publish data and other files")
## ----wflow-use-github---------------------------------------------------------
# wflow_use_github("your-github-username")
## ----republish----------------------------------------------------------------
# wflow_publish(republish = TRUE)
## ----wflow-git-push-----------------------------------------------------------
# wflow_git_push()
## ----github-pages-settings, eval=TRUE, echo=FALSE, out.width = "75%"----------
knitr::include_graphics("img/github-pages-settings.png")
## ----workflowr-past-versions-2, eval=TRUE, echo=FALSE, out.width = "75%"------
knitr::include_graphics("img/workflowr-past-versions-2.png")
## ----github-new-repo, eval=TRUE, echo=FALSE, out.width="25%"------------------
knitr::include_graphics("img/github-new-repo.png")
---
title: "Reproducible research with workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak and Matthew Stephens"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 2
rmarkdown::pdf_document: default
vignette: >
%\VignetteIndexEntry{Reproducible research with workflowr}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(eval = FALSE, fig.align = "center")
```
## Introduction
The [workflowr][] R package makes it easier for you to organize, reproduce, and
share your data analyses. This short tutorial will introduce you to the
workflowr framework. You will create a workflowr project that implements a small
data analysis in R, and by the end you will have a working website that you can
use to share your work. If you are completing this tutorial as part of a live
workshop, please follow the [setup instructions](#setup) in the next section
prior to arriving.
Workflowr combines literate programming with [R Markdown][rmd] and version
control with [Git][git] to generate a website containing time-stamped,
versioned, and documented results. By the end of this tutorial, you will have a
website hosted on [GitHub Pages][gh-pages] that contains the results of a
reproducible statistical analysis.
[gh-pages]: https://pages.github.com/
[git]: https://git-scm.com/
[rmd]: https://rmarkdown.rstudio.com/
[workflowr]: https://github.com/workflowr/workflowr
## Setup
1. Install [R][r]
1. Install [RStudio][rstudio]
1. Install workflowr from [CRAN][cran]:
```r
install.packages("workflowr")
```
1. Create an account on [GitHub][gh]
To minimize potential issues with your computational
setup, you are encouraged to update your version of RStudio (`Help` -> `Check
for Updates`) and update your R packages:
```{r update-packages, eval=FALSE}
update.packages()
```
If you do encounter any issues during the tutorial, consult the
[Troubleshooting](#troubleshooting) section for solutions to the most common
problems.
[cran]: https://cran.r-project.org/package=workflowr
[gh]: https://github.com
[r]: https://cran.r-project.org
[rstudio]: https://posit.co/download/rstudio-desktop/
## Organize your research
To help you stay organized, workflowr creates a project directory with the
necessary configuration files as well as subdirectories for saving data and
other project files. This tutorial uses the [RStudio project
template][rstudio-proj-template] for workflowr, but note that the same can be
achieved via the function `wflow_start()`.
[rstudio-proj-template]: https://rstudio.github.io/rstudio-extensions/rstudio_project_templates.html
To start your workflowr project, follow these steps:
1. Open RStudio.
1. In the R console, run `wflow_git_config()` to register your name and email
with Git. This only has to be done once per computer. If you've used Git
on this machine before, you can skip this step. For a better GitHub experience,
use the same email you used to register your GitHub account.
```{r config}
library(workflowr)
wflow_git_config(user.name = "First Last", user.email = "[email protected]")
```
1. In the menu bar, choose `File` -> `New Project`.
1. Choose `New Directory` and then scroll down the list of project types to
select `workflowr project`. If you don't see the workflowr project template, go
to [Troubleshooting](#missing-template).
```{r rstudio-create-project, eval=TRUE, echo=FALSE, out.width = "50%"}
knitr::include_graphics("img/rstudio-create-project.png")
```
```{r rstudio-project-type, eval=TRUE, echo=FALSE, out.width = "50%"}
knitr::include_graphics("img/rstudio-project-type.png")
```
1. Type `myproject` (or a more inventive name if you prefer) as the directory
name, choose where to save it on your computer, and click `Create Project`.
```{r rstudio-workflowr-template, eval=TRUE, echo=FALSE, out.width = "50%"}
knitr::include_graphics("img/rstudio-workflowr-template.png")
```
RStudio will create a workflowr project `myproject` and open the project in
RStudio. Under the hood, RStudio is running the workflowr command
`wflow_start()`, so if you prefer to start a new project from the console
instead of using the RStudio menus, you can use `wflow_start()` directly.
Take a look at the workflowr directory structure in the Files pane, which should
be something like this:
```
myproject/
|-- .gitignore
|-- .Rprofile
|-- _workflowr.yml
|-- analysis/
| |-- about.Rmd
| |-- index.Rmd
| |-- license.Rmd
| |-- _site.yml
|-- code/
| |-- README.md
|-- data/
| |-- README.md
|-- docs/
|-- myproject.Rproj
|-- output/
| |-- README.md
|-- README.md
```
The most important directory for you to pay attention to now is the `analysis/`
directory. This is where you should store all your analyses as R Markdown (Rmd)
files. Other directories created for your convenience include `data/` for
storing data, and `code/` for storing long-running or supplemental code you
don't want to include in an Rmd file. Note that the `docs/` directory is where
the website HTML files will be created and stored by workflowr, and should not
be edited by the user.
## Build your website
The files and directories created by workflowr are already almost a website! The
only thing missing are the crucial `html` files. Take a look in the `docs/`
directory where the html files for your website need to be created... notice
that it is sadly empty.
In workflowr the html files for your website are created in the `docs/`
directory by knitting (or "building") the `.Rmd` files in the `analysis/`
directory. When you knit or build those files -- either by using the knitr
button, or by typing `wflow_build()` in the console -- the resulting html files
are saved in the docs directory.
The `docs/` directory is currently empty because we haven't run any of the
`.Rmd` files yet. So now let's run these files. We will do it both ways, using
both the knit button and using `wflow_build()`:
1. Open the file `analysis/index.Rmd` and knit it now. You can open it by using
the files pane, or by typing `wflow_open("analysis/index.Rmd")` in the R
console. You knit the file by pressing the knit button in RStudio.
1. There are also two other `.Rmd` files in the `analysis` directory. Build
these by typing `wflow_build()` in the R console. This will build all the R
Markdown files in `analysis/`, and save the resulting html files in `docs/`.
(Note that it won't re-build `index.Rmd` because you have not changed it since
running it before, so it does not need to be re-built.^[The default behavior when
`wflow_build()` is run without any arguments is to build any R Markdown file
that has been edited since its corresponding HTML file was last built.])
Ignore the warnings in the workflowr report for now; we will return to these
later.
## Collect some data!
To do an interesting analysis you will need some data. Here, instead of doing a
time-consuming experiment, we will use a convenient built-in data set from R.
While not the most realistic, this avoids any issues with downloading data from
the internet and saving it correctly. The data set `ToothGrowth` contains the
length of the teeth for 60 guinea pigs given 3 different doses of vitamin C
either via orange juice (`OJ`) or directly with ascorbic acid (`VC`).
1. To get a quick sense of the data set, run the following in the R console.
```{r teeth, eval=TRUE}
data("ToothGrowth")
head(ToothGrowth)
summary(ToothGrowth)
str(ToothGrowth)
```
1. To mimic a real project that will have external data files, save the
`ToothGrowth` data set to the `data/` subdirectory using `write.csv()`.
```{r teeth-write}
write.csv(ToothGrowth, file = "data/teeth.csv")
```
## Understanding paths
Look at that last line of code. Where will the file be saved on your computer?
To understand this very important issue you need to understand the idea of
"relative paths" and "working directory".
Before explaining these ideas, let us consider a different way we could have
saved the file. Suppose we had typed
```{r}
write.csv(ToothGrowth, file = "C:/Users/GraceHopper/Documents/myproject/data/teeth.csv")
```
Then it is clear exactly where on the computer we want the file to be saved.
Specifying the file location this very explicit way is called specifying the
"full path" to the file. It is conceptually simple. But it is also a pain for
many reasons -- it is more typing, and (more importantly) if we move the project
to a different computer it will likely no longer work because the paths will
change!
Instead we typed
```{r}
write.csv(ToothGrowth, file = "data/teeth.csv")
```
Specifying the file location this way is called specifying the "relative path"
because it specifies the path to the file *relative to the current working
directory*. This means the full path to the file will be obtained by appending
the specified relative path to the (full) path of the current working directory.
For example, if the current working directory is
`C:/Users/GraceHopper/Documents/myproject/` then the file will be saved to
`C:/Users/GraceHopper/Documents/myproject/data/teeth.csv`. If the current
working directory is `C:/Users/Matt124/myproject` then the file will be saved to
`C:/Users/Matt124/myproject/data/teeth.csv`.
So, what is your current working directory? When you start or open a workflowr
project in RStudio (e.g. by clicking on `myproject.Rproj`) RStudio will set the
working directory to the location of the workflowr project on your computer. So
your current working directory should be the location you chose when you started
your workflowr project. You can check this by typing `getwd()` in the R console.
Notice how, by using relative paths, the code used here works for you whatever
operating system you are on and however your computer is set up! *You should
always use relative paths where possible because it can help make your code
easier for others to run and easier for you to run on different computers and
different operating systems.*
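If you want to check exactly where a relative path will end up, you can ask R
directly. This is only an illustrative check, not a required step:

```r
# The current working directory (for a workflowr project, the project root)
getwd()
# The full path that the relative path "data/teeth.csv" resolves to
file.path(getwd(), "data", "teeth.csv")
```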
## Create a new analysis
So, now we have some data, we are ready to perform a small analysis. To start a
new analysis in RStudio, use the `wflow_open()` command.
1. In the R console, open a new R Markdown file by typing
```{r open-teeth}
wflow_open("analysis/teeth.Rmd")
```
Notice that we again used a relative path! Relative paths are good for
opening files as well as saving files. This command should create a new
`.Rmd` file in the `analysis` subdirectory of your workflowr project, and
open it for editing in RStudio. The file looks pretty much like other `.Rmd`
files, but in the header note that workflowr provides its own custom output
format, `workflowr::wflow_html`. The other minor difference is that
`wflow_open()` adds the editor option `chunk_output_type: console`, which
causes the code to be executed in the R console instead of within the
document. If you prefer the results of the code chunks be embedded inside
the document while you perform the analysis, you can delete those lines
(note that this has no effect on the final results, only on the display
within RStudio).
1. Copy the code chunk below and paste it at the bottom of the file `teeth.Rmd`.
The code imports the data set from the file you previously created^[Note that
the default working directory for a workflowr project is the root of the
project. Hence the relative path is `data/teeth.csv`. The working directory can
be changed via the workflowr option `knit_root_dir` in `_workflowr.yml`. See
`?wflow_html` for more details.]. Execute the code in the R console by clicking
on the Run button or using the shortcut `Ctrl`/`CMD`+`Enter`.
````
```{r import-teeth}`r ''`
teeth <- read.csv("data/teeth.csv", row.names = 1)
head(teeth)
```
````
Note: if you copy and paste this chunk, make sure to remove any spaces
before each of the backticks (` ``` `) so that they will be correctly
recognized as indicating the beginning and end of a code chunk.
1. Next create some boxplots to explore the data. Copy the code chunk below and
paste it at the bottom of the file `teeth.Rmd`. Execute the code to create the
plots.
````
```{r boxplots}`r ''`
boxplot(len ~ dose, data = teeth)
boxplot(len ~ supp, data = teeth)
boxplot(len ~ dose + supp, data = teeth)
```
````
```{r test-boxplots, eval=TRUE, include=FALSE}
data("ToothGrowth")
teeth <- ToothGrowth
boxplot(len ~ dose, data = teeth)
boxplot(len ~ supp, data = teeth)
boxplot(len ~ dose + supp, data = teeth)
```
1. To compare the tooth length of the guinea pigs given orange juice versus
those given vitamin C, you could perform a [permutation-based statistical
test][permutation]. This would involve comparing the observed difference in
teeth length due to the supplement method to the observed differences calculated
from random permutations of the data. The basic idea is that if the observed
difference is an outlier compared to the differences generated after permuting
the supplement method column, it is more likely to be a true signal not due to
chance alone. We are not going to perform the full permutation test here, but we
will just demonstrate the idea of a permutation. Copy the code chunk below,
paste it at the bottom of the file `teeth.Rmd`, and execute it. Try executing
it several times -- does it give you a different answer each time?
````
```{r permute}`r ''`
# Observed difference in teeth length due to supplement method
mean(teeth$len[teeth$supp == "OJ"]) - mean(teeth$len[teeth$supp == "VC"])
# Permute the observations
supp_perm <- sample(teeth$supp)
# Calculate mean difference in permuted data
mean(teeth$len[supp_perm == "OJ"]) - mean(teeth$len[supp_perm == "VC"])
```
````
[permutation]: https://en.wikipedia.org/wiki/Resampling_%28statistics%29#Permutation_tests
```{r test-permute, eval=TRUE, include=FALSE}
# Observed difference in teeth length due to supplement method
mean(teeth$len[teeth$supp == "OJ"]) - mean(teeth$len[teeth$supp == "VC"])
# Permute the observations
supp_perm <- sample(teeth$supp)
# Calculate mean difference in permuted data
mean(teeth$len[supp_perm == "OJ"]) - mean(teeth$len[supp_perm == "VC"])
```
1. In the R console, run `wflow_build()`. Note the value of the observed
difference in the permuted data.
1. In RStudio, click on the Knit button. Has the value of the observed
difference in the permuted data changed? It should be identical. This is because
workflowr always sets the same seed prior to running the analysis.^[Note that
everyone in the workshop will have the same result because by default workflowr
uses a seed that is the date the project was created as YYYYMMDD. You can change
this by editing the file `_workflowr.yml`.] To better understand this behavior
as well as the other reproducibility safeguards and checks that workflowr
performs for each analysis, click on the workflowr button at the top and select
the "Checks" tab.
```{r workflowr-report-checks, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/workflowr-report-checks.png")
```
You can see the value of the seed that was set using `set.seed()` before the
code was executed.
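If you ever want a different seed, you can set it in the file `_workflowr.yml`;
the value below is just an arbitrary example:

```
seed: 20200401
```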
## Publish your analysis!
You should also notice that workflowr is still giving you a warning: it says you
have "uncommitted changes" in your .Rmd file. The term "commit" is a term from
version control: it basically means to save a snapshot of the current version of
a file so that you could return to it later if you wanted (even if you changed
or deleted the file in between).
So, workflowr is warning you that you haven't saved a snapshot of your current
analysis. If this analysis is something you are currently (somewhat) happy with
then you should save a snapshot that will allow you to go back to it at any time
in the future (even if you change the .Rmd file between now and then). In
workflowr we use the term "publish" for this process: any analysis that you
"publish" will be one that you can go back to in the future. You will see that
it is pretty easy to publish an analysis so you should do it whenever you create
a first working version, and whenever you make a change that you might want to
keep. Don't wait to think that it is your "final" version before publishing, or
you will never publish!
1. Publish your analysis by typing:
```{r publish-teeth-growth}
wflow_publish("analysis/teeth.Rmd", message = "Analyze teeth growth")
```
The function `wflow_publish()` performs three steps: 1) commits (snapshots)
the .Rmd files, 2) rebuilds the Rmd files to create the HTML files and
figures, and 3) commits the HTML and figure files. This guarantees that the
results in each HTML file are always generated from an exact, known version
of the Rmd file (you can see this version embedded in the workflowr report).
An informative message will help you find a particular version later.
1. Open the workflowr report of `teeth.html` by clicking on the button at the
top of the page. Navigate to the tab "Past versions". Note that the record
of all past versions will be saved here. Once the project has been added to
GitHub (you will do this in the next section), the "Past versions" tab will
include hyperlinks for convenient viewing of the past versions of the Rmd and
HTML files.
```{r workflowr-past-versions-1, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/workflowr-past-versions-1.png")
```
## Checking your status
When you are working on several analyses over a period of time it can be
difficult to keep track of which ones need attention, etc. You can use
`wflow_status()` to check on all your files.
1. In the R console, run `wflow_status()`. This will show you the status of each
of the Rmd files in your workflowr project. You should see that `teeth.Rmd` has
status "Published" because you just published it. But the other `.Rmd` files
have status "Unpublished" because you haven't published them yet. You will also
notice a comment that the file `data/teeth.csv` is "untracked". This means that
no snapshot of the data file has been saved, which is risky because our
analyses obviously depend on the version of the data we use.
1. In the R console, run the command below to "publish" these other files ^[The
command uses the wildcard character `*` to match all the Rmd files in
`analysis/`. If this fails on your computer, try running the more verbose
command: `wflow_publish(c("analysis/index.Rmd", "data/teeth.csv"), message =
"Publish data and other files")`].
```{r publish-other-files}
wflow_publish(c("analysis/*Rmd", "data/teeth.csv"), message = "Publish data and other files")
```
1. Open the HTML files in the `docs/` directory and check that they now all
have a green light and no warnings.
1. And run `wflow_status()` again to confirm all is OK. Everything is published!
## Share your results
So, now you have a website, with an analysis in it. But it is only on your
computer, not the internet. To share your website with the world we will use the
free service GitHub Pages.
1. In the R console, run the function `wflow_use_github()`. The only required
argument is your GitHub username. The name of the repository will automatically
be named the same as the directory containing the workflowr project, in this
case "myproject".
```{r wflow-use-github}
wflow_use_github("your-github-username")
```
When the function asks if you would like it to create the repository on
GitHub for you, enter `1`. This should open your web browser so that you can
authenticate with GitHub and then give permission for workflowr to create
the repository on your behalf. Additionally, this function connects to your
local repository with the remote GitHub repository and inserts a link to the
GitHub repository into the navigation bar. If this fails to create a GitHub
repository, go to [Troubleshooting](#no-repo).
1. To update your workflowr website to use GitHub links to past versions of the
files (as well as update the navigation bar to include the GitHub link),
republish the files. (You will not have to do this again in the future.)
```{r republish}
wflow_publish(republish = TRUE)
```
1. To send your project to GitHub, run `wflow_git_push()`. This will prompt you
for your GitHub username and password. If this fails, go to
[Troubleshooting](#failed-push).
```{r wflow-git-push}
wflow_git_push()
```
1. On GitHub, navigate to the Settings tab of your GitHub repository^[If your
GitHub repository wasn't automatically opened by `wflow_git_push()`, you can
manually enter the URL into the browser:
`https://github.com/username/myproject`.]. Scroll down to the section "GitHub
Pages". For Source choose "master branch /docs folder". After it updates, scroll
back down and click on the URL. If the URL doesn't display your website, go to
[Troubleshooting](#no-gh-pages).
```{r github-pages-settings, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/github-pages-settings.png")
```
## Index your new analysis
Unfortunately your home page is not very inspiring. Also, there is no easy way
to find that nice analysis you did! A great way to keep track of analyses
and make them easy to find is to keep an index on your website homepage. The
homepage is created by `analysis/index.Rmd`, so we are now going to edit this
file to add a link to our new analysis.
1. Open the file `analysis/index.Rmd`. You can open it from the Files pane or
run `wflow_open("analysis/index.Rmd")`.
1. Copy the line below and paste it at the bottom of the file
`analysis/index.Rmd`. This text uses "markdown" syntax to create a hyperlink to
the tooth analysis. The text between the square brackets is displayed on the
webpage, and the text in parentheses is the relative path to the teeth webpage.
Note that you don't need to include the subdirectory `docs/` because
`index.html` and `teeth.html` are both already in `docs/`. (In an html file
relative paths are specified relative to the current page which in this case
will be `index.html`.) Also note that you need to use the file extension `.html`
since that is the file that needs to be opened by the web browser.
```
* [Teeth growth analysis](teeth.html)
```
1. Maybe you would like to write a short introductory message in your index
file e.g. "Welcome to my first workflowr website"!
1. You might also want to add a bit more detail on what the tooth growth
analysis did -- a little detail in your index can be really helpful as the
project grows.
1. Run `wflow_build()` and then confirm that clicking on the link "Teeth growth
analysis" takes you to your teeth analysis page.
1. Run `wflow_publish("analysis/index.Rmd")` to publish this new index file.
1. Run `wflow_status()` to check everything is OK.
1. Run `wflow_git_push()` to push the changes to GitHub.
1. Now go to your GitHub page again, and check out your website! (It can take a
couple of minutes to refresh after pushing, so you may need to be patient).
Navigate to the tooth analysis. Click on the links in the "Past versions" tab to
see the past results. Click on the HTML hyperlink to view the past version of
the HTML file. Click on the Rmd hyperlink to view the past version of the Rmd
file on GitHub. Enjoy!
```{r workflowr-past-versions-2, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/workflowr-past-versions-2.png")
```
## Conclusion
You have successfully created and shared a reproducible research website. The
key commands are a pretty short list: `wflow_build()`, `wflow_publish()`,
`wflow_status()`, and `wflow_git_push()`. Using the same workflowr commands, you
can do the same for one of your own research projects and share it with
collaborators and colleagues.
To learn more about workflowr, you can read the following vignettes:
* [Customize your research website](wflow-02-customization.html)
* [Migrating an existing project to use workflowr](wflow-03-migrating.html)
* [How the workflowr package works](wflow-04-how-it-works.html)
* [Frequently asked questions](wflow-05-faq.html)
* [Hosting workflowr websites using GitLab](wflow-06-gitlab.html)
* [Sharing common code across analyses](wflow-07-common-code.html)
* [Alternative strategies for deploying workflowr websites](wflow-08-deploy.html)
* [Using large data files with workflowr](wflow-10-data.html)
## Troubleshooting
### I don't see the workflowr project as an available RStudio Project Type. {#missing-template}
If you just installed workflowr, close and re-open RStudio. Also, make sure you
scroll down to the bottom of the list.
### The GitHub repository wasn't created automatically by `wflow_use_github()`. {#no-repo}
If `wflow_use_github()` failed unexpectedly when creating the GitHub repository,
or if you declined by entering `n`, you can manually create the repository on
GitHub. After logging in to GitHub, click on the "+" in the top right of the
page. Choose "New repository". For the repository name, type `myproject`. Do not
change any of the other settings. Click on the green button "Create repository".
Once that is completed, you can return to the next step in the tutorial.
```{r github-new-repo, eval=TRUE, echo=FALSE, out.width="25%"}
knitr::include_graphics("img/github-new-repo.png")
```
### I wasn't able to push to GitHub with `wflow_git_push()`. {#failed-push}
Unfortunately this function has a high failure rate because it relies on the
correct configuration of various system software dependencies. If this fails,
you can push to Git using another technique, but this will require that you have
previously installed Git on your computer. For example, you can use the RStudio
Git pane (click on the green arrow that says "Push"). Alternatively, you can
directly use Git by running `git push` in the terminal.
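For example, from the root of the project in a terminal (this tutorial uses the
`master` branch; newer repositories may use `main` instead):

```
git push origin master
```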
### My website isn't displaying after I activated GitHub Pages. {#no-gh-pages}
It is not uncommon for there to be a short delay before your website is
available. One trick to try is to specify the exact page that you want at the
end of the URL, e.g. add `/index.html` to the end of the URL.
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(eval = FALSE, fig.align = "center")
---
title: "Using large data files with workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 2
rmarkdown::pdf_document: default
vignette: >
%\VignetteIndexEntry{Using large data files with workflowr}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(eval = FALSE, fig.align = "center")
```
## Introduction
Workflowr provides many features to track the progress of your data analysis
project and make it easier to reproduce both the current version as well as
previous versions of the project. However, this is only possible if the data
files from previous versions can also be restored. In other words, even if you
can obtain the code from six months ago, if you can't obtain the data from six
months ago, you won't be able to reproduce your previous analysis.
Unfortunately, if you have large data files, you can't simply commit them to the
Git repository along with the code. The max file size able to be pushed to
GitHub is [100 MB][100mb], and this is in general a good practice to follow no
matter what Git hosting service you are using. Large files will make each push
and pull take much longer and increase the risk of the download timing out. This
vignette discusses various strategies for versioning your large data files.
[100mb]: https://help.github.com/en/github/managing-large-files/conditions-for-large-files
## Option 0: Reconsider versioning your large data files
Before considering any of the options below, you need to reconsider if this is
even necessary for your project. And if it is, which data files need to be
versioned. Specifically, large raw data files that are never modified do not
need to be versioned. Instead, you could follow these steps:
1. Upload the files to an online data repository, a private FTP server, etc.
1. Add a script to your workflowr project that can download all the files (see
the sketch after this list)
1. Include the instructions in your README and your workflowr website that
explain how to download the files
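A minimal sketch of such a download script, saved for example as
`code/download-data.R` (the URLs below are placeholders for wherever the raw
files are deposited), could look like this:

```r
# code/download-data.R: download the raw data files (placeholder URLs)
urls <- c(
  "https://example.com/myproject/sample1.fastq.gz",
  "https://example.com/myproject/sample2.fastq.gz"
)
for (u in urls) {
  download.file(u, destfile = file.path("data", basename(u)), mode = "wb")
}
```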
For example, an [RNA sequencing][rna-seq] project will produce [FASTQ][fastq]
files that are large and won't be modified. Instead of committing these files to
the Git repository, they should instead be uploaded to [GEO][geo]/[SRA][sra].
[fastq]: https://en.wikipedia.org/wiki/FASTQ_format
[geo]: https://www.ncbi.nlm.nih.gov/geo/
[rna-seq]: https://en.wikipedia.org/wiki/RNA-Seq
[sra]: https://www.ncbi.nlm.nih.gov/sra
## Option 1: Record metadata
If your large data files are modified throughout the project, one option would
be to record metadata about the data files, save it in a plain text file, and
then commit the plain text file to the Git repository. For example, you could
record the modification date, file size, [MD5 checksum][md5], number of rows,
number of columns, column means, etc.
[md5]: https://en.wikipedia.org/wiki/MD5
For example, if your data file contains observational measurements from a remote
sensor, you could record the date of the last observation and commit this
information. Then if you need to reproduce an analysis from six months ago, you
could recreate the previous version of the data file by filtering on the date
column.
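As a rough sketch (the file names below are placeholders), the metadata could be
collected and written out with a few lines of base R:

```r
# Record metadata about a large data file (file names are placeholders)
data_file <- "data/sensor-measurements.csv"
metadata <- data.frame(
  file = data_file,
  modified = file.mtime(data_file),
  size_bytes = file.size(data_file),
  md5 = unname(tools::md5sum(data_file)),
  # Reading the file just to count rows may be slow for very large files
  n_rows = nrow(read.csv(data_file))
)
write.csv(metadata, "data/sensor-measurements-metadata.csv", row.names = FALSE)
```

The small metadata file can then be committed to the Git repository while the
large data file itself stays out of it.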
## Option 2: Use Git LFS (Large File Storage)
If you are comfortable using Git in the terminal, a good option is [Git
LFS][lfs]. It is an extension to Git that adds extra functionality to the
standard Git commands. Thus it is completely compatible with workflowr.
Instead of committing the large file to the Git repository, it instead commits a
plain text file containing a unique hash. It then uploads the large file to a
remote server. If you checkout a previous version of the code, it will use the
unique hash in the file to download the previous version of the large data file
from the server.
Git LFS is [integrated into GitHub][bandwidth]. However, a free account is only
allotted 1 GB of free storage and 1 GB a month of free bandwidth. Thus you may
have to upgrade to a paid GitHub account if you need to version lots of large
data files.
See the [Git LFS][lfs] website to download the software and set it up to track
your large data files.
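After installing the extension, the basic setup run from the root of the
project looks roughly like this (the file pattern is only an example):

```
git lfs install
git lfs track "*.rds"
git add .gitattributes
git commit -m "Track rds files with Git LFS"
```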
Note that for workflowr you can't use Git LFS with any of the website files in
`docs/`. [GitHub Pages][gh-pages] serves the website using the exact versions of
the files in that directory on GitHub. In other words, it won't pull the large
data files from the LFS server. Therefore everything will look fine on your
local machine, but break once pushed to GitHub.
As an example of a workflowr project that uses Git LFS, see the GitHub
repository [singlecell-qtl][scqtl]. Note that the large data files, e.g.
[`data/eset/02192018.rds`][eset], contain the phrase "Stored with Git LFS". If
you download the repository with `git clone`, the large data files will only
contain the unique hashes. See the [contributing instructions][contributing] for
how to use Git LFS to download the latest version of the large data files.
[bandwidth]: https://help.github.com/en/github/managing-large-files/about-storage-and-bandwidth-usage
[contributing]: https://jdblischak.github.io/singlecell-qtl/contributing.html
[eset]: https://github.com/jdblischak/singlecell-qtl/blob/master/data/eset/02192018.rds
[gh-pages]: https://pages.github.com/
[lfs]: https://git-lfs.com/
[scqtl]: https://github.com/jdblischak/singlecell-qtl
## Option 3: Use piggyback
An alternative option to Git LFS is the R package [piggyback][]. Its main
advantages are that it doesn't require paying to upgrade your GitHub account or
configuring Git. Instead, it uses R functions to upload large data files to
[releases][] on your GitHub repository. The main disadvantage, especially for
workflowr, is that it isn't integrated with Git. Therefore you will have to
manually version the large data files by uploading them via piggyback, and
recording the release version in a file in the workflowr project. This option is
recommended if you anticipate substantial, but infrequent, changes to your large
data files.
[piggyback]: https://cran.r-project.org/package=piggyback
[releases]: https://help.github.com/en/github/administering-a-repository/about-releases
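As a rough sketch (the repository, tag, and file names below are placeholders),
uploading a file to an existing GitHub release and later retrieving that exact
version might look like this:

```r
library("piggyback")
# Upload a large data file to an existing release of the GitHub repository
pb_upload("data/large-dataset.rds", repo = "username/myproject", tag = "v0.0.1")
# Later, or on another machine, download that exact version
pb_download("large-dataset.rds", repo = "username/myproject", tag = "v0.0.1",
            dest = "data")
```

Recording the tag (here `"v0.0.1"`) in a file committed to the workflowr project
is what lets you map a past analysis to the version of the data it used.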
## Option 4: Use a database
Importing large amounts of data into an R session can drastically degrade R's
performance or even cause it to crash. If you have a large amount of data stored
in one or more tabular files, but only need to access a subset at a time, you
should consider converting your large data files into a single database. Then
you can query the database from R to obtain a given subset of the data needed
for a particular analysis. Not only is this memory efficient, but you will
benefit from the improved organization of your project's data. See the CRAN Task
View on [Databases][ctv-databases] for resources for interacting with databases
with R.
[ctv-databases]: https://cran.r-project.org/view=Databases
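For example, a minimal sketch using the DBI and RSQLite packages (the database
file, table, and column below are placeholders) could look like this:

```r
library("DBI")
# Connect to a project database stored as a single file (placeholder name)
con <- dbConnect(RSQLite::SQLite(), "data/project.sqlite")
# Pull only the subset of rows needed for this particular analysis
measurements_2020 <- dbGetQuery(con, "SELECT * FROM measurements WHERE year = 2020")
dbDisconnect(con)
```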
---
title: "Getting started with workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Getting started with workflowr}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r decide-to-execute, cache=FALSE, echo=FALSE}
library("knitr")
# The code in this vignette requires a functional Git setup. If a workflowr user
# has a .git directory upstream of R's temporary directory, then wflow_start will
# fail. If this situation is detected, the code is not evaluated.
if (git2r::in_repository(tempdir())) {
opts_chunk$set(eval = FALSE)
warning(workflowr:::wrap(
"Because you have a Git repository upstream of R's temporary directory,
none of the code below was executed. Please refer to the online
documentation to see the output:
https://workflowr.github.io/workflowr/articles/wflow-01-getting-started.html
\n\nYou should consider removing the directory since it was likely created
in error: ",
workflowr:::git2r_slot(git2r::repository(tempdir(), discover = TRUE), "path")))
}
# The code in this vignette requires pandoc. Not every CRAN server has pandoc
# installed.
if (!rmarkdown::pandoc_available()) {
opts_chunk$set(eval = FALSE)
message(workflowr:::wrap(
"The code chunks below were not executed because this machine does not
have pandoc installed."
))
}
```
```{r chunk-options, cache=FALSE, include=FALSE}
.tmp <- tempfile("wflow-01-getting-started-")
.tmp <- workflowr:::absolute(.tmp)
.project <- file.path(.tmp, "myproject")
fs::dir_create(.project)
opts_knit$set(root.dir = .project)
opts_chunk$set(collapse = TRUE)
```
The workflowr R package helps scientists organize their research in a way that
promotes effective project management, reproducibility, collaboration, and
sharing of results. Workflowr combines literate programming (knitr and
rmarkdown) and version control (Git, via git2r) to generate a website containing
time-stamped, versioned, and documented results. Any R user can quickly and
easily adopt workflowr.
This tutorial assumes you have already followed the [installation
instructions](https://workflowr.github.io/workflowr/index.html#installation).
Specifically, you need to have R, pandoc (or RStudio), and workflowr installed
on your computer. Furthermore, you need an account on [GitHub][gh] or
[GitLab][gl].
[gh]: https://github.com
[gl]: https://about.gitlab.com/
## Overview
A workflowr project has two key components:
1. An R Markdown-based website. This consists of a configuration file
(`_site.yml`), a collection of R Markdown files, and their
corresponding HTML files.
2. A Git repository. Git is a [version control system][vcs] that helps track
code development^[There are many ways to use Git: in the Terminal, in the RStudio
Git pane, or another Git graphical user interface (GUI) (see
[here](https://git-scm.com/download/gui/linux) for GUI options).]. Workflowr is
able to run the basic Git commands, so there is no need to install Git prior to
using workflowr.
One of the main goals of workflowr is to help make your research more
transparent and reproducible. This is achieved by displaying multiple
"reproducibility checks" at the top of each analysis, including the unique
identifier that Git assigns a snapshot of your code (or "commit" as Git calls
it), so you always know which version of the code produced the results.
[vcs]: https://en.wikipedia.org/wiki/Version_control
## Start the project
To start a new project, open R (or RStudio) and load the workflowr package (note
that all the code in this vignette should be run directly in the R console, i.e.
do **not** try to run workflowr functions inside of R Markdown documents).
```{r load-workflowr}
library("workflowr")
```
If you have never created a Git repository on your computer before, you need to
run the following command to tell Git your name and email. Git uses this
information to assign the changes you make to the code to you (analogous to how
Track Changes in a Microsoft Office Word document assigns your changes to you).
You do not need to use the exact same name and email as you used for your
account on GitHub or GitLab. Also, you only need to run this command once per
computer, and all subsequent workflowr projects will use this information (you
can also update it at any time by re-running the command with different input).
```{r wflow-git-config, eval=FALSE}
# Replace the example text with your information
wflow_git_config(user.name = "Your Name", user.email = "email@domain")
```
Now you are ready to start your first workflowr project!
`wflow_start("myproject")` creates a directory called `myproject/` that contains
all the files to get started. It also changes the working directory to
`myproject/`^[If you're using RStudio, you can alternatively create a new
workflowr project using the RStudio project template. Go to `File` -> `New
Project...` and select `workflowr project` from the list of project types. In
the future you can return to your project by choosing `Open Project...` and
selecting the file `myproject.Rproj`. This will set the correct working
directory in the R console, switch the file navigator to the project, and
configure the Git pane.] and initializes a Git repository with the initial
commit already made.
```{r wflow-start, eval=FALSE}
wflow_start("myproject")
```
```{r wflow-start-hidden, echo=FALSE}
setwd(.tmp)
unlink(.project, recursive = TRUE)
wflow_start("myproject", user.name = "Your Name", user.email = "email@domain")
```
`wflow_start()` created the following directory structure in `myproject/`:
```
myproject/
├── .gitignore
├── .Rprofile
├── _workflowr.yml
├── analysis/
│ ├── about.Rmd
│ ├── index.Rmd
│ ├── license.Rmd
│ └── _site.yml
├── code/
│ ├── README.md
├── data/
│ └── README.md
├── docs/
├── myproject.Rproj
├── output/
│ └── README.md
└── README.md
```
At this point, you have a minimal but complete workflowr project; that is, you
have all the files needed to use the main workflowr commands and publish a
research website. Later on, as you get more comfortable with the basic setup,
you can modify and add to the initial file structure. The overall rationale for
this setup is to help organize the files that will be commonly included in a
data analysis project. However, not all of these files are required to use
workflowr.
The two **required** subdirectories are `analysis/` and `docs/`. These
directories should never be removed from the workflowr project.
* `analysis/`: This directory contains all the source R Markdown files for
implementing the data analyses for your project. It also contains a special R
Markdown file, `index.Rmd`, that does not contain any R code, but will be used
to generate `index.html`, the homepage for your website. In addition, this
directory contains the important configuration file `_site.yml`, which you can
use to edit the theme, navigation bar, and other website aesthetics (for more
details see the documentation on [R Markdown websites][rmd-website]). Do not
delete `index.Rmd` or `_site.yml`.
[rmd-website]: https://bookdown.org/yihui/rmarkdown/rmarkdown-site.html
* `docs/`: This directory contains all the HTML files for your
website. The HTML files are built from the R Markdown files in
`analysis/`. Furthermore, any figures created by the R Markdown files
are saved here. Each of these figures is saved according to the
following pattern: `docs/figure/<insert Rmd filename>/<insert chunk
name>-#.png`, where `#` corresponds to which of the plots the chunk
generated (since one chunk can produce an arbitrary number of plots)^[Because of
this requirement, you can't customize the knitr option `fig.path` (which
controls where figure files are saved) in any R Markdown file that is part of a
workflowr project. If you do set it, it will be ignored and workflowr will
insert a warning into the HTML file to alert you.].
The workflowr-specific configuration file is `_workflowr.yml`. It will apply the
workflowr reproducibility checks consistently across all your R Markdown files.
The most critical setting is `knit_root_dir`, which determines the directory
where the files in `analysis/` will be executed. The default is to execute the
code in the root of the project where `_workflowr.yml` is located (i.e. `"."`).
To instead execute the code from `analysis/`, change the setting to
`knit_root_dir: "analysis"`. See `?wflow_html` for more details.
Also required is the RStudio project file, in this example `myproject.Rproj`.
Even if you are not using RStudio, do not delete this file because the workflowr
functions rely on it to determine the root directory of the project.
The **optional** directories are `data/`, `code/`, and `output/`.
These directories are suggestions for organizing your data analysis
project, but can be removed if you do not find them useful.
* `data/`: This directory is for raw data files.
* `code/`: This directory is for code that might not be appropriate to include
in R Markdown format (e.g. for pre-processing the data, or for long-running
code).
* `output/`: This directory is for processed data files and other
outputs generated from the code and data. For example, scripts in
`code/` that pre-process raw data files from `data/` should save the
processed data files in `output/`.
The `.Rprofile` file is a regular R script that is run once when the project is
opened. It contains the call `library("workflowr")`, ensuring that workflowr is
loaded automatically each time the workflowr project is opened.
## Build the website
You will notice that the `docs/` directory is currently empty. That is
because we have not yet generated the website from the `analysis/`
files. This is what we will do next.
To build the website, run the function `wflow_build()` in the R
console:
```{r wflow-build, eval=FALSE}
wflow_build()
```
```{r wflow-build-hidden, echo=FALSE}
# Don't want to actually open the website when building the vignette
wflow_build(view = FALSE)
```
This command builds all the R Markdown files in `analysis/` and saves
the corresponding HTML files in `docs/`. It sets the same seed before
running every file so that any function that generates random data
(e.g. permutations) is reproducible. Furthermore, each file is built
in its own external R session to avoid any potential conflicts between
analyses (e.g. accidentally sharing a variable with the same name across files).
Lastly, it displays the website in the RStudio Viewer or default web browser.
The default action of `wflow_build()` is to behave similarly to a
[Makefile](https://swcarpentry.github.io/make-novice/) (`make = TRUE` is the
default when no input files are provided), i.e. it only builds R Markdown files
that have been modified more recently than their corresponding HTML files. Thus
if you run it again, no files are built (and no files are displayed).
```{r wflow-build-no-action}
wflow_build()
```
To view the site without first building any files, run `wflow_view()`, which by
default displays the file `docs/index.html`:
```{r wflow-view, eval=FALSE}
wflow_view()
```
This is how you can view your site right on your local machine. Go ahead and
edit the files `index.Rmd`, `about.Rmd`, and `license.Rmd` to describe your
project. Then run `wflow_build()` to re-build the HTML files and display them in
the RStudio Viewer or your browser.
```{r edit-files, include=FALSE}
for (f in file.path("analysis", c("index.Rmd", "about.Rmd", "license.Rmd"))) {
cat("\nedit\n", file = f, append = TRUE)
}
```
## Publish the website
workflowr makes an important distinction between R Markdown files that are
published versus unpublished. A published file is included in the website
online; whereas, the HTML file of an unpublished R Markdown file is only able to
be viewed on the local computer. Since the project was just started, there are
no published files. To view the status of the workflowr project, run
`wflow_status()`.
```{r wflow-status}
wflow_status()
```
This alerts us that our project has 3 R Markdown files, and they are all
unpublished ("Unp"). Furthermore, it instructs how to publish them: use
`wflow_publish()`. The first argument to `wflow_publish()` is a character vector
of the R Markdown files to publish ^[Instead of listing each file individually,
you can also pass [file globs](https://en.wikipedia.org/wiki/Glob_(programming))
as input to any workflowr function, e.g. `wflow_publish("analysis/*Rmd",
"Publish the initial files for myproject")`.]. The second is a message that will
be recorded by the version control system Git when it commits (i.e. saves a
snapshot of) these files. The more informative the commit message the better (so
that future you knows what you were trying to accomplish).
```{r wflow-publish, eval=FALSE}
wflow_publish(c("analysis/index.Rmd", "analysis/about.Rmd", "analysis/license.Rmd"),
"Publish the initial files for myproject")
```
```{r wflow-publish-hidden, echo=FALSE}
# Don't want to actually open the website when building the vignette
wflow_publish(c("analysis/index.Rmd", "analysis/about.Rmd", "analysis/license.Rmd"),
"Publish the initial files for myproject",
view = FALSE)
```
`wflow_publish()` reports the 3 steps it took:
* **Step 1:** Commits the 3 R Markdown files using the custom commit message
* **Step 2:** Builds the HTML files using `wflow_build()`
* **Step 3:** Commits the 3 HTML files plus the files that specify the style of
the website (e.g. CSS and JavaScript files)
Performing these 3 steps ensures that the HTML files are always in sync with the
latest versions of the R Markdown files. Performing these steps manually would
be tedious and error-prone (e.g. an HTML file may have been built with an
outdated version of an R Markdown file). However, `wflow_publish()` makes it
easy to keep the pages of your site in sync.
Now when you run `wflow_status()`, it reports that all the files are published
and up-to-date.
```{r wflow-status-post-publish}
wflow_status()
```
## Deploy the website
At this point you have built a version-controlled website that exists on your
local computer. The next step is to put your code on GitHub so that it can serve
your website online. If you are using GitLab, switch to the vignette [Hosting
workflowr websites using GitLab](wflow-06-gitlab.html) and then continue with
the next section.
All the required setup can be performed by the workflowr function
`wflow_use_github()`. The only required argument is your GitHub username^[The
default is to name the GitHub repository using the same name as the directory
that contains the workflowr project. This is likely what you used with
`wflow_start()`, which in this case was `"myproject"`. If you'd prefer the
GitHub repository to have a different name, or if you've already created a
GitHub repo with a different name, you can pass the argument `repository =
"other-name"`.]:
```{r wflow-use-github, eval=FALSE}
wflow_use_github("myname")
```
```{r wflow-use-github-hidden, echo=FALSE}
# Don't want to try to authenticate on GitHub
wflow_use_github("myname", create_on_github = FALSE)
```
This has two main effects on your local machine: 1) it configures Git to
communicate with your future GitHub repository, and 2) it inserts a link to your
future GitHub repository into the navigation bar (you'll need to run
`wflow_build()` or `wflow_publish()` to observe this change). Furthermore,
`wflow_use_github()` will prompt you to ask if you'd like to authorize workflowr
to automatically create the repository on GitHub. If you agree, a browser tab
will open, and you will need to authenticate with your username and password,
and then give permission to the "workflowr-oauth-app" to access your
account^[This sounds scarier than it actually is. The "workflowr-oauth-app" is
simply a formality for GitHub to grant authorization. The "app" itself is the R
code running on your local machine. Once `wflow_use_github()` finishes, the
authorization is deleted, and nothing (and no one) can access your account].
If you decline the offer from `wflow_use_github()` to automatically create the
GitHub repository, you need to manually create it. To do this, log in to your
account on GitHub and create a new repository following these
[instructions][new-repo]. The screenshot below shows the menu in the top right of
the webpage.
<img src="img/github-new-repo.png" alt="Create a new repository on GitHub."
style="display: block; margin: auto; border: black 1px solid">
<p class="caption" style="text-align: center;">
Create a new repository on GitHub.
</p>
Note that in this tutorial the GitHub repository also has the name
"myproject." This isn't strictly necessary (you can name your GitHub repository
whatever you like), but it's generally good organizational practice to use the
same name for both your GitHub repository and the local directory on your
computer.
Next, you need to send your files to GitHub. Push your files to GitHub with the
function `wflow_git_push()`:^[Unfortunately this can fail for many different
reasons. If you already regularly use `git push` in the Terminal, you will
probably want to continue using this. If you don't have Git installed on your
computer and thus must use `wflow_git_push()`, you can search the [git2r
Issues](https://github.com/ropensci/git2r/issues) for troubleshooting ideas.]
```{r wflow-git-push}
wflow_git_push(dry_run = TRUE)
```
Using `dry_run = TRUE` previews what the function will do. Remove this argument
to actually push to GitHub. You will be prompted to enter your GitHub username
and password for authentication^[If you'd prefer to use SSH keys for
authentication, please see the section [Setup SSH
keys](wflow-02-customization.html#setup-ssh-keys).]. Each time you make changes
to your project, e.g. run `wflow_publish()`, you will need to run
`wflow_git_push()` to send the changes to GitHub.
Lastly, now that your code is on GitHub, you need to tell GitHub that you want
the files in `docs/` to be published as a website. Go to Settings -> GitHub
Pages and choose "master branch docs/ folder" as the Source
([instructions][publish-docs]). Using the hypothetical names above, the
repository would be hosted at the URL `myname.github.io/myproject/`^[It may take
a few minutes for the site to be rendered.]. If you scroll back down to the
GitHub Pages section of the Settings page, you can click on the URL there.
[new-repo]: https://docs.github.com/articles/creating-a-new-repository
[publish-docs]: https://docs.github.com/articles/configuring-a-publishing-source-for-github-pages
## Add a new analysis file
Now that you have a functioning website, the next step is to start analyzing
data! Create a new R Markdown file, save it as `analysis/first-analysis.Rmd`,
and open it in your preferred text editor (e.g. RStudio). Alternatively, you can
use the convenience function `wflow_open()`, which will create the file (and
open it if you are using RStudio):
```{r create-file, eval=FALSE}
wflow_open("analysis/first-analysis.Rmd")
```
```{r create-file-hidden, echo=FALSE}
# Don't want to actually open the file when building the vignette in RStudio
wflow_open("analysis/first-analysis.Rmd", edit_in_rstudio = FALSE)
```
Now you are ready to start writing! Go ahead and add some example code. If you
are using RStudio, press the Knit button to build the file and see a preview in
the Viewer pane. Alternatively from the R console, you can run `wflow_build()`
again (this function can be run from the base directory of your project or any
subdirectory).
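For instance, a first code chunk for `first-analysis.Rmd` might look something like the sketch below (the built-in `cars` dataset is just a placeholder for your own data):
```{r first-analysis-example, eval=FALSE}
# Load an example dataset and summarize it
data(cars)
summary(cars)
# Plot stopping distance versus speed
plot(cars$speed, cars$dist,
     xlab = "Speed (mph)", ylab = "Stopping distance (ft)")
```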
Check out your new file `first-analysis.html`. Near the top you will see the
workflowr reproducibility report. If you click on the button, the full menu will
drop down. Click around to learn more about the reproducibility safety checks,
why they're important, and whether or not the file passed or failed each one.
You'll notice that the first check failed because the R Markdown file had
uncommitted changes. This is OK now since the file is a draft. Once you are
ready to publish it to share with others, you can use `wflow_publish()` to
ensure that any changes to the R Markdown file are committed to the Git
repository prior to generating the results.
In order to make it easier to navigate to your new file, you can include a link
to it on the main index page. First, open `analysis/index.Rmd` (optionally using
`wflow_open()`). Second, paste the following line into `index.Rmd`:
```
Click on this [link](first-analysis.html) to see my results.
```
```{r edit-index, include=FALSE}
cat("\nClick on this [link](first-analysis.html) to see my results.\n",
file = "analysis/index.Rmd", append = TRUE)
```
This uses the Markdown syntax for creating a hyperlink (for a quick reference
guide in RStudio click "Help" -> "Markdown Quick Reference"). You specify the
HTML version of the file since this is what comprises the website. Click Knit
(or run `wflow_build()` again) to check that the link works.
Now run `wflow_status()` again. As expected, two files need attention.
`index.Rmd` has status "Mod" for modified. This means it is a published file
that has subsequently been modified. `first-analysis.Rmd` has status "Scr" for
Scratch. This means not only is the HTML not published, but the R Markdown file
is not yet being tracked by Git.
```{r wflow-status-newfile}
wflow_status()
```
To publish the new analysis and the updated index page, again use
`wflow_publish()`:
```{r wflow-publish-newfile, eval=FALSE}
wflow_publish(c("analysis/index.Rmd", "analysis/first-analysis.Rmd"),
"Add my first analysis")
```
```{r wflow-publish-newfile-hidden, echo=FALSE}
# Don't want to actually open the website when building the vignette
wflow_publish(c("analysis/index.Rmd", "analysis/first-analysis.Rmd"),
"Add my first analysis", view = FALSE)
```
Lastly, push the changes to GitHub or GitLab with
`wflow_git_push()`^[Alternatively you can run `git push` in the Terminal or use
the RStudio Git Pane.] to deploy these latest changes to the website.
## The workflow
This is the general workflow:^[Note that some workflowr functions are also
available as [RStudio Addins][rstudio-addins]. You may prefer these compared to
running the commands in the R console, especially since you can [bind the addins
to keyboard shortcuts][rstudio-addins-shortcuts].]
[rstudio-addins]: https://rstudio.github.io/rstudioaddins/
[rstudio-addins-shortcuts]: https://rstudio.github.io/rstudioaddins/#keyboard-shorcuts
1. Open a new or existing R Markdown file in `analysis/` (optionally using
`wflow_open()`)
1. Perform your analysis in the R Markdown file (For RStudio users: to quickly
develop the code I recommend executing the code in the R console via Ctrl-Enter
to send one line or Ctrl-Alt-C to execute the entire code chunk)
1. Run `wflow_build()` to view the results as they will
appear on the website (alternatively press the Knit button in RStudio)
1. Go back to step 2 until you are satisfied with the result
1. Run `wflow_publish()` to commit the source files (R Markdown files or other
files in `code/`, `data/`, and `output/`), build the HTML files, and commit the
HTML files
1. Push the changes to GitHub or GitLab with `wflow_git_push()` (or `git push`
in the Terminal)
This ensures that the code version recorded at the top of an HTML file
corresponds to the state of the Git repository at the time it was built.
The only exception to this workflow is if you are updating the aesthetics of
your website (e.g. anytime you make edits to `analysis/_site.yml`). In this case
you'll want to update all the published HTML files, regardless of whether or not
their corresponding R Markdown files have been updated. To republish every HTML
page, run `wflow_publish()` with `republish = TRUE`. This behavior is only
previewed below by specifying `dry_run = TRUE`.
```{r republish}
wflow_publish("analysis/_site.yml", republish = TRUE, dry_run = TRUE)
```
## Next steps
To learn more about workflowr, you can read the following vignettes:
* [Customize your research website](wflow-02-customization.html)
* [Migrating an existing project to use workflowr](wflow-03-migrating.html)
* [How the workflowr package works](wflow-04-how-it-works.html)
* [Frequently asked questions](wflow-05-faq.html)
* [Hosting workflowr websites using GitLab](wflow-06-gitlab.html)
* [Sharing common code across analyses](wflow-07-common-code.html)
* [Alternative strategies for deploying workflowr websites](wflow-08-deploy.html)
* [Reproducible research with workflowr (workshop)](wflow-09-workshop.html)
* [Using large data files with workflowr](wflow-10-data.html)
## Further reading
* For advice on using R Markdown files to organize your analysis, read the
chapter [R Markdown workflow](https://r4ds.had.co.nz/r-markdown-workflow.html) in
the book [R for Data Science](https://r4ds.had.co.nz/) by Garrett Grolemund and
Hadley Wickham
---
title: "Customize your research website"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Customize your research website}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r chunk-options, include=FALSE}
library("knitr")
opts_chunk$set(eval = FALSE)
```
There are many ways to customize your research website. Below are some common
options.
## Adding project details
workflowr automatically creates many files when the project is first started. As
a first step for customizing your site, add the following information:
* Briefly describe your project in `analysis/index.Rmd`
* Share details about yourself in `analysis/about.Rmd`
* State a software license in `analysis/license.Rmd`. See [A Quick Guide to
Software Licensing for the Scientist-Programmer][morin2012] by Morin et al.,
2012 for advice. If you're ambivalent, the MIT license is a standard choice.
[morin2012]: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002598
## Changing the theme
The theme is defined in the file `analysis/_site.yml`. The default is cosmo, but
the rmarkdown package accepts multiple Bootstrap themes. These are listed in the
[rmarkdown documentation][rmd-themes]. Go to
[bootswatch.com](https://bootswatch.com/) to compare the Bootstrap themes. When
typing the theme, make sure it is all lowercase (e.g. spacelab, united, etc.).
When experimenting with different themes, you'll want to build a fast-running
file, such as `analysis/index.Rmd`, instead of rebuilding the entire site
every time. Click the RStudio Knit button or run `wflow_build()` in the R
console to preview each theme:
```
wflow_build("analysis/index.Rmd")
```
Once you have chosen a theme, update the website by running the following:
```{r wflow-publish-theme}
wflow_publish("analysis/_site.yml", "Change the theme", republish = TRUE)
```
This commits `analysis/_site.yml`, re-builds every previously published HTML
file using the new theme, and commits all the republished HTML pages.
[rmd-themes]: https://bookdown.org/yihui/rmarkdown/html-document.html
## Style with custom CSS
For ultimate control of the style of your website, you can write [custom CSS
rules to apply to the R Markdown files][custom-css]. For a workflowr project,
follow these steps to get started:
1. Create the file `analysis/style.css`
1. Register the CSS file in `analysis/_site.yml`:
```
output:
workflowr::wflow_html:
toc: true
toc_float: true
theme: cosmo
highlight: textmate
css: style.css
```
1. Run `wflow_build()` to preview the changes
1. Once you are satisfied with the appearance of the site, publish the results
```{r custom-css-publish, eval=FALSE}
wflow_publish(c("analysis/_site.yml", "analysis/style.css"),
message = "Customize website style.",
republish = TRUE)
```
[custom-css]: https://bookdown.org/yihui/rmarkdown/html-document.html#custom-css
To specifically change the style of the workflowr components of the website, you
can write your CSS rules to target the custom workflowr classes. The example CSS
rules below demonstrate how to affect every workflowr button using the class
`btn-workflowr` and also how to affect specific workflowr buttons using the more
specialized classes.
```
/* Center workflowr buttons */
.btn-workflowr {
display: block;
margin: auto;
}
/* Add red border around workflowr report button */
.btn-workflowr-report {
border: red 5px solid;
}
/* Add blue border around workflowr past figure version buttons */
.btn-workflowr-fig {
border: blue 5px solid;
}
/* Add purple border around workflowr session information button */
.btn-workflowr-sessioninfo {
border: purple 5px solid;
}
```
## Customize the navigation bar
The navigation bar appears on the top of each page. By default it includes links
to `index.html` (Home), `about.html` (About), and `license.html` (License). This
is all specified in `analysis/_site.yml`. If you run either `wflow_use_github()`
or `wflow_use_gitlab()`, a link to your source code on GitHub or GitLab will be
added to the navigation bar.
If you have other important pages, you can add them as well. For example, to add
the text "The main result" which links to `main-result.html`, you would add the
following:
```
- text: "The main result"
href: main-result.html
```
You can also create a drop-down menu from the navigation bar. See the [rmarkdown
documentation][navbar] for instructions.
Similar to changing the theme above, you will need to re-render each page of the
website (the navbar is embedded within each individual HTML file). Thus you
could run the same command as above:
```{r wflow-publish-navbar}
wflow_publish("analysis/_site.yml", "Add main result page to navbar",
republish = TRUE)
```
[navbar]: https://bookdown.org/yihui/rmarkdown/rmarkdown-site.html
## Setup SSH keys
Using the https protocol to communicate with GitHub is tedious because it
requires entering your GitHub username and password. Using SSH keys for
authentication removes the password requirement. Follow these [GitHub
instructions][ssh] for creating SSH keys and linking them to your GitHub
account. You'll need to create separate SSH keys and link them each to GitHub
for each machine where you clone your Git repository.
After you create your SSH keys and add them to your GitHub account, you'll need
to instruct your local Git repository to use the SSH protocol. For a
hypothetical GitHub username of "myname" and GitHub repository of "myproject",
you would change the remote "origin" (the default name by convention) using the
function `wflow_git_remote()`:
```{r https-to-ssh}
wflow_git_remote(remote = "origin", user = "myname", repo = "myproject",
protocol = "ssh", action = "set_url")
```
Alternatively you could update the remote URL using Git directly in the shell.
See this GitHub documentation on [changing a remote URL][set-url] for
instructions.
[ssh]: https://docs.github.com/articles/generating-an-ssh-key
[set-url]: https://docs.github.com/articles/changing-a-remote-s-url
## Change the session information function
The default function used to report the session information is `sessionInfo()`.
To change this, you can edit this setting in `_workflowr.yml`. For example, to
instead use `sessioninfo::session_info()`, add the following line to
`_workflowr.yml`:
```
sessioninfo: "sessioninfo::session_info()"
```
If you'd prefer to manually insert a more complex report of the session
information, disable the automatic reporting by adding the following to
`_workflowr.yml`:
```
sessioninfo: ""
```
Note however that workflowr will still check for the presence of a session
information function. Specifically it expects to find either `sessionInfo` or
`session_info` somewhere in the R Markdown document.
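If you suppress the automatic report, one option is to end each R Markdown file with your own chunk that calls a session information function, which also satisfies the check described above. A minimal sketch:
```{r session-info-manual}
sessioninfo::session_info()
```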
---
title: "Migrating an existing project to use workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Migrating an existing project to use workflowr}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r chunk-options, include=FALSE}
library("knitr")
opts_chunk$set(eval = FALSE)
```
## Introduction
This vignette is for those users that already have an existing project and wish
to incorporate workflowr to create a research website. Migrating an existing
project to use workflowr varies from straightforward to difficult depending on
the scenario and your comfort level with Git. This vignette assumes that you
have the background knowledge of workflowr explained in the [Getting started][vig-start]
vignette. Even if you have no need for a new workflowr project, please run
through that vignette first as an exercise to familiarize yourself with the
workflowr philosophy and functions.
```{r getting-started}
vignette("wflow-01-getting-started", "workflowr")
```
[vig-start]: wflow-01-getting-started.html
## Scenario: I have a collection of R Markdown files
If you have a collection of R Markdown files, but no version control or other
files, the quickest solution is to use the function `wflow_quickstart()`. The
code below 1) starts a new workflowr project in `~/projects/new-project/`,
2) copies the existing Rmd files in `~/projects/misc/` to the `analysis/`
subdirectory of the new project, 3) builds and commits the website, and 4)
configures the project to use GitHub (which is why the GitHub username is
required).
```{r}
library("workflowr")
wflow_quickstart("~/projects/misc/*Rmd", username = "<github-username>",
directory = "~/projects/new-project/")
```
Alternatively, you can manually perform each step to migrate your existing
analysis by starting a workflowr project in a new directory and then moving the
R Markdown files to the `analysis/` subdirectory. In the hypothetical example
below, the original R Markdown files are located in the directory
`~/projects/misc/` and the workflowr project will be created in the new
directory `~/projects/new-project/`.
```{r}
library("workflowr")
# Create project directory and change working directory to this location
wflow_start("~/projects/new-project")
# Copy the files to the analysis subdirectory of the workflowr project
file.copy(from = Sys.glob("~/projects/misc/*Rmd"), to = "analysis")
```
Next run `wflow_build()` to see if your files run without error. Lastly, build
and commit the website using `wflow_publish()`:
```{r}
wflow_publish("analysis/*Rmd", "Publish analysis files")
```
When you are ready to share the results online, you can run `wflow_use_github()`
or `wflow_use_gitlab()`.
## Scenario: I have a collection of R Markdown files and other project infrastructure
If your project already has lots of infrastructure, it is most convenient to add
the workflowr files directory to your already existing directory. This is
controlled with the argument `existing`. In the hypothetical example below, the
existing project is located at `~/projects/mature-project/`.
```{r}
library("workflowr")
wflow_start("~/projects/mature-project", existing = TRUE)
```
The above command will add the workflowr files to your existing project and also
commit them to version control (it will initialize a Git repo if it doesn't
already exist). If you'd prefer to not use version control for your project or
you'd prefer to commit the workflowr files yourself manually, you can set `git =
FALSE` (this is also useful if you want to first test to see what would happen
without committing the results).
By default `wflow_start()` will not overwrite your existing files (e.g. if
you already have a `README.md`). If you'd prefer to overwrite your files with
the default workflowr files, set `overwrite = TRUE`.
To add your R Markdown files to the research website, you can move them to the
subdirectory `analysis/` (note you can do this before or after running
`wflow_start()`).
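For example, assuming the R Markdown files currently sit in the root of the existing project (a hypothetical layout), you could move them with:
```{r}
rmd <- Sys.glob("*.Rmd")
file.rename(from = rmd, to = file.path("analysis", rmd))
```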
Next run `wflow_build()` to see if your files run without error. Lastly, build
and commit the website using `wflow_publish()`:
```{r}
wflow_publish("analysis/*Rmd", "Publish analysis files")
```
## Scenario: I have an R package
If your project is organized as an R package, you can still add a website using
workflowr. In the hypothetical example below, the
existing package is located at `~/projects/my-package/`.
```{r}
library("workflowr")
wflow_start("~/projects/my-package", existing = TRUE)
```
The above command will add the workflowr files to your existing project and also
commit them to version control (it will initialize a Git repo if it doesn't
already exist). If you'd prefer to not use version control for your project or
you'd prefer to commit the workflowr files yourself manually, you can set `git =
FALSE` (this is also useful if you want to first test to see what would happen
without committing the results).
You'll want R to ignore the workflowr directories when building the R package.
Thus add the following to the `.Rbuildignore` file:
```
^analysis$
^docs$
^data$
^code$
^output$
^_workflowr.yml$
```
Furthermore, to prevent R from compressing the files in `data/` (which is
harmless but time-consuming), you can set `LazyData: false` in the file
`DESCRIPTION`. However, if you do want to distribute data files with your R
package, you'll need to instead rename the workflowr subdirectory and update the
R Markdown files to search for files in the updated directory name (and also
update `.Rbuildignore` to ignore this new directory and not `data/`). Then you
can save the data files to distribute with the package in `data/`. For more
details, see the relevant sections in the CRAN manual [Writing R
Extensions][data-in-packages] and Hadley's [R Packages][r-pkgs-data].
[data-in-packages]: https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Data-in-packages
[r-pkgs-data]: https://r-pkgs.org/data.html
If your primary purpose for creating a website to accompany your package is to
share the package documentation, please check out the package [pkgdown][]. It
creates a website from the vignettes and function documentation files (i.e. the
Rd files in `man/`). In contrast, if the purpose of the website is to
demonstrate results you obtained using the package, use workflowr.
[pkgdown]: https://github.com/r-lib/pkgdown
---
title: "How the workflowr package works"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{How the workflowr package works}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
The workflowr package combines many powerful tools in order to produce a
research website. It is absolutely **not** necessary to understand all the
underlying tools to take advantage of workflowr, and in fact that is one of the
primary goals of workflowr: to allow researchers to focus on their analyses
without having to worry too much about the technical details. However, if you
are interested in implementing advanced customization options, contributing to
workflowr, or simply want to learn more about these tools, the sections below
provide some explanations of how workflowr works.
## Overview
[R][] is the computer programming language used to perform the analysis.
[knitr][] is an R package that executes code chunks in an R Markdown file to create a Markdown file.
[Markdown][] is a lightweight markup language that is easier to read and write than HTML.
[rmarkdown][] is an R package that combines the functionality of [knitr][] and the document converter [pandoc][].
[Pandoc][] powers the conversion of [knitr][]-produced Markdown files into HTML, Word, or PDF documents.
Additionally, newer versions of [rmarkdown][] contain functions for building websites.
The styling of the websites is performed by the web framework [Bootstrap][].
[Bootstrap][] implements the navigation bar at the top of the website, has many available themes to customize the look of the site, and dynamically adjusts the website so it can be viewed on a desktop, tablet, or mobile device.
The [rmarkdown][] website configuration file `_site.yml` allows convenient customization of the [Bootstrap][] navigation bar and theme.
[Git][] is a distributed version control system (VCS) that tracks code development.
It has many powerful features, but only a handful of the main functions are required to use workflowr.
[git2r][] is an R package which provides an interface to [libgit2][], which is a portable, pure C implementation of the Git core methods (this is why you don't need to install Git before using workflowr).
[GitHub][] is a website that hosts [Git][] repositories and additionally provides collaboration tools for developing software.
[GitHub Pages][] is a [GitHub][] service that offers free hosting of [static websites][static].
By placing the HTML files for the website in the subdirectory `docs/`, [GitHub Pages][] serves them online.
To aid reproducibility, workflowr provides an R Markdown output format
`wflow_html()` template that automatically sets a seed for random number
generation, records the session information, and reports the status of the Git
repository (so you always know which version of the code produced the results
contained in that particular file). These options are controlled by the settings
in `_workflowr.yml`. It also provides a custom site generator `wflow_site()`
that enables `wflow_html()` to work with R Markdown websites. These options are
controlled in `analysis/_site.yml`.
[R]: https://cran.r-project.org/
[knitr]: https://yihui.org/knitr/
[Markdown]: https://daringfireball.net/projects/markdown/
[rmarkdown]: https://rmarkdown.rstudio.com/
[pandoc]: https://pandoc.org/
[Bootstrap]: https://getbootstrap.com/
[Git]: https://git-scm.com/
[SHA-1]: https://en.wikipedia.org/wiki/SHA-1
[GitHub]: https://github.com/
[GitHub Pages]: https://pages.github.com/
[static]: https://en.wikipedia.org/wiki/Static_web_page
## Where are the figures?
workflowr saves the figures into an organized, hierarchical directory structure
within `analysis/`. For example, the first figure generated by the chunk named
`plot-data` in the file `filename.Rmd` will be saved as
`analysis/figure/filename.Rmd/plot-data-1.png`. Furthermore, the figure files
are _moved_ to `docs/` when `render_site` is run (this is the rmarkdown package
function called by `wflow_build`, `wflow_publish`, and the RStudio Knit button).
The figures have to be committed to the Git repository in `docs/` in order to be
displayed properly on the website. `wflow_publish` automatically commits the
figures in `docs` corresponding to new or updated R Markdown files, and
`analysis/figure/` is in the `.gitignore` file to prevent accidentally
committing duplicate files.
Because workflowr requires the figures to be saved to a specific location in
order to function properly, it will override any custom setting of the knitr
option `fig.path` (which controls where figure files are saved) and insert a
warning into the HTML file to alert the user that their value for `fig.path` was
ignored.
## Additional tools
[Posit Software, PBC][] is a company that develops open source software for R users.
They are the principal developers of [RStudio][], an integrated development environment (IDE) for R, and the [rmarkdown][] package.
Because of this tight integration, new developments in the [rmarkdown][] package are quickly incorporated into the [RStudio][] IDE.
While not strictly required for using workflowr, using [RStudio][] provides many benefits, including:
* RStudio projects make it easier to setup your R environment, e.g. set the correct working directory, and quickly switch between different projects
* The Git pane allows you to conveniently view your changes and run the main Git functions
* The Viewer pane displays the rendered HTML results for immediate feedback
* Clicking the `Knit` button automatically uses the [Bootstrap][] options specified in `_site.yml` and moves the rendered HTML to the website subdirectory `docs/` (requires version 1.0 or greater)
* Includes an up-to-date copy of [pandoc][] so you don't have to install or update it
* Tons of other cool [features][rstudio-features] like debugging and source code inspection
Another key R package used by workflowr is [rprojroot][].
This package finds the root of the repository, so workflowr functions like `wflow_build` will work the same regardless of the current working directory.
Specifically, [rprojroot][] searches for the RStudio project `.Rproj` file at the base of the workflowr project (so don't delete it!).
[Posit Software, PBC]: https://posit.co/
[RStudio]: https://posit.co/products/open-source/rstudio/
[rstudioapi]: https://github.com/rstudio/rstudioapi
[rprojroot]: https://cran.r-project.org/package=rprojroot
[git2r]: https://cran.r-project.org/package=git2r
[libgit2]: https://libgit2.org/
[rstudio-features]: https://posit.co/products/open-source/rstudio/
## Background and related work
There is lots of interest and development around reproducible research with R.
Projects like workflowr are possible due to two key developments. First, the R
packages [knitr][] and [rmarkdown][] have made it easy for any R programmer to
generate reports that combine text, code, output, and figures. Second, the
version control software [Git][], the Git hosting site [GitHub][], and the
static website hosting service [GitHub Pages][] have made it easy to share not
only source code but also static HTML files (i.e. no need to purchase a domain
name, setup a server, etc).
My first attempt at sharing a reproducible project online was [singleCellSeq][].
Basically, I started by copying the documentation website of [rmarkdown][] and
added some customizations to organize the generated figures and to insert the
status of the Git repository directly into the HTML pages. The workflowr R
package is my attempt to simplify my previous workflow and provide helper
functions so that any researcher can take advantage of this workflow.
Workflowr fills multiple roles: 1) it provides a project template, 2) it
version controls the R Markdown and HTML files, and 3) it builds a website.
Furthermore, it provides R functions to perform each of these steps. There are
many other related works that provide similar functionality. Some are templates
to be copied, some are R packages, and some involve more complex software (e.g.
static blog software). Depending on your use case, one of the related works
listed at [r-project-workflows][] may better suit your needs. Please check them
out!
[r-project-workflows]: https://github.com/jdblischak/r-project-workflows#readme
[singleCellSeq]: https://jdblischak.github.io/singleCellSeq/analysis/
## Further reading
* How the code, results, and figures are executed and displayed can be customized using [knitr chunk and package options](https://yihui.org/knitr/options/)
* How [R Markdown websites](https://bookdown.org/yihui/rmarkdown/rmarkdown-site.html) are configured
* The many [features][rstudio-features] of the [RStudio][] IDE
* [Directions](https://docs.github.com/articles/configuring-a-publishing-source-for-github-pages) to publish a [GitHub Pages][] site using the `docs/` subdirectory
---
title: "Frequently asked questions"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Frequently asked questions}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
## Why isn't my website displaying online?
Occasionally your website may not display (or recent updates will not
immediately appear), and you may even receive an email from GitHub with the
following message:
> The page build failed for the `master` branch with the following error:
>
> unable to build page. Please try again later.
>
> For information on troubleshooting Jekyll see:
>
> https://docs.github.com/articles/troubleshooting-jekyll-builds
>
> If you have any questions you can contact us by replying to this email.
If you've followed the setup instructions from the [Getting started
vignette](wflow-01-getting-started.html), and especially if the website
displayed in the past, it's _very unlikely_ that you caused the problem. The
hosting is provided by [GitHub Pages][gh-pages], and it sometimes is delayed or
down. Overall for a free service, it is very reliable. If you wait 5 minutes (or
30 minutes at most), your website will likely be back to normal.
If you are anxious to know if there is a problem and when it will be resolved,
you can check the Twitter account [GitHub Status][gh-status] for
the most up-to-date reports from GitHub. If you suspect that the problem may
have been caused by your recent changes to your website (again, this is
unlikely), you can view the GitHub help page [Troubleshooting GitHub Pages
builds][gh-troubleshooting].
## Can I make my workflowr website private?
Yes. While it is **not** possible to make a [GitHub Pages][gh-pages] site
private (the default setup described in the ["Getting Started"
vignette][vig-getting-started]), there are various other hosting platforms that
provide access control. Below are the currently documented options, ordered
from least to most technical setup required:
* You can host a private site on [GitLab Pages][gl-pages] and grant access to
collaborators. All they need is a GitLab.com account (and they can use a social
account, e.g. Twitter, to login to GitLab.com) - [Deploy your site with GitLab
Pages][vig-gitlab]
* You can use [Beaker Browser][beaker] to securely self-host your site and share
the link with collaborators - [Deploy your site with Beaker
Browser][deploy-beaker]
* You can deploy a password-protected site using [Amazon Web Services][aws]
(requires familiarity with cloud technologies) - [Deploy your site with
AWS][deploy-aws]
To see all the currently documented deployment options, see the vignette
[Alternative strategies for deploying workflowr websites][vig-deploy].
## How should I manage large data files in a workflowr project?
Tracking the changes to your project's large data files is critical for
reproducibility. Unfortunately Git, which is the version control software used
by workflowr, was designed to version small files containing code. See the
vignette [Using large data files with workflowr][vig-data] for various options
for versioning the large data files used in your workflowr project.
## How can I include external images in my website?
Image files that are generated by the code executed in the R Markdown files are
automatically handled by workflowr. If you'd like to include additional image
files to be displayed in your webpages, follow the steps below. The instructions
refer to `docs/` for the website directory since this is the default. If you are
not using GitHub Pages to host the website, you may need to change this. For
example, if you are hosting with GitLab Pages, replace `docs/` with `public/`.
1. Inside the website directory, create a subdirectory named `assets` to include
any file that should be part of the website but is not created by one of the R
Markdown files in `analysis/`:
```
dir.create("docs/assets")
```
1. Move the image file(s) to `docs/assets/`
1. In the R Markdown file, refer to the image file(s) using the relative path
from `docs/` (because this is where the HTML files are located), e.g.:
```

```
Alternatively, you could use `knitr::include_graphics()` inside of an R code
chunk, which will automatically center the image and also follow the knitr
chunk options `out.width` and `out.height`:
```
knitr::include_graphics("assets/external.png", error = FALSE)
```
Note that the image will not be previewed in the R Markdown file inside of
RStudio because it is in a different directory than the R Markdown file. You
have to set `error = FALSE` because the function throws an error if it can't
find the file. This breaks the workflowr setup, since the file path only
exists once the HTML file is moved to `docs/`. If you'd like to disable
knitr from throwing this error for all the code in your project, add the
following line to the `.Rprofile` in your project:
`options(knitr.graphics.error = FALSE)`
1. Run `wflow_build()` to confirm the external image file(s) are properly
displayed
1. Use `wflow_git_commit()` to commit the file(s) to the Git repo (so that they
get pushed to the remote repository, e.g. on GitHub):
```
wflow_git_commit("docs/assets/external.png", "Add external image of ...")
# If you are adding multiple files, you could use a file glob
wflow_git_commit("docs/assets/*.png", "Add external images of ...")
```
1. Run `wflow_publish()` on the R Markdown file that contains the external
image file(s)
Another option is to first upload the image, e.g. to
[Imgur](https://imgur.com/), [Figshare](https://figshare.com/), another GitHub
repository, etc. Then you can link directly to the image in your Rmd file using
the absolute URL. This has the added advantage that the image will automatically
display in the Rmd file as you edit it in RStudio. The main disadvantage is that
the image isn't in the same location as the rest of your project files.
## How can I save a figure in a vector graphics format (e.g. PDF)?
The default file format is PNG. This is ideal for displaying figure files on a
web page. However, you might need to import a figure into a vector graphics
editor (e.g. Illustrator, Inkscape) for final editing for a manuscript. There
are multiple options for achieving this.
One option is to switch to the file format SVG. It is a vector graphics format
that is also well supported by web browsers. The code chunk below saves the
figure file as an SVG:
````
```{r plot-for-paper, dev='svg'}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
```
````
To apply this to every figure file in a particular document, you can create a
"setup" chunk at the beginning of the document that sets the [knitr chunk
option](https://yihui.org/knitr/options/) globally:
````
```{r setup, dev='svg'}`r ''`
knitr::opts_chunk$set(dev = 'svg')
```
````
Another option is to simultaneously create a PNG for display in the web page and
a PDF for further editing. The example code below saves both a PNG and PDF
version of the figure, but inserts the PNG into the web page:
````
```{r plot-for-paper, dev=c('png', 'pdf')}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
```
````
The main advantage of the above approaches is that the figure files are still
saved in an organized fashion (i.e. the file path is still something like
`docs/figure/file.Rmd/chunk-name.ext`). Furthermore, `wflow_publish()` will
automatically version the figure files regardless of the file extension.
A similar option to the one above is to have two separate code chunks. The
advantage of this more verbose option is that you can specify different chunk
names (and thus different filenames) and also set different `fig.width` and
`fig.height` for the website and paper versions. By setting `include=FALSE` for
the second chunk, neither the code nor the PDF figure file is displayed in the
web page.
````
```{r plot-for-paper}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
```
````
````
```{r figure1A, include=FALSE, dev='pdf', fig.height=3, fig.width=9}`r ''`
p
```
````
However, for the most control, you can always save the figure manually, e.g.
using `ggsave()`. For example, the example chunk below creates a 10x10 inch PNG
file that is automatically versioned by workflowr, but also uses `ggsave()` to
save a 5x5 inch PDF file in the subdirectory `paper/` (which would need to be
manually committed by the user, e.g. with `wflow_git_commit()`):
````
```{r plot-for-paper, fig.width=10, fig.height=10}`r ''`
library(ggplot2)
data(mtcars)
p <- ggplot(mtcars, aes(x = mpg, y = disp)) + geom_point()
p
ggsave("paper/plot-to-edit.pdf", width = 5, height = 5)
```
````
## Can I include Shiny apps in my website?
Yes, but not directly. You cannot directly embed the Shiny app into the Rmd file
using `runtime: shiny_prerendered` for two reasons. First, workflowr creates a
static website, and the free deployment options (e.g. GitHub Pages), only
provide static web hosting. Shiny apps require a dynamic website because they
need to call a server to run the R code. Second, even if you setup your own web
server, the supporting files (e.g. CSS/JS) for a Shiny app have to be in a
[different location][shiny-external-resources] than the standard for an
Rmd-based website.
[shiny-external-resources]: https://rmarkdown.rstudio.com/authoring_shiny_prerendered.html#external_resources
However, there is still a good option for embedding the Shiny app directly into
the web page. You can upload your Shiny app to
[shinyapps.io](https://www.shinyapps.io/), and then embed it directly into your
document by calling `knitr::include_app()` inside a code chunk, as shown below:
````markdown
`r ''````{r shiny-app}
knitr::include_app("https://<user-name>.shinyapps.io/<app-name>/")
```
````
Using this method, the R code for the Shiny app is executed on the servers at
shinyapps.io, but your readers are able to explore the app without leaving your
website.
## Can I change "X" on my website?
Almost certainly yes, but some things are easier to customize than others. The
vignette [Customize your research website](wflow-02-customization.html) provides
some ideas that are simple to implement. Check out the documentation for
[rmarkdown][] and [Twitter Bootstrap][Bootstrap] for inspiration.
## How can I suppress the workflowr report?
To suppress the insertion of the workflowr report for all of the files in your
project, activate the option `suppress_report` in the `_workflowr.yml` file by
adding the following line:
```
suppress_report: TRUE
```
And then republishing your project:
```
wflow_publish("_workflowr.yml", republish = TRUE)
```
To suppress the workflowr report only for a specific file, add the following
lines to its YAML header:
```
workflowr:
suppress_report: TRUE
```
## Why am I not getting the same result with wflow_build() as with the RStudio Knit HTML button?
`wflow_build()` is designed to have the same functionality as the Knit HTML
button in RStudio, namely that it knits the HTML file in a separate R session to
avoid any clashes with variables or packages in use in the current R session.
However, the technical implementations are not identical, and thus informally we
have noticed the behavior of the two options occasionally differs. At the
moment, we believe that if the file results in an error when using
`wflow_build()`, the file needs to be fixed, regardless of whether the file is
able to be built with the RStudio button. If you have a use case that you think
should be supported by `wflow_build()`, please open an [Issue][issues] and
provide a small reproducible example.
## How should I install packages to use with a workflowr project?
When you start a new workflowr project with `wflow_start()`, it automatically
creates a local `.Rprofile` file that only affects your R session when you run R
from within your workflowr project. This is why you see the following lines each
time you open R:
```
Loading .Rprofile for the current workflowr project
This is workflowr version 1.3.0
Run ?workflowr for help getting started
>
```
This is intended to be a convenience so that you don't have to type
`library(workflowr)` every time you return to your project (or restart your R
session). However, the downside is that this has the potential to cause problems
when you install new packages. If you attempt to install one of the packages
that workflowr depends on, or if you attempt to install a package that then
updates one of these dependencies, this may cause an error. For example, here is
a typical error caused by updating git2r when the workflowr package is loaded:
```
Error: package or namespace load failed for ‘git2r’ in get(method, envir = home):
lazy-load database '/usr/local/lib/R/site-library/git2r/R/git2r.rdb' is corrupt
In addition: Warning message:
In get(method, envir = home) : internal error -3 in R_decompress1
```
The short term solution is to restart your current R session, which should fix
everything. In the long term, if you start to get this type of error often, you
can try one of the following strategies:
1. Always restart your R session after installing new packages
(Ctrl/Command+Shift+F10 in RStudio)
1. Open R from a directory that is not a workflowr project when installing new
packages
1. Delete `.Rprofile` with `wflow_remove(".Rprofile")` and manually load
workflowr with `library(workflowr)` every time you start a new R session
## Can I create a single HTML or PDF file of one of my workflowr R Markdown files?
Yes! You can create a single HTML or PDF file to distribute an isolated analysis
from your project by directly running the [rmarkdown][] function `render()`. The
main limitation is that any links to other pages will no longer be functional.
### Working directory
You will need to be careful with the working directory in which the code is
executed. By default, code in R Markdown documents is executed in the same
directory as the file. This is cumbersome, so the default behavior of workflowr
is to set the working directory to the root project directory for convenience.
To get around this, you can pass `knit_root_dir = ".."` or `knit_root_dir =
normalizePath(".")` to `render()`, which both have the effect of running the
code in the project root. If you have configured your workflowr project to
execute the files in `analysis/`, then you don't have to worry about this.
### PDF
To convert a single analysis to PDF, use `pdf_document()`. Note that this
requires a functional LaTeX setup.
```{r render-single-page-pdf, eval=FALSE}
library("rmarkdown")
# Create analysis/file.pdf
render("analysis/file.Rmd", pdf_document(), knit_root_dir = "..")
```
### HTML
Rendering a single HTML page is slightly more complex because `html_document()`
always includes the navigation bar. If you don't mind the non-functional navbar
at the top of the document, you can simply use `html_document()`.
```{r render-single-page-html-1, eval=FALSE}
library("rmarkdown")
# Create analysis/file.html, includes navigation bar
render("analysis/file.Rmd", html_document(), knit_root_dir = "..")
```
The standalone file will be saved as `analysis/file.html` unless you specify a
different name via the argument `output_file`.
To create a very simple HTML file, you can instead use `html_document_base()`.
This eliminates the navbar, but it may also remove some of the advanced
stylistic features of `html_document()` that you rely on.
```{r render-single-page-html-2, eval=FALSE}
library("rmarkdown")
# Create analysis/file.html, no navigation bar nor advanced features
render("analysis/file.Rmd", html_document_base(), knit_root_dir = "..")
```
If you are determined to have a full-featured standalone HTML file
without the navigation bar, you can temporarily rename the `_site.yml` file,
which prevents `html_document()` from including the navbar.
```{r render-single-page-html-3, eval=FALSE}
library("rmarkdown")
# Temporarily rename _site.yml
file.rename("analysis/_site.yml", "analysis/_site.yml_tmp")
# Create analysis/file.html, no navigation bar
render("analysis/file.Rmd", html_document(), knit_root_dir = "..")
# Restore _site.yml
file.rename("analysis/_site.yml_tmp", "analysis/_site.yml")
```
If you'd like your standalone HTML file to have a similar appearance to your
workflowr website, you can pass the style arguments directly to
`html_document()` so that the theme is similar (copy from your project's
`analysis/_site.yml`, below are the default values for a new workflowr project):
```{r render-single-page-html-4, eval=FALSE}
render("analysis/file.Rmd",
html_document(toc = TRUE, toc_float = TRUE, theme = "cosmo", highlight = "textmate"),
knit_root_dir = "..",
output_file = "standalone.html")
```
Alternatively, if you'd prefer to keep the workflowr report and other
workflowr-specific features in the standalone document, don't use `output_file`,
as this will cause workflowr to insert the warning ```The custom `fig.path` you
set was ignored by workflowr``` if the analysis contains any figures. Instead,
omit both `html_document()` and `output_file`. The standalone HTML file will be
saved in `analysis/` (the standard, non-standalone HTML files are always moved
to `docs/`), and you can move/rename it after it has rendered.
```{r render-single-page-html-5, eval=FALSE}
# use the workflowr::wflow_html settings in analysis/_site.yml
render("analysis/file.Rmd", knit_root_dir = "..")
```
### RStudio Knit button
Ideally you should also be able to use the RStudio Knit button to conveniently
create the standalone HTML or PDF files. For example, you could update the
YAML header to have multiple output formats, and then choose the output format
that RStudio should create. The workflowr functions like `wflow_build()` will
ignore these other output formats because they use the output format defined in
`analysis/_site.yml`.
```
---
output:
pdf_document: default
html_document_base: default
workflowr::wflow_html: default
---
```
However, just like when calling `render()` directly, you'll need to be careful
about the working directory. To execute the code in the project directory, you
can manually set the Knit Directory to "Project Directory" (the default is
"Document Directory" to match the default behavior of R Markdown files).
Lastly, the RStudio Knit button is somewhat finicky when custom output formats
are included (e.g. workflowr, bookdown). If you are having trouble getting it
to display the output format you want as an option, try knitting it to the current
format. That should update the menu to include all the options you've written in
the YAML header. See [Issue #261][issue-261] for more details.
[issue-261]: https://github.com/workflowr/workflowr/issues/261
## Can I use R notebooks with workflowr?
Yes! You can use RStudio's notebook features while you interactively develop
your analysis, either directly using the output format
`rmarkdown::html_notebook()` or indirectly with "inline code chunks" in your R
Markdown files. However, you need to take a few precautions to make sure your
notebook-style usage is compatible with the workflowr options.
First note that the R Markdown files created by `wflow_start()` and
`wflow_open()` include the lines below in the YAML header. These purposefully
disable inline code chunks to proactively prevent any potential
incompatibilities with workflowr. To activate inline code chunks, you can either
delete these two lines or replace `console` with `inline`.
```
editor_options:
chunk_output_type: console
```
Second, note that the working directory of the inline code chunks can be
different than the working directory of the R console. This is very
counterintuitive, but the working directory of the inline code chunks is set by
the "Knit Directory" setting in RStudio. The setting of "Knit Directory" may be
different in your collaborator's version of RStudio, or even your own RStudio
installed on a different computer. Thus it's not a good idea to rely on this
value. Instead, you can explicitly specify the working directory to be used for
the inline code chunks by setting the knitr option `root.dir` in a chunk called
`setup`, which RStudio treats specially. Adding the code chunk below to your R
Markdown file will cause all the inline code chunks to be executed from the root
of the project directory. This is consistent with the default workflowr setting.
````markdown
`r ''````{r setup}
knitr::opts_knit$set(root.dir = "..")
```
````
If you change the value of `knit_root_dir` in `_workflowr.yml`, then you would
need to change the value of `root.dir` in the setup chunk accordingly. Be
warned that this is fragile, i.e. trying to change `root.dir` to an arbitrary
directory may result in an error. If you're going to use inline code chunks,
it's best to follow one of the following options:
1. Execute code in the root of the project directory (the default workflowr
setting). Don't change `knit_root_dir` in `_workflowr.yml`. Add the setup chunk
defined above to your R Markdown files. Note that this setup chunk will affect
RStudio but not the workflowr functions `wflow_build()` or `wflow_publish()`.
2. Execute code in the R Markdown directory (named `analysis/` by default).
Delete the `knit_root_dir` entry in `_workflowr.yml`. Don't explicitly set
`root.dir` in a setup code chunk in your R Markdown files. Ensure that the
RStudio setting "Knit Directory" is set to "Document Directory".
Third, note that if you are using `html_notebook()`, any settings you specify
for it will be ignored when you run `wflow_build()` or `wflow_publish()`. This
is because the settings in `_site.yml` override them. If you wish to change the
setting of one particular notebook file, as opposed to every file in your
project, you can set it with `workflowr::wflow_html()`. For example, if you want
to enable folding of code chunks and disable the table of contents for only this
file, you could use the following YAML header.
```
---
title: "Using R Notebook with workflowr"
output:
html_notebook: default
workflowr::wflow_html:
toc: false
code_folding: show
---
```
## Can I use a Git hosting service that uses the HTTP protocol?
Workflowr works best with Git hosting services that use the HTTPS protocol.
However, with some minimal upfront configuration, it is possible to use the HTTP
protocol.
The configuration differs depending on whether you are authenticating with SSH
keys or username/password.
**SSH keys**
1. Configure the remote with `wflow_git_remote()` and `protocol = "ssh"`
1. You can use `wflow_git_push()` and `wflow_git_pull()`
1. For the embedded links to past versions of the files to be correct, you need
to manually include the URL to the project in `_workflowr.yml` (for historical
reasons, this variable is named `github`)
```
github: http://custom-site.com/<username>/<reponame>
```
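For example, the first step could look something like this (the server name is a
placeholder, and the `user`, `repo`, `protocol`, and `domain` arguments are
assumed from the `wflow_git_remote()` documentation):
```
wflow_git_remote(remote = "origin", user = "<username>", repo = "<reponame>",
                 protocol = "ssh", domain = "custom-site.com")
```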
**Username/Password**
1. You can't use `wflow_git_remote()`. Instead use either
a. `git2r::remote_add()` in R:
```
git2r::remote_add(name = "origin", url = "http://custom-site.com/<username>/<reponame>.git")
```
a. `git remote add origin` in the terminal:
```
git remote add origin http://custom-site.com/<username>/<reponame>.git
```
1. You cannot use `wflow_git_push()` and `wflow_git_pull()`. Instead run
`git push` and `git pull` in the terminal
1. The embedded links to past versions of the files will be correct because they
will be based off of your remote URL
## How should I pronounce and spell workflowr?
There are multiple options for pronouncing workflowr:
1. workflow + er
1. workflow + R
1. work + flower
I (John) started primarily saying "workflow + er" but have more recently
transitioned to saying "workflow + R" more often. You can choose whichever is
most natural to you.
Workflowr should be capitalized at the beginning of a sentence, but otherwise
the lowercase workflowr should be the preferred option.
[aws]: https://aws.amazon.com/s3/
[beaker]: https://beakerbrowser.com/
[Bootstrap]: https://getbootstrap.com/
[deploy-aws]: wflow-08-deploy.html#amazon-s3-password-protected
[deploy-beaker]: wflow-08-deploy.html#beaker-browser-secure-sharing
[gh-pages]: https://pages.github.com/
[gh-status]: https://twitter.com/githubstatus
[gh-troubleshooting]: https://docs.github.com/articles/troubleshooting-github-pages-builds
[gl-pages]: https://docs.gitlab.com/ce/user/project/pages/index.html
[issues]: https://github.com/workflowr/workflowr/issues
[rmarkdown]: https://rmarkdown.rstudio.com/
[vig-data]: wflow-10-data.html
[vig-deploy]: wflow-08-deploy.html
[vig-getting-started]: wflow-01-getting-started.html
[vig-gitlab]: wflow-06-gitlab.html
---
title: "Hosting workflowr websites using GitLab"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "Luke Zappia, John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Hosting workflowr websites using GitLab}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
## What is in this vignette?
By default workflowr assumes that the project will be hosted on GitHub, but this
is not always the case. Users may prefer to use another service or have a
private Git repository hosting instance. This vignette details how to host a
workflowr project on GitLab. Unlike GitHub Pages, GitLab Pages offers both
public and private sites. For more details, see the documentation for [GitLab
Pages][gitlab-pages]. Similar steps will be required for other platforms but
some of the specifics will be different.
[gitlab-pages]: https://docs.gitlab.com/ee/ci/yaml/README.html#pages
## Step 0: Set up a project
The first thing we need to do is set up the project we want to host. We can do
this by following the first few steps of the instructions in the "Getting
started" vignette. When you get to the section [Deploy the
website](wflow-01-getting-started.html#deploy-the-website), follow the rest of
the steps in this vignette.
## Step 1: Create a remote repository on GitLab
**Note:** You can skip this step if you'd like because GitLab will automatically
create the new repository after you push it in Step 4 below. This
[feature][push-to-create-a-new-project] was introduced in [GitLab
10.5][gitlab-10.5], released in February 2018.
[push-to-create-a-new-project]: https://docs.gitlab.com/ee/user/project/#create-a-new-project-with-git-push
[gitlab-10.5]: https://about.gitlab.com/releases/2018/02/22/gitlab-10-5-released/
Log in to the GitLab instance you want to use and create a repository to host
your project. We recommend setting the project to be Public so that others
can inspect the code behind your results and extend your work.
## Step 2: Configure your local workflowr project to use GitLab
You will need to know your user name and the repository name for the following
steps (here we are going to use "myname" and "myproject") as well as a URL for
the hosting instance. The example below assumes you are using GitLab.com. If
instead you are using a custom instance of GitLab, you will need to change the
value for the argument `domain` accordingly ^[For example, the University of
Chicago hosts a GitLab instance for its researchers at
https://git.rcc.uchicago.edu/, which would require setting `domain =
"git.rcc.uchicago.edu"`].
```{r wflow-use-gitlab, eval=FALSE}
wflow_use_gitlab(username = "myname", repository = "myproject")
```
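If you are using a custom GitLab instance, the same call might look something
like the sketch below (the domain is only an example, and the `domain` argument
is assumed from the `wflow_use_gitlab()` documentation):
```{r wflow-use-gitlab-custom, eval=FALSE}
wflow_use_gitlab(username = "myname", repository = "myproject",
                 domain = "git.rcc.uchicago.edu")
```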
The function `wflow_use_gitlab()` automates all the local configuration
necessary to use GitLab. It changes the website directory from `docs/` to
`public/`, it creates the GitLab-specific configuration file `.gitlab-ci.yml`
with the necessary settings, and it connects the local Git repository to
communicate with the remote repository on GitLab.
## Step 3: Republish the analyses
In order for the correct URLs to past versions to be inserted into the HTML
pages, republish the analyses with `wflow_publish()`.
```
wflow_publish(republish = TRUE)
```
## Step 4: Push to GitLab
As a final step, push the workflowr project to GitLab (you will be prompted for
your GitLab username and password):
```{r wflow-git-push, eval=FALSE}
wflow_git_push()
```
If this step has worked correctly you should be able to refresh your GitLab page
and see all the files in your workflowr project. You can view your site at
`myname.gitlab.io/myproject/`, replacing with your username and project (note it
may take a minute for the site to be deployed).
If you skipped Step 1 above, the new repository created during the initial push
will be private by default. Unless you are working with sensitive data, you
should consider making the project public so that it is easier to share with
other researchers (e.g. collaborators, reviewers). You can change the visibility
by going to `Settings` -> `General` -> `Visibility` and changing `Project
visibility` to `Public`.
## Access control for private sites
If you need to keep your project private, you can [grant access][access-control]
to your collaborators by going to `Settings` -> `Members`. You can invite them
to the project via email, but they'll need a GitLab login to access the source
code and site. They can login to GitLab using common social sites like Google
and Twitter.
[access-control]: https://gitlab.com/help/user/project/pages/pages_access_control.md
## Compatibility with custom GitLab instances
Currently workflowr works best with the public GitLab instance hosted at
gitlab.com. If you are using a custom GitLab instance that is hosted by your
institution, it may not work as smoothly.
If you cannot view your workflowr website, this may be because your
administrators have not enabled [GitLab Pages][gitlab-pages]. You will need to
email them to activate this feature. You can include this link to the [GitLab
Pages administration][gitlab-pages-admin] instructions.
[gitlab-pages-admin]: https://git.rcc.uchicago.edu/help/administration/pages/index.md
If GitLab Pages is enabled, the links to past versions of the R Markdown files
should work correctly (open an [Issue][workflowr-issues] if you are having
problems). However, there is currently no way to conveniently view the past
versions of the HTML files. This is because workflowr uses the free service
[raw.githack.com][] to host past HTML files, and it only supports the URLs
`raw.githubusercontent.com`, `gist.githubusercontent.com`, `bitbucket.org`, and
`gitlab.com`.
[raw.githack.com]: https://raw.githack.com/
[workflowr-issues]: https://github.com/workflowr/workflowr/issues
---
title: "Sharing common code across analyses"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "Tim Trice, John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Sharing common code across analyses}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r chunk-options, include=FALSE}
library("knitr")
opts_chunk$set(eval = FALSE)
```
During the course of a project, you may want to repeat a similar analysis
across multiple R Markdown files. To avoid duplicated code across your files
(which is difficult to update), there are multiple strategies you can use to
share common code:
1. To share R code like function definitions, you can put this code in an R
script and import it in each file with the function `source()`
1. To share common R Markdown text and code chunks, you can use [child documents](https://yihui.org/knitr/demo/child/)
1. To share common templates, you can use the function `knitr::knit_expand()`
Each of these strategies is detailed below, with a special emphasis on how to
use them within the workflowr framework. In order to source scripts or use child
documents, it is suggested you use the [here][] package, which locates the root
directory of your project regardless of the directory in which your script or
analysis file is saved, making sourcing documents cleaner.
[here]: https://cran.r-project.org/package=here
## Overview of directories
First, a quick overview of the directories in a workflowr project. This is
critical for importing these shared files.
In a standard R Markdown file, the code is executed in the directory where the R
Markdown file is saved. Thus any paths to files in the R Markdown file should be
relative to this directory. However, the directory where the code is executed,
referred to as the "knit directory" in the workflowr documentation, can be
configured. The default for a new workflowr project is to run the code in the
root of the workflowr project (this is defined in the file `_workflowr.yml`; see
`?wflow_html` for configuration details). Thus any filepaths should be relative
to the root of the project. As an example, if you have shared R functions
defined in the file `~/Desktop/myproject/code/common.R`, the relative filepath
from the root of the project directory would be `"code/common.R"`.
## Share R code with source()
If you have R code you want to re-use across multiple R Markdown files, the most
straightforward option is to save this code in an R script, e.g.
`code/functions.R`.
Then in each R Markdown file that needs to use the code defined in that file,
you can use `source()` to load it. If the code in your workflowr project is
executed in the root of the project directory (which is the default behavior for
new workflowr projects), then you would add the following chunk:
````
`r ''````{r shared-code}
source("code/functions.R")
```
````
On the other hand, if you have changed the value of `knit_root_dir` in the file
`_workflowr.yml`, you need to ensure that the filepath to the R script is
relative to this directory. For example, if you set `knit_root_dir: "analysis"`,
you would use this code chunk:
````
`r ''````{r shared-code}
source("../code/functions.R")
```
````
To avoid having to figure out the correct relative path (or having to update it
in the future if you were to change `knit_root_dir`), you can use `here::here()`
as it is always based off the project root. Additionally, it will help
readability when using child documents as discussed below.
````
`r ''````{r shared-code}
source(here::here("code/functions.R"))
```
````
## Share child documents with chunk option
To share text and code chunks across R Markdown files, you can use [child
documents](https://yihui.org/knitr/demo/child/), a feature of the [knitr][]
package.
[knitr]: https://cran.r-project.org/package=knitr
Here is a example of a simple R Markdown file that you can use to test this
feature. Note that it contains an H2 header, some regular text, and a code
chunk.
````
## Header in child document
Text in child document.
`r ''````{r child-code-chunk}
str(mtcars)
```
````
You can save this child document anywhere in the workflowr project with one
critical exception: it cannot be saved in the R Markdown directory (`analysis/`
by default) with the file extension `.Rmd` or `.rmd`. This is because workflowr
expects every R Markdown file in this directory to be a standalone analysis that
has a 1:1 correspondence with an HTML file in the website directory (`docs/` by
default). We recommend saving child documents in a subdirectory of the R
Markdown directory, e.g. `analysis/child/ex-child.Rmd`.
To include the content of the child document, you can reference it using
`here::here()` in your chunk options.
````
`r ''````{r parent, child = here::here("analysis/child/ex-child.Rmd")}
```
````
However, this fails if you wish to include plots in the code chunks of the child
documents. It will not generate an error, but the plot will be missing ^[The
reason for this is very technical and requires more understanding of how
workflowr is implemented than is necessary to use it effectively in the majority
of cases. Whenever workflowr builds an R Markdown file, it first copies it to a
temporary directory so that it can inject extra code chunks that implement some
of its reproducibility features. The figures in the child documents end up being
saved there and then lost.]. In a situation like this, you would want to
generate the plot within the parent R Markdown file or use
`knitr::knit_expand()` as described in the next section.
## Share templates with knit_expand()
If you need to pass parameters to the code in your child document, then you can
use `knitr::knit_expand()`. Also, this strategy has the added benefit that it
can handle plots in the child document. However, this requires setting
`knit_root_dir: "analysis"` in the file `_workflowr.yml` for plots to work
properly.
Below is an example child document with one variable to be expanded: `{{title}}`
refers to a species in the iris data set. The value assigned will be used to
filter the iris data set and label the section, chunk, and plot. We will refer
to this file as `analysis/child/iris.Rmd`.
````
## {{title}}
`r ''````{r plot_{{title}}}
iris %>%
filter(Species == "{{title}}") %>%
ggplot() +
aes(x = Sepal.Length, y = Sepal.Width) +
geom_point() +
labs(title = "{{title}}")
```
````
To generate a plot using the species `"setosa"`, you can expand the child
document in a hidden code chunk:
````
`r ''````{r, include = FALSE}
src <- knitr::knit_expand(file = here::here("analysis/child/iris.Rmd"),
title = "setosa")
```
````
and then later knit it using an inline code expression^[Before calling
`knitr::knit()`, you'll need to load the dplyr and ggplot2 packages to run the
code in this example child document.]:
`` `r
knitr::knit(text = unlist(src))` ``
The convenience of using `knitr::knit_expand()` gives you the flexibility to
generate multiple plots along with custom headers, figure labels, and more. For
example, if you want to generate a scatter plot for each Species in the `iris`
datasets, you can call `knitr::knit_expand()` within a `lapply()` or
`purrr::map()` call:
````
`r ''````{r, include = FALSE}
src <- lapply(
sort(unique(iris$Species)),
FUN = function(x) {
knitr::knit_expand(
file = here::here("analysis/child/iris.Rmd"),
title = x
)
}
)
```
````
This example code loops through each unique `iris$Species` and sends it to the
template as the variable `title`. `title` is inserted into the header, the chunk
label, the `dplyr::filter()`, and the title of the plot. This generates three
plots with custom plot titles and labels while keeping your analysis flow clean
and simple.
Remember to insert `knitr::knit(text = unlist(src))` in an inline R expression
as noted above to knit the code in the desired location of your main document.
Read the `knitr::knit_expand()` vignette for more information.
```{r knit-expand-vignette}
vignette("knit_expand", package = "knitr")
```
---
title: "Alternative strategies for deploying workflowr websites"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Alternative strategies for deploying workflowr websites}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
## Introduction
The [Getting Started vignette][vig-getting-started] provides instructions for
deploying the workflowr website using the service [GitHub Pages][gh-pages]
because it is quick and convenient. However, the static website created by
workflowr can be deployed using any strategy you like. Below are instructions
for deploying the workflowr website contributed by other workflowr users. If you
would like to contribute instructions for another deployment strategy, please
fork the [workflowr repository][workflowr] on GitHub and add your instructions
to this file. If you need any assistance with this, please
don't hesitate to open an [Issue][wflow-issues].
[gh-pages]: https://pages.github.com/
[vig-getting-started]: wflow-01-getting-started.html
[wflow-issues]: https://github.com/workflowr/workflowr/issues
[workflowr]: https://github.com/workflowr/workflowr
## Amazon S3 (password-protected)
Another way to privately share your workflowr site is by uploading it to [Amazon
S3][s3]. S3 is an object storage service for the Amazon cloud, and can be used
to host static websites. Basic HTTP authentication can be accomplished using
[CloudFront][cloudfront], Amazon's content delivery network, and
[Lambda@Edge][lambda], which enables the execution of serverless functions to
customize content delivered by the CDN. This [blog post][hackernoon] goes into
more detail about what that all means. A more detailed guide to setting up the
bucket is [here][kynatro]. Some templates for scripting the process are
[here][dumrauf].
Contributed by E. David Aja ([edavidaja][]).
[cloudfront]: https://aws.amazon.com/cloudfront/
[edavidaja]: https://github.com/edavidaja
[dumrauf]: https://github.com/dumrauf/serverless_static_website_with_basic_auth
[hackernoon]: https://hackernoon.com/serverless-password-protecting-a-static-website-in-an-aws-s3-bucket-bfaaa01b8666
[kynatro]: https://kynatro.com/blog/2018/01/03/a-step-by-step-guide-to-creating-a-password-protected-s3-bucket/
[lambda]: https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
[s3]: https://aws.amazon.com/s3/
## Beaker Browser (secure sharing)
If your project contains sensitive data that prevents you from publicly sharing
the results, one alternative option is to self-host your workflowr website using
[Beaker Browser][beaker].
[Beaker Browser][beaker] allows website creation, cloning, modification, and
publishing locally. After the site is ready, hitting "share" produces a unique
[Dat project dat://][dat] hyperlink, for example:
dat://adef21aa8bbac5e93b0c20a97c6f57f93150cf4e7f5eb1eb522eb88e682309bc
This dat:// link can then be shared and the site opened *all the while being
hosted locally on the site producer's machine.* The particular example above is
a site, produced in RStudio using workflowr, with placeholder content and R code
chunks, compiled as usual.
Security for your site is achieved with site encryption inherent in the Dat
protocol (see [Security][dat-security] on the [datproject docs page][dat-docs]),
as well as the obscurity of the unique link. Beaker Browser saves your
individual project sites in the folder `~/Sites`.
To create a Beaker Browser version of your workflowr site:
1. [Install][beaker-install] Beaker Browser and run it.
1. Select "New Site" in the three-bar dropdown menu found to the right of the
"omnibar" for web link entry, and enter its Title and (optional) a Description
of the site. This creates a folder in the Beaker Browser `~/Sites` directory
named for your Title, for example, "placeholder_workflowr", and populates the
folder with a `dat.json` file.
1. In the main Beaker Browser pane, use "Add Files" or "Open Folder" to copy the
entire contents of the workflowr `docs/` folder to your new Beaker Browser site
folder (see Symlink Synchronization, below).
1. Once copied, the new site is ready to go. Pressing "Share" in the main Beaker
Browser pane reveals the unique dat:// link generated for your Beaker Browser
site. Sharing this link with anyone running Beaker Browser will allow them to
access your workflowr HTML files...*directly from your computer*.
Instead of having to manually copy your workflowr `docs/` directory to your
Beaker Browser site directory, you can create a symlink from your workflowr
`docs/` directory to the Beaker Browser site directory. The line below links the
`docs/` directory of a hypothetical "workflowr-project" saved in `~/github/` to
the hypothetical Beaker `placeholder_workflowr` subdirectory:
ln -s ~/github/workflowr-project/docs ~/Sites/placeholder_workflowr
The direct-sharing nature of the above workflow means that the host computer
needs to be running for site access. Two alternative recommended by Beaker
Browser developer [Paul Frazee][pfrazee] are [hashbase.io][] and the Beaker
Browser subproject [dathttpd][]. While hosting Beaker Browser sites is outside
of the scope of this direct sharing paradigm, each of these options has
strengths. The former, hashbase.io (free account required), is a web-hosted
central location for dat:// -linked content, removing the need for the host
computer to be running. The latter dathttpd example is an additional
server/self-hosting option that can be used if desired.
This solution was contributed by [Josh Johnson][johnsonlab]. For more details,
please read his [blog post][johnsonlab-blog] and the discussion in Issue
[#59][].
[#59]: https://github.com/workflowr/workflowr/issues/59
[beaker]: https://beakerbrowser.com/
[beaker-install]: https://beakerbrowser.com/install/
[dat]: https://dat.foundation
[dat-docs]: https://docs.datproject.org/
[dat-security]: https://docs.datproject.org/docs/security-faq
[dathttpd]: https://github.com/beakerbrowser/dathttpd
[hashbase.io]: https://hashbase.io
[johnsonlab]: https://github.com/johnsonlab
[johnsonlab-blog]: https://johnsonlab.github.io/blog-post-22/
[pfrazee]: https://github.com/pfrazee
## GitLab Pages
To deploy your workflowr website with [GitLab Pages][gitlab], you can use the
function `wflow_use_gitlab()`. You can choose if the site is public or private.
For more details, please see the dedicated vignette [Hosting workflowr websites
using GitLab](wflow-06-gitlab.html).
[gitlab]: https://docs.gitlab.com/ee/ci/yaml/README.html#pages
---
title: "Reproducible research with workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak and Matthew Stephens"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 2
rmarkdown::pdf_document: default
vignette: >
%\VignetteIndexEntry{Reproducible research with workflowr}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(eval = FALSE, fig.align = "center")
```
## Introduction
The [workflowr][] R package makes it easier for you to organize, reproduce, and
share your data analyses. This short tutorial will introduce you to the
workflowr framework. You will create a workflowr project that implements a small
data analysis in R, and by the end you will have a working website that you can
use to share your work. If you are completing this tutorial as part of a live
workshop, please follow the [setup instructions](#setup) in the next section
prior to arriving.
Workflowr combines literate programming with [R Markdown][rmd] and version
control with [Git][git] to generate a website containing time-stamped,
versioned, and documented results. By the end of this tutorial, you will have a
website hosted on [GitHub Pages][gh-pages] that contains the results of a
reproducible statistical analysis.
[gh-pages]: https://pages.github.com/
[git]: https://git-scm.com/
[rmd]: https://rmarkdown.rstudio.com/
[workflowr]: https://github.com/workflowr/workflowr
## Setup
1. Install [R][r]
1. Install [RStudio][rstudio]
1. Install workflowr from [CRAN][cran]:
```r
install.packages("workflowr")
```
1. Create an account on [GitHub][gh]
To minimize the possibility of any potential issues with your computational
setup, you are encouraged to update your version of RStudio (`Help` -> `Check
for Updates`) and update your R packages:
```{r update-packages, eval=FALSE}
update.packages()
```
If you do encounter any issues during the tutorial, consult the
[Troubleshooting](#troubleshooting) section for solutions to the most common
problems.
[cran]: https://cran.r-project.org/package=workflowr
[gh]: https://github.com
[r]: https://cran.r-project.org
[rstudio]: https://posit.co/download/rstudio-desktop/
## Organize your research
To help you stay organized, workflowr creates a project directory with the
necessary configuration files as well as subdirectories for saving data and
other project files. This tutorial uses the [RStudio project
template][rstudio-proj-template] for workflowr, but note that the same can be
achieved via the function `wflow_start()`.
[rstudio-proj-template]: https://rstudio.github.io/rstudio-extensions/rstudio_project_templates.html
To start your workflowr project, follow these steps:
1. Open RStudio.
1. In the R console, run `wflow_git_config()` to register your name and email
with Git. This only has to be done once per computer. If you've used Git
on this machine before, you can skip this step. For a better GitHub experience,
use the same email you used to register your GitHub account.
```{r config}
library(workflowr)
wflow_git_config(user.name = "First Last", user.email = "[email protected]")
```
1. In the menu bar, choose `File` -> `New Project`.
1. Choose `New Directory` and then scroll down the list of project types to
select `workflowr project`. If you don't see the workflowr project template, go
to [Troubleshooting](#missing-template).
```{r rstudio-create-project, eval=TRUE, echo=FALSE, out.width = "50%"}
knitr::include_graphics("img/rstudio-create-project.png")
```
```{r rstudio-project-type, eval=TRUE, echo=FALSE, out.width = "50%"}
knitr::include_graphics("img/rstudio-project-type.png")
```
1. Type `myproject` (or a more inventive name if you prefer) as the directory
name, choose where to save it on your computer, and click `Create Project`.
```{r rstudio-workflowr-template, eval=TRUE, echo=FALSE, out.width = "50%"}
knitr::include_graphics("img/rstudio-workflowr-template.png")
```
RStudio will create a workflowr project `myproject` and open the project in
RStudio. Under the hood, RStudio is running the workflowr command
`wflow_start()`, so if you prefer to start a new project from the console
instead of using the RStudio menus, you can use `wflow_start()` directly.
Take a look at the workflowr directory structure in the Files pane, which should
be something like this:
```
myproject/
|-- .gitignore
|-- .Rprofile
|-- _workflowr.yml
|-- analysis/
| |-- about.Rmd
| |-- index.Rmd
| |-- license.Rmd
| |-- _site.yml
|-- code/
| |-- README.md
|-- data/
| |-- README.md
|-- docs/
|-- myproject.Rproj
|-- output/
| |-- README.md
|-- README.md
```
The most important directory for you to pay attention to now is the `analysis/`
directory. This is where you should store all your analyses as R Markdown (Rmd)
files. Other directories created for your convenience include `data/` for
storing data, and `code/` for storing long-running or supplemental code you
don't want to include in an Rmd file. Note that the `docs/` directory is where
the website HTML files will be created and stored by workflowr, and should not
be edited by the user.
## Build your website
The files and directories created by workflowr are already almost a website! The
only things missing are the crucial `html` files. Take a look in the `docs/`
directory where the html files for your website need to be created... notice
that it is sadly empty.
In workflowr the html files for your website are created in the `docs/`
directory by knitting (or "building") the `.Rmd` files in the `analysis/`
directory. When you knit or build those files -- either by using the knitr
button, or by typing `wflow_build()` in the console -- the resulting html files
are saved in the docs directory.
The `docs/` directory is currently empty because we haven't run any of the
`.Rmd` files yet. So now let's run these files. We will do it both ways, using
both the knit button and using `wflow_build()`:
1. Open the file `analysis/index.Rmd` and knit it now. You can open it by using
the files pane, or by typing `wflow_open("analysis/index.Rmd")` in the R
console. You knit the file by pressing the knit button in RStudio.
1. There are also two other `.Rmd` files in the `analysis` directory. Build
these by typing `wflow_build()` in the R console. This will build all the R
Markdown files in `analysis/`, and save the resulting html files in `docs/`.
(Note, it won't re-build `index.Rmd` because you have not changed it since
running it before, so it does not need to.^[The default behavior when
`wflow_build()` is run without any arguments is to build any R Markdown file
that has been edited since its corresponding HTML file was last built.])
Ignore the warnings in the workflowr report for now; we will return to these
later.
## Collect some data!
To do an interesting analysis you will need some data. Here, instead of doing a
time-consuming experiment, we will use a convenient built-in data set from R.
While not the most realistic, this avoids any issues with downloading data from
the internet and saving it correctly. The data set `ToothGrowth` contains the
length of the teeth for 60 guinea pigs given 3 different doses of vitamin C
either via orange juice (`OJ`) or directly with ascorbic acid (`VC`).
1. To get a quick sense of the data set, run the following in the R console.
```{r teeth, eval=TRUE}
data("ToothGrowth")
head(ToothGrowth)
summary(ToothGrowth)
str(ToothGrowth)
```
1. To mimic a real project that will have external data files, save the
`ToothGrowth` data set to the `data/` subdirectory using `write.csv()`.
```{r teeth-write}
write.csv(ToothGrowth, file = "data/teeth.csv")
```
## Understanding paths
Look at that last line of code. Where will the file be saved on your computer?
To understand this very important issue you need to understand the idea of
"relative paths" and "working directory".
Before explaining these ideas, let us consider a different way we could have
saved the file. Suppose we had typed
```{r}
write.csv(ToothGrowth, file = "C:/Users/GraceHopper/Documents/myproject/data/teeth.csv")
```
Then it is clear exactly where on the computer we want the file to be saved.
Specifying the file location this very explicit way is called specifying the
"full path" to the file. It is conceptually simple. But it is also a pain for
many reasons -- it is more typing, and (more importantly) if we move the project
to a different computer it will likely no longer work because the paths will
change!
Instead we typed
```{r}
write.csv(ToothGrowth, file = "data/teeth.csv")
```
Specifying the file location this way is called specifying the "relative path"
because it specifies the path to the file *relative to the current working
directory*. This means the full path to the file will be obtained by appending
the specified relative path to the (full) path of the current working directory.
For example, if the current working directory is
`C:/Users/GraceHopper/Documents/myproject/` then the file will be saved to
`C:/Users/GraceHopper/Documents/myproject/data/teeth.csv`. If the current
working directory is `C:/Users/Matt124/myproject` then the file will be saved to
`C:/Users/Matt124/myproject/data/teeth.csv`.
So, what is your current working directory? When you start or open a workflowr
project in RStudio (e.g. by clicking on `myproject.Rproj`) RStudio will set the
working directory to the location of the workflowr project on your computer. So
your current working directory should be the location you chose when you started
your workflowr project. You can check this by typing `getwd()` in the R console.
Notice how, by using relative paths, the code used here works for you whatever
operating system you are on and however your computer is set up! *You should
always use relative paths where possible because it can help make your code
easier for others to run and easier for you to run on different computers and
different operating systems.*
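If you ever want to double-check where a relative path will end up, you can ask
R directly. This quick sanity check is optional and not one of the tutorial
steps:
```{r check-paths}
# The current working directory
getwd()
# The full path that the relative path "data/teeth.csv" resolves to
file.path(getwd(), "data/teeth.csv")
```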
## Create a new analysis
So, now we have some data, we are ready to perform a small analysis. To start a
new analysis in RStudio, use the `wflow_open()` command.
1. In the R console, open a new R Markdown file by typing
```{r open-teeth}
wflow_open("analysis/teeth.Rmd")
```
Notice that we again used a relative path! Relative paths are good for
opening files as well as saving files. This command should create a new
`.Rmd` file in the `analysis` subdirectory of your workflowr project, and
open it for editing in RStudio. The file looks pretty much like other `.Rmd`
files, but in the header note that workflowr provides its own custom output
format, `workflowr::wflow_html`. The other minor difference is that
`wflow_open()` adds the editor option `chunk_output_type: console`, which
causes the code to be executed in the R console instead of within the
document. If you prefer the results of the code chunks be embedded inside
the document while you perform the analysis, you can delete those lines
(note that this has no effect on the final results, only on the display
within RStudio).
1. Copy the code chunk below and paste it at the bottom of the file `teeth.Rmd`.
The code imports the data set from the file you previously created^[Note that
the default working directory for a workflowr project is the root of the
project. Hence the relative path is `data/teeth.csv`. The working directory can
be changed via the workflowr option `knit_root_dir` in `_workflowr.yml`. See
`?wflow_html` for more details.]. Execute the code in the R console by clicking
on the Run button or using the shortcut `Ctrl`/`CMD`+`Enter`.
````
```{r import-teeth}`r ''`
teeth <- read.csv("data/teeth.csv", row.names = 1)
head(teeth)
```
````
Note: if you copy and paste this chunk, make sure to remove any spaces
before each of the backticks (` ``` `) so that they will be correctly
recognized as indicating the beginning and end of a code chunk.
1. Next create some boxplots to explore the data. Copy the code chunk below and
paste it at the bottom of the file `teeth.Rmd`. Execute the code to create
the plots.
````
```{r boxplots}`r ''`
boxplot(len ~ dose, data = teeth)
boxplot(len ~ supp, data = teeth)
boxplot(len ~ dose + supp, data = teeth)
```
````
```{r test-boxplots, eval=TRUE, include=FALSE}
data("ToothGrowth")
teeth <- ToothGrowth
boxplot(len ~ dose, data = teeth)
boxplot(len ~ supp, data = teeth)
boxplot(len ~ dose + supp, data = teeth)
```
1. To compare the tooth length of the guinea pigs given orange juice versus
those given vitamin C, you could perform a [permutation-based statistical
test][permutation]. This would involve comparing the observed difference in
teeth length due to the supplement method to the observed differences calculated
from random permutations of the data. The basic idea is that if the observed
difference is an outlier compared to the differences generated after permuting
the supplement method column, it is more likely to be a true signal not due to
chance alone. We are not going to perform the full permutation test here (a
sketch of it appears at the end of this section), but we will just demonstrate
the idea of a permutation. Copy the code chunk below, paste it at the bottom of
the file `teeth.Rmd`, and execute it. Try executing it several times -- does it
give you a different answer each time?
````
```{r permute}`r ''`
# Observed difference in teeth length due to supplement method
mean(teeth$len[teeth$supp == "OJ"]) - mean(teeth$len[teeth$supp == "VC"])
# Permute the observations
supp_perm <- sample(teeth$supp)
# Calculate mean difference in permuted data
mean(teeth$len[supp_perm == "OJ"]) - mean(teeth$len[supp_perm == "VC"])
```
````
[permutation]: https://en.wikipedia.org/wiki/Resampling_%28statistics%29#Permutation_tests
```{r test-permute, eval=TRUE, include=FALSE}
# Observed difference in teeth length due to supplement method
mean(teeth$len[teeth$supp == "OJ"]) - mean(teeth$len[teeth$supp == "VC"])
# Permute the observations
supp_perm <- sample(teeth$supp)
# Calculate mean difference in permuted data
mean(teeth$len[supp_perm == "OJ"]) - mean(teeth$len[supp_perm == "VC"])
```
1. In the R console, run `wflow_build()`. Note the value of the observed
difference in the permuted data.
1. In RStudio, click on the Knit button. Has the value of the observed
difference in the permuted data changed? It should be identical. This is because
workflowr always sets the same seed prior to running the analysis.^[Note that
everyone in the workshop will have the same result because by default workflowr
uses a seed that is the date the project was created as YYYYMMDD. You can change
this by editing the file `_workflowr.yml`.] To better understand this behavior
as well as the other reproducibility safeguards and checks that workflowr
performs for each analysis, click on the workflowr button at the top and select
the "Checks" tab.
```{r workflowr-report-checks, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/workflowr-report-checks.png")
```
You can see the value of the seed that was set using `set.seed()` before the
code was executed.
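For reference, here is a minimal sketch of the full permutation test mentioned
above. It is an aside rather than one of the tutorial steps, and the number of
permutations (1000) is an arbitrary choice:
```{r permutation-test-sketch}
# Observed difference in teeth length due to supplement method
observed <- mean(teeth$len[teeth$supp == "OJ"]) - mean(teeth$len[teeth$supp == "VC"])
# Differences after many random permutations of the supplement labels
perm_diffs <- replicate(1000, {
  supp_perm <- sample(teeth$supp)
  mean(teeth$len[supp_perm == "OJ"]) - mean(teeth$len[supp_perm == "VC"])
})
# Two-sided permutation p-value: how often a permuted difference is at least
# as extreme as the observed difference
mean(abs(perm_diffs) >= abs(observed))
```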
## Publish your analysis!
You should also notice that workflowr is still giving you a warning: it says you
have "uncommitted changes" in your .Rmd file. The term "commit" is a term from
version control: it basically means to save a snapshot of the current version of
a file so that you could return to it later if you wanted (even if you changed
or deleted the file in between).
So, workflowr is warning you that you haven't saved a snapshot of your current
analysis. If this analysis is something you are currently (somewhat) happy with
then you should save a snapshot that will allow you to go back to it at any time
in the future (even if you change the .Rmd file between now and then). In
workflowr we use the term "publish" for this process: any analysis that you
"publish" will be one that you can go back to in the future. You will see that
it is pretty easy to publish an analysis so you should do it whenever you create
a first working version, and whenever you make a change that you might want to
keep. Don't wait to think that it is your "final" version before publishing, or
you will never publish!
1. Publish your analysis by typing:
```{r publish-teeth-growth}
wflow_publish("analysis/teeth.Rmd", message = "Analyze teeth growth")
```
The function `wflow_publish()` performs three steps: 1) commits (snapshots)
the .Rmd files, 2) rebuilds the Rmd files to create the html file and
figures, and 3) commits the HTML and figure files. This guarantees that the
results in each html file are always generated from an exact, known version
of the Rmd file (you can see this version embedded in the workflowr report).
An informative message will help you find a particular version later.
1. Open the workflowr report of `teeth.html` by clicking on the button at the
top of the page. Navigate to the tab "Past versions". Note that the record
of all past versions will be saved here. Once the project has been added to
GitHub (you will do this in the next section), the "Past versions" tab will
include hyperlinks for convenient viewing of the past versions of the Rmd and
HTML files.
```{r workflowr-past-versions-1, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/workflowr-past-versions-1.png")
```
## Checking your status
When you are working on several analyses over a period of time it can be
difficult to keep track of which ones need attention, etc. You can use
`wflow_status()` to check on all your files.
1. In the R console, run `wflow_status()`. This will show you the status of each
of the Rmd files in your workflowr project. You should see that `teeth.Rmd` has
status "Published" because you just published it. But the other `.Rmd` files
have status "Unpublished" because you haven't published them yet. Also you will
notice a comment that the file `data/teeth.csv` is "untracked". This basically
means that the data file has not had a snapshot kept, which is dangerous as our
analyses obviously depend on the version of the data we use....
1. In the R console, run the command below to "publish" these other files ^[The
command uses the wildcard character `*` to match all the Rmd files in
`analysis/`. If this fails on your computer, try running the more verbose
command: `wflow_publish(c("analysis/about.Rmd", "analysis/index.Rmd",
"analysis/license.Rmd", "data/teeth.csv"), message = "Publish data and other
files")`].
```{r publish-other-files}
wflow_publish(c("analysis/*Rmd", "data/teeth.csv"), message = "Publish data and other files")
```
1. Navigate to the `docs/` directory and check the html files; you should find
that they all have a green light and no warnings.
1. And run `wflow_status()` again to confirm all is OK. Everything is published!
## Share your results
So, now you have a website, with an analysis in it. But it is only on your
computer, not the internet. To share your website with the world we will use the
free service GitHub Pages.
1. In the R console, run the function `wflow_use_github()`. The only required
argument is your GitHub username. The name of the repository will automatically
be named the same as the directory containing the workflowr project, in this
case "myproject".
```{r wflow-use-github}
wflow_use_github("your-github-username")
```
When the function asks if you would like it to create the repository on
GitHub for you, enter `1`. This should open your web browser so that you can
authenticate with GitHub and then give permission for workflowr to create
the repository on your behalf. Additionally, this function connects to your
local repository with the remote GitHub repository and inserts a link to the
GitHub repository into the navigation bar. If this fails to create a GitHub
repository, go to [Troubleshooting](#no-repo).
1. To update your workflowr website to use GitHub links to past versions of the
files (as well as update the navigation bar to include the GitHub link),
republish the files. (You won't have to do this again in the future.)
```{r republish}
wflow_publish(republish = TRUE)
```
1. To send your project to GitHub, run `wflow_git_push()`. This will prompt you
for your GitHub username and password. If this fails, go to
[Troubleshooting](#failed-push).
```{r wflow-git-push}
wflow_git_push()
```
1. On GitHub, navigate to the Settings tab of your GitHub repository^[If your
GitHub repository wasn't automatically opened by `wflow_git_push()`, you can
manually enter the URL into the browser:
`https://github.com/username/myproject`.]. Scroll down to the section "GitHub
Pages". For Source choose "master branch /docs folder". After it updates, scroll
back down and click on the URL. If the URL doesn't display your website, go to
[Troubleshooting](#no-gh-pages).
```{r github-pages-settings, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/github-pages-settings.png")
```
## Index your new analysis
Unfortunately your home page is not very inspiring. Also there is not an easy
way to find that nice analysis you did! A great way to keep track of analyses
and make them easy to find is to keep an index on your website homepage. The
homepage is created by `analysis/index.Rmd`, so we are now going to edit this
file to add a link to our new analysis.
1. Open the file `analysis/index.Rmd`. You can open it from the Files pane or
run `wflow_open("analysis/index.Rmd")`.
1. Copy the line below and paste it at the bottom of the file
`analysis/index.Rmd`. This text uses "markdown" syntax to create a hyperlink to
the tooth analysis. The text between the square brackets is displayed on the
webpage, and the text in parentheses is the relative path to the teeth webpage.
Note that you don't need to include the subdirectory `docs/` because
`index.html` and `teeth.html` are both already in `docs/`. (In an html file
relative paths are specified relative to the current page which in this case
will be `index.html`.) Also note that you need to use the file extension `.html`
since that is the file that needs to be opened by the web browser.
```
* [Teeth growth analysis](teeth.html)
```
1. Maybe you would like to write a short introductory message in your index
file e.g. "Welcome to my first workflowr website"!
1. You might also want to add a bit more details on what the tooth growth
analysis did -- a little detail in your index can be really helpful when it
starts getting bigger...
1. Run `wflow_build()` and then confirm that clicking on the link "Teeth growth"
takes you to your teeth analysis page.
1. Run `wflow_publish("analysis/index.Rmd")` to publish this new index file.
1. Run `wflow_status()` to check everything is OK.
1. Run `wflow_git_push()` to push the changes to GitHub.
1. Now go to your GitHub page again, and check out your website! (It can take a
couple of minutes to refresh after pushing, so you may need to be patient).
Navigate to the tooth analysis. Click on the links in the "Past versions" tab to
see the past results. Click on the HTML hyperlink to view the past version of
the HTML file. Click on the Rmd hyperlink to view the past version of the Rmd
file on GitHub. Enjoy!
```{r workflowr-past-versions-2, eval=TRUE, echo=FALSE, out.width = "75%"}
knitr::include_graphics("img/workflowr-past-versions-2.png")
```
## Conclusion
You have successfully created and shared a reproducible research website. The
key commands are a pretty short list: `wflow_build()`, `wflow_publish()`,
`wflow_status()`, and `wflow_git_push()`. Using the same workflowr commands, you
can do the same for one of your own research projects and share it with
collaborators and colleagues.
To learn more about workflowr, you can read the following vignettes:
* [Customize your research website](wflow-02-customization.html)
* [Migrating an existing project to use workflowr](wflow-03-migrating.html)
* [How the workflowr package works](wflow-04-how-it-works.html)
* [Frequently asked questions](wflow-05-faq.html)
* [Hosting workflowr websites using GitLab](wflow-06-gitlab.html)
* [Sharing common code across analyses](wflow-07-common-code.html)
* [Alternative strategies for deploying workflowr websites](wflow-08-deploy.html)
* [Using large data files with workflowr](wflow-10-data.html)
## Troubleshooting
### I don't see the workflowr project as an available RStudio Project Type. {#missing-template}
If you just installed workflowr, close and re-open RStudio. Also, make sure you
scroll down to the bottom of the list.
### The GitHub repository wasn't created automatically by `wflow_use_github()`. {#no-repo}
If `wflow_use_github()` failed unexpectedly when creating the GitHub repository,
or if you declined by entering `n`, you can manually create the repository on
GitHub. After logging in to GitHub, click on the "+" in the top right of the
page. Choose "New repository". For the repository name, type `myproject`. Do not
change any of the other settings. Click on the green button "Create repository".
Once that is completed, you can return to the next step in the tutorial.
```{r github-new-repo, eval=TRUE, echo=FALSE, out.width="25%"}
knitr::include_graphics("img/github-new-repo.png")
```
### I wasn't able to push to GitHub with `wflow_git_push()`. {#failed-push}
Unfortunately this function has a high failure rate because it relies on the
correct configuration of various system software dependencies. If this fails,
you can push to Git using another technique, but this will require that you have
previously installed Git on your computer. For example, you can use the RStudio
Git pane (click on the green arrow that says "Push"). Alternatively, you can
directly use Git by running `git push` in the terminal.
### My website isn't displaying after I activated GitHub Pages. {#no-gh-pages}
It is not uncommon for there to be a short delay before your website is
available. One trick to try is to specify the exact page that you want at the
end of the URL, e.g. add `/index.html` to the end of the URL.
---
title: "Using large data files with workflowr"
subtitle: "workflowr version `r utils::packageVersion('workflowr')`"
author: "John Blischak"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 2
rmarkdown::pdf_document: default
vignette: >
%\VignetteIndexEntry{Using large data files with workflowr}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(eval = FALSE, fig.align = "center")
```
## Introduction
Workflowr provides many features to track the progress of your data analysis
project and make it easier to reproduce both the current version as well as
previous versions of the project. However, this is only possible if the data
files from previous versions can also be restored. In other words, even if you
can obtain the code from six months ago, if you can't obtain the data from six
months ago, you won't be able to reproduce your previous analysis.
Unfortunately, if you have large data files, you can't simply commit them to the
Git repository along with the code. The maximum file size that can be pushed to
GitHub is [100 MB][100mb], and staying below this limit is a good practice no
matter which Git hosting service you are using. Large files will make each push
and pull take much longer and increase the risk of the download timing out. This
vignette discusses various strategies for versioning your large data files.
[100mb]: https://help.github.com/en/github/managing-large-files/conditions-for-large-files
## Option 0: Reconsider versioning your large data files
Before considering any of the options below, you need to reconsider if this is
even necessary for your project. And if it is, which data files need to be
versioned. Specifically, large raw data files that are never modified do not
need to be versioned. Instead, you could follow these steps:
1. Upload the files to an online data repository, a private FTP server, etc.
1. Add a script to your workflowr project that can download all the files
1. Include the instructions in your README and your workflowr website that
explain how to download the files
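As an illustration of the second step, a small download script (e.g. saved as
`code/download-data.R`) could look something like the sketch below. The URLs
and file names are hypothetical placeholders:
```{r download-data}
# Hypothetical raw data files hosted in an online repository
raw_files <- c(
  "https://example.org/mystudy/sample1.fastq.gz",
  "https://example.org/mystudy/sample2.fastq.gz"
)
dir.create("data/raw", showWarnings = FALSE, recursive = TRUE)
for (url in raw_files) {
  dest <- file.path("data/raw", basename(url))
  if (!file.exists(dest)) download.file(url, destfile = dest)
}
```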
For example, an [RNA sequencing][rna-seq] project will produce [FASTQ][fastq]
files that are large and won't be modified. Instead of committing these files to
the Git repository, they should instead be uploaded to [GEO][geo]/[SRA][sra].
[fastq]: https://en.wikipedia.org/wiki/FASTQ_format
[geo]: https://www.ncbi.nlm.nih.gov/geo/
[rna-seq]: https://en.wikipedia.org/wiki/RNA-Seq
[sra]: https://www.ncbi.nlm.nih.gov/sra
## Option 1: Record metadata
If your large data files are modified throughout the project, one option would
be to record metadata about the data files, save it in a plain text file, and
then commit the plain text file to the Git repository. For example, you could
record the modification date, file size, [MD5 checksum][md5], number of rows,
number of columns, column means, etc.
[md5]: https://en.wikipedia.org/wiki/MD5
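For example, a short R snippet along these lines could record the metadata for
a large file and save it as a small plain text file that is committed to the
Git repository (the file paths are hypothetical):
```{r record-metadata}
data_file <- "data/large-measurements.csv"
meta <- data.frame(
  file = data_file,
  size_bytes = file.size(data_file),
  modified = format(file.mtime(data_file)),
  md5 = unname(tools::md5sum(data_file))
)
write.csv(meta, "data/large-measurements-metadata.csv", row.names = FALSE)
```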
For example, if your data file contains observational measurements from a remote
sensor, you could record the date of the last observation and commit this
information. Then if you need to reproduce an analysis from six months ago, you
could recreate the previous version of the data file by filtering on the date
column.
## Option 2: Use Git LFS (Large File Storage)
If you are comfortable using Git in the terminal, a good option is [Git
LFS][lfs]. It is an extension to Git that adds extra functionality to the
standard Git commands. Thus it is completely compatible with workflowr.
Instead of committing the large file to the Git repository, it instead commits a
plain text file containing a unique hash. It then uploads the large file to a
remote server. If you checkout a previous version of the code, it will use the
unique hash in the file to download the previous version of the large data file
from the server.
Git LFS is [integrated into GitHub][bandwidth]. However, a free account is only
allotted 1 GB of free storage and 1 GB a month of free bandwidth. Thus you may
have to upgrade to a paid GitHub account if you need to version lots of large
data files.
See the [Git LFS][lfs] website to download the software and set it up to track
your large data files.
Note that for workflowr you can't use Git LFS with any of the website files in
`docs/`. [GitHub Pages][gh-pages] serves the website using the exact versions of
the files in that directory on GitHub. In other words, it won't pull the large
data files from the LFS server. Therefore everything will look fine on your
local machine, but break once pushed to GitHub.
As an example of a workflowr project that uses Git LFS, see the GitHub
repository [singlecell-qtl][scqtl]. Note that the large data files, e.g.
[`data/eset/02192018.rds`][eset], contain the phrase "Stored with Git LFS". If
you download the repository with `git clone`, the large data files will only
contain the unique hashes. See the [contributing instructions][contributing] for
how to use Git LFS to download the latest version of the large data files.
[bandwidth]: https://help.github.com/en/github/managing-large-files/about-storage-and-bandwidth-usage
[contributing]: https://jdblischak.github.io/singlecell-qtl/contributing.html
[eset]: https://github.com/jdblischak/singlecell-qtl/blob/master/data/eset/02192018.rds
[gh-pages]: https://pages.github.com/
[lfs]: https://git-lfs.com/
[scqtl]: https://github.com/jdblischak/singlecell-qtl
## Option 3: Use piggyback
An alternative option to Git LFS is the R package [piggyback][]. Its main
advantages are that it doesn't require paying to upgrade your GitHub account or
configuring Git. Instead, it uses R functions to upload large data files to
[releases][] on your GitHub repository. The main disadvantage, especially for
workflowr, is that it isn't integrated with Git. Therefore you will have to
manually version the large data files by uploading them via piggyback, and
recording the release version in a file in the workflowr project. This option is
recommended if you anticipate substantial, but infrequent, changes to your large
data files.
[piggyback]: https://cran.r-project.org/package=piggyback
[releases]: https://help.github.com/en/github/administering-a-repository/about-releases
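A rough sketch of this workflow is shown below. The repository name and release
tag are placeholders, and the argument names are assumed from the piggyback
documentation, so consult `?piggyback::pb_upload` for the details:
```{r piggyback-sketch}
library(piggyback)
# Create a release to hold this version of the data (a one-time step)
pb_new_release("myname/myproject", tag = "data-v1")
# Upload the large file to that release
pb_upload("data/large-file.rds", repo = "myname/myproject", tag = "data-v1")
# Later, or on another machine, retrieve that exact version
pb_download("large-file.rds", repo = "myname/myproject", tag = "data-v1")
```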
## Option 4: Use a database
Importing large amounts of data into an R session can drastically degrade R's
performance or even cause it to crash. If you have a large amount of data stored
in one or more tabular files, but only need to access a subset at a time, you
should consider converting your large data files into a single database. Then
you can query the database from R to obtain a given subset of the data needed
for a particular analysis. Not only is this memory efficient, but you will
benefit from the improved organization of your project's data. See the CRAN Task
View on [Databases][ctv-databases] for resources for interacting with databases
with R.
[ctv-databases]: https://cran.r-project.org/view=Databases
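As a small illustration, the sketch below uses the DBI and RSQLite packages to
pull only the subset of rows needed for one analysis (the database file, table,
and column are hypothetical):
```{r database-sketch}
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "data/measurements.sqlite")
subset_2020 <- dbGetQuery(con, "SELECT * FROM measurements WHERE year = 2020")
dbDisconnect(con)
```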
add_action <- function(x, action, name, ..., call = caller_env()) {
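  # Validate that `x` is a workflow, ensure the new action doesn't conflict
  # with an action that is already present, and then dispatch on the action's
  # subclass to add it to the appropriate stage (pre, fit, or post).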
validate_is_workflow(x, call = call)
check_conflicts(action, x, call = call)
add_action_impl(x, action, name, call = call)
}
# ------------------------------------------------------------------------------
add_action_impl <- function(x, action, name, ..., call = caller_env()) {
check_dots_empty()
UseMethod("add_action_impl", action)
}
add_action_impl.action_pre <- function(x, action, name, ..., call = caller_env()) {
check_singleton(x$pre$actions, name, call = call)
x$pre <- add_action_to_stage(x$pre, action, name, order_stage_pre())
x
}
add_action_impl.action_fit <- function(x, action, name, ..., call = caller_env()) {
check_singleton(x$fit$actions, name, call = call)
x$fit <- add_action_to_stage(x$fit, action, name, order_stage_fit())
x
}
add_action_impl.action_post <- function(x, action, name, ..., call = caller_env()) {
check_singleton(x$post$actions, name, call = call)
x$post <- add_action_to_stage(x$post, action, name, order_stage_post())
x
}
# ------------------------------------------------------------------------------
order_stage_pre <- function() {
# Case weights must come before preprocessor
c(
c("case_weights"),
c("formula", "recipe", "variables")
)
}
order_stage_fit <- function() {
"model"
}
order_stage_post <- function() {
character()
}
# ------------------------------------------------------------------------------
add_action_to_stage <- function(stage, action, name, order) {
actions <- c(stage$actions, list2(!!name := action))
# Apply required ordering for this stage
order <- intersect(order, names(actions))
actions <- actions[order]
stage$actions <- actions
stage
}
# ------------------------------------------------------------------------------
# `check_conflicts()` allows us to check that no other action interferes
# with the current action. For instance, we can't have a formula action with
# a recipe action
check_conflicts <- function(action, x, ..., call = caller_env()) {
check_dots_empty()
UseMethod("check_conflicts")
}
check_conflicts.default <- function(action, x, ..., call = caller_env()) {
invisible(action)
}
# ------------------------------------------------------------------------------
check_singleton <- function(actions, name, ..., call = caller_env()) {
check_dots_empty()
if (name %in% names(actions)) {
glubort("A `{name}` action has already been added to this workflow.", .call = call)
}
invisible(actions)
}
# ------------------------------------------------------------------------------
new_action_pre <- function(..., subclass = character()) {
new_action(..., subclass = c(subclass, "action_pre"))
}
new_action_fit <- function(..., subclass = character()) {
new_action(..., subclass = c(subclass, "action_fit"))
}
new_action_post <- function(..., subclass = character()) {
new_action(..., subclass = c(subclass, "action_post"))
}
is_action_pre <- function(x) {
inherits(x, "action_pre")
}
is_action_fit <- function(x) {
inherits(x, "action_fit")
}
is_action_post <- function(x) {
inherits(x, "action_post")
}
# ------------------------------------------------------------------------------
# An `action` is a list of objects that define how to perform a specific action,
# such as working with a recipe, or formula terms, or a model
new_action <- function(..., subclass = character()) {
data <- list2(...)
if (!is_uniquely_named(data)) {
abort("All elements of `...` must be uniquely named.", .internal = TRUE)
}
structure(data, class = c(subclass, "action"))
}
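# For illustration (hypothetical values): `new_action_pre(formula = y ~ x,
# blueprint = NULL, subclass = "action_formula")` returns a named list with
# class `c("action_formula", "action_pre", "action")`.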
is_action <- function(x) {
inherits(x, "action")
}
# ------------------------------------------------------------------------------
is_list_of_actions <- function(x) {
x <- compact(x)
all(map_lgl(x, is_action))
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/action.R
|
#' Tidy a workflow
#'
#' @description
#' This is a [generics::tidy()] method for a workflow that calls `tidy()` on
#' either the underlying parsnip model or the recipe, depending on the value
#' of `what`.
#'
#' `x` must be a fitted workflow, resulting in fitted parsnip model or prepped
#' recipe that you want to tidy.
#'
#' @details
#' To tidy the unprepped recipe, use [extract_preprocessor()] and call
#' `tidy()` on the result directly.
#'
#' @param x A workflow
#'
#' @param what A single string. Either `"model"` or `"recipe"` to select
#' which part of the workflow to tidy. Defaults to tidying the model.
#'
#' @param ... Arguments passed on to methods
#'
#' @export
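#' @examples
#' # A minimal sketch of typical usage; this assumes the broom package is
#' # installed. The workflow must be fitted before calling `tidy()`.
#' if (rlang::is_installed("broom")) {
#'
#' library(parsnip)
#' library(magrittr)
#'
#' wf <- workflow() %>%
#'   add_model(linear_reg() %>% set_engine("lm")) %>%
#'   add_formula(mpg ~ cyl + disp)
#'
#' wf_fit <- fit(wf, mtcars)
#'
#' # Tidy the underlying parsnip model (the default)
#' tidy(wf_fit)
#'
#' }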
tidy.workflow <- function(x, what = "model", ...) {
what <- arg_match(what, values = c("model", "recipe"))
if (identical(what, "model")) {
x <- extract_fit_parsnip(x)
out <- tidy(x, ...)
return(out)
}
if (identical(what, "recipe")) {
x <- extract_recipe(x)
out <- tidy(x, ...)
return(out)
}
abort("`what` must be 'model' or 'recipe'.", .internal = TRUE)
}
# ------------------------------------------------------------------------------
#' Glance at a workflow model
#'
#' @description
#' This is a [generics::glance()] method for a workflow that calls `glance()` on
#' the underlying parsnip model.
#'
#' `x` must be a trained workflow, resulting in fitted parsnip model to
#' `glance()` at.
#'
#' @param x A workflow
#'
#' @param ... Arguments passed on to methods
#'
#' @export
#' @examples
#' if (rlang::is_installed("broom")) {
#'
#' library(parsnip)
#' library(magrittr)
#' library(modeldata)
#'
#' data("attrition")
#'
#' model <- logistic_reg() %>%
#' set_engine("glm")
#'
#' wf <- workflow() %>%
#' add_model(model) %>%
#' add_formula(
#' Attrition ~ BusinessTravel + YearsSinceLastPromotion + OverTime
#' )
#'
#' # Workflow must be trained to call `glance()`
#' try(glance(wf))
#'
#' wf_fit <- fit(wf, attrition)
#'
#' glance(wf_fit)
#'
#' }
glance.workflow <- function(x, ...) {
x <- extract_fit_parsnip(x)
glance(x, ...)
}
# ------------------------------------------------------------------------------
#' Augment data with predictions
#'
#' @description
#' This is a [generics::augment()] method for a workflow that calls
#' `augment()` on the underlying parsnip model with `new_data`.
#'
#' `x` must be a trained workflow, resulting in fitted parsnip model to
#' `augment()` with.
#'
#' `new_data` will be preprocessed using the preprocessor in the workflow,
#' and that preprocessed data will be used to generate predictions. The
#' final result will contain the original `new_data` with new columns containing
#' the prediction information.
#'
#' @param x A workflow
#'
#' @param new_data A data frame of predictors
#'
#' @param ... Arguments passed on to methods
#'
#' @return `new_data` with new prediction specific columns.
#'
#' @param eval_time For censored regression models, a vector of time points at
#' which the survival probability is estimated. See
#' [parsnip::augment.model_fit()] for more details.
#'
#' @export
#' @examples
#' if (rlang::is_installed("broom")) {
#'
#' library(parsnip)
#' library(magrittr)
#' library(modeldata)
#'
#' data("attrition")
#'
#' model <- logistic_reg() %>%
#' set_engine("glm")
#'
#' wf <- workflow() %>%
#' add_model(model) %>%
#' add_formula(
#' Attrition ~ BusinessTravel + YearsSinceLastPromotion + OverTime
#' )
#'
#' wf_fit <- fit(wf, attrition)
#'
#' augment(wf_fit, attrition)
#'
#' }
augment.workflow <- function(x, new_data, eval_time = NULL, ...) {
fit <- extract_fit_parsnip(x)
mold <- extract_mold(x)
# supply outcomes to `augment.model_fit()` if possible (#131)
outcomes <- FALSE
if (length(fit$preproc$y_var) > 0) {
outcomes <- all(fit$preproc$y_var %in% names(new_data))
}
# `augment.model_fit()` requires the pre-processed `new_data`
forged <- hardhat::forge(new_data, blueprint = mold$blueprint, outcomes = outcomes)
if (outcomes) {
new_data_forged <- vctrs::vec_cbind(forged$predictors, forged$outcomes)
} else {
new_data_forged <- forged$predictors
}
new_data_forged <- prepare_augment_new_data(new_data_forged)
out <- augment(fit, new_data_forged, eval_time = eval_time, ...)
augment_columns <- setdiff(
names(out),
names(new_data_forged)
)
out <- out[augment_columns]
# Return original `new_data` with new prediction columns
out <- vctrs::vec_cbind(out, new_data)
out
}
prepare_augment_new_data <- function(x) {
# `augment()` works best with a data frame of predictors,
# so we need to undo any matrix/sparse matrix compositions that
# were returned from `hardhat::forge()` (#148)
if (is.data.frame(x)) {
x
} else if (is.matrix(x)) {
as.data.frame(x)
} else if (inherits(x, "dgCMatrix")) {
x <- as.matrix(x)
as.data.frame(x)
} else {
abort("Unknown predictor type returned by `forge_predictors()`.", .internal = TRUE)
}
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/broom.R
|
#' Butcher methods for a workflow
#'
#' These methods allow you to use the butcher package to reduce the size of
#' a workflow. After calling `butcher::butcher()` on a workflow, the only
#' guarantee is that you will still be able to `predict()` from that workflow.
#' Other functions may not work as expected.
#'
#' @param x A workflow.
#' @param verbose Should information be printed about how much memory is freed
#' from butchering?
#' @param ... Extra arguments possibly used by underlying methods.
#'
#' @name workflow-butcher
# @export - onLoad
#' @rdname workflow-butcher
axe_call.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_call(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_ctrl.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_ctrl(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_data.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_data(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
x <- replace_workflow_outcomes(x, NULL)
x <- replace_workflow_predictors(x, NULL)
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_env.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_env(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
# Axe env of preprocessor
preprocessor <- extract_preprocessor(x)
if (has_preprocessor_recipe(x)) {
preprocessor <- butcher::axe_env(preprocessor, verbose = verbose, ...)
} else if (has_preprocessor_formula(x)) {
preprocessor <- butcher::axe_env(preprocessor, verbose = verbose, ...)
} else if (has_preprocessor_variables(x)) {
preprocessor$outcomes <- butcher::axe_env(preprocessor$outcomes, verbose = verbose, ...)
preprocessor$predictors <- butcher::axe_env(preprocessor$predictors, verbose = verbose, ...)
}
x <- replace_workflow_preprocessor(x, preprocessor)
# Axe env of prepped recipe (separate from fresh recipe preprocessor)
if (has_preprocessor_recipe(x)) {
prepped <- extract_recipe(x)
prepped <- butcher::axe_env(prepped, verbose = verbose, ...)
x <- replace_workflow_prepped_recipe(x, prepped)
}
add_butcher_class(x)
}
# @export - onLoad
#' @rdname workflow-butcher
axe_fitted.workflow <- function(x, verbose = FALSE, ...) {
fit <- extract_fit_parsnip(x)
fit <- butcher::axe_fitted(fit, verbose = verbose, ...)
x <- replace_workflow_fit(x, fit)
if (has_preprocessor_recipe(x)) {
# hardhat already removes the `$template` from the fitted recipe that we get
# back from `extract_recipe()`, so we only axe the preprocessor recipe here.
preprocessor <- extract_preprocessor(x)
preprocessor <- butcher::axe_fitted(preprocessor, verbose = verbose, ...)
x <- replace_workflow_preprocessor(x, preprocessor)
}
add_butcher_class(x)
}
# ------------------------------------------------------------------------------
# butcher:::add_butcher_class
add_butcher_class <- function(x) {
if (!any(grepl("butcher", class(x)))) {
class(x) <- append(paste0("butchered_", class(x)[1]), class(x))
}
x
}
# ------------------------------------------------------------------------------
# For internal usage only, no checks on `value`. `value` can even be `NULL` to
# remove the element from the list. This is useful for removing
# predictors/outcomes when butchering. This does a direct replacement, with
# no resetting of `trained` or any stages.
replace_workflow_preprocessor <- function(x, value, ..., call = caller_env()) {
check_dots_empty()
validate_is_workflow(x, call = call)
if (has_preprocessor_formula(x)) {
x$pre$actions$formula$formula <- value
} else if (has_preprocessor_recipe(x)) {
x$pre$actions$recipe$recipe <- value
} else if (has_preprocessor_variables(x)) {
x$pre$actions$variables$variables <- value
} else {
abort("The workflow does not have a preprocessor.", call = call)
}
x
}
replace_workflow_fit <- function(x, value, ..., call = caller_env()) {
check_dots_empty()
validate_is_workflow(x, call = call)
if (!has_fit(x)) {
message <- c(
"The workflow does not have a model fit.",
"Do you need to call `fit()`?"
)
abort(message, call = call)
}
x$fit$fit <- value
x
}
replace_workflow_predictors <- function(x, value, ..., call = caller_env()) {
check_dots_empty()
validate_is_workflow(x, call = call)
mold <- extract_mold(x)
mold$predictors <- value
replace_workflow_mold(x, mold, call = call)
}
replace_workflow_outcomes <- function(x, value, ..., call = caller_env()) {
check_dots_empty()
validate_is_workflow(x, call = call)
mold <- extract_mold(x)
mold$outcomes <- value
replace_workflow_mold(x, mold, call = call)
}
replace_workflow_prepped_recipe <- function(x, value, ..., call = caller_env()) {
check_dots_empty()
validate_is_workflow(x, call = call)
if (!has_preprocessor_recipe(x)) {
abort("The workflow must have a recipe preprocessor.", call = call)
}
mold <- extract_mold(x)
mold$blueprint$recipe <- value
replace_workflow_mold(x, mold, call = call)
}
replace_workflow_mold <- function(x, value, ..., call = caller_env()) {
check_dots_empty()
validate_is_workflow(x, call = call)
if (!has_mold(x)) {
abort("The workflow does not have a mold. Have you called `fit()` yet?", call = call)
}
x$pre$mold <- value
x
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/butcher.R
|
# nocov start - compat-purrr (last updated: rlang 0.3.2.9000)
# This file serves as a reference for compatibility functions for
# purrr. They are not drop-in replacements but allow a similar style
# of programming. This is useful in cases where purrr is too heavy a
# package to depend on. Please find the most recent version in rlang's
# repository.
map <- function(.x, .f, ...) {
lapply(.x, .f, ...)
}
map_mold <- function(.x, .f, .mold, ...) {
out <- vapply(.x, .f, .mold, ..., USE.NAMES = FALSE)
names(out) <- names(.x)
out
}
map_lgl <- function(.x, .f, ...) {
map_mold(.x, .f, logical(1), ...)
}
map_int <- function(.x, .f, ...) {
map_mold(.x, .f, integer(1), ...)
}
map_dbl <- function(.x, .f, ...) {
map_mold(.x, .f, double(1), ...)
}
map_chr <- function(.x, .f, ...) {
map_mold(.x, .f, character(1), ...)
}
map_cpl <- function(.x, .f, ...) {
map_mold(.x, .f, complex(1), ...)
}
walk <- function(.x, .f, ...) {
map(.x, .f, ...)
invisible(.x)
}
pluck <- function(.x, .f) {
map(.x, `[[`, .f)
}
pluck_lgl <- function(.x, .f) {
map_lgl(.x, `[[`, .f)
}
pluck_int <- function(.x, .f) {
map_int(.x, `[[`, .f)
}
pluck_dbl <- function(.x, .f) {
map_dbl(.x, `[[`, .f)
}
pluck_chr <- function(.x, .f) {
map_chr(.x, `[[`, .f)
}
pluck_cpl <- function(.x, .f) {
map_cpl(.x, `[[`, .f)
}
map2 <- function(.x, .y, .f, ...) {
out <- mapply(.f, .x, .y, MoreArgs = list(...), SIMPLIFY = FALSE)
if (length(out) == length(.x)) {
set_names(out, names(.x))
} else {
set_names(out, NULL)
}
}
map2_lgl <- function(.x, .y, .f, ...) {
as.vector(map2(.x, .y, .f, ...), "logical")
}
map2_int <- function(.x, .y, .f, ...) {
as.vector(map2(.x, .y, .f, ...), "integer")
}
map2_dbl <- function(.x, .y, .f, ...) {
as.vector(map2(.x, .y, .f, ...), "double")
}
map2_chr <- function(.x, .y, .f, ...) {
as.vector(map2(.x, .y, .f, ...), "character")
}
map2_cpl <- function(.x, .y, .f, ...) {
as.vector(map2(.x, .y, .f, ...), "complex")
}
args_recycle <- function(args) {
lengths <- map_int(args, length)
n <- max(lengths)
stopifnot(all(lengths == 1L | lengths == n))
to_recycle <- lengths == 1L
args[to_recycle] <- map(args[to_recycle], function(x) rep.int(x, n))
args
}
pmap <- function(.l, .f, ...) {
args <- args_recycle(.l)
do.call("mapply", c(
FUN = list(quote(.f)),
args, MoreArgs = quote(list(...)),
SIMPLIFY = FALSE, USE.NAMES = FALSE
))
}
probe <- function(.x, .p, ...) {
if (is_logical(.p)) {
stopifnot(length(.p) == length(.x))
.p
} else {
map_lgl(.x, .p, ...)
}
}
keep <- function(.x, .f, ...) {
.x[probe(.x, .f, ...)]
}
discard <- function(.x, .p, ...) {
sel <- probe(.x, .p, ...)
.x[is.na(sel) | !sel]
}
map_if <- function(.x, .p, .f, ...) {
matches <- probe(.x, .p)
.x[matches] <- map(.x[matches], .f, ...)
.x
}
compact <- function(.x) {
Filter(length, .x)
}
transpose <- function(.l) {
inner_names <- names(.l[[1]])
if (is.null(inner_names)) {
fields <- seq_along(.l[[1]])
} else {
fields <- set_names(inner_names)
}
map(fields, function(i) {
map(.l, .subset2, i)
})
}
every <- function(.x, .p, ...) {
for (i in seq_along(.x)) {
if (!rlang::is_true(.p(.x[[i]], ...))) {
return(FALSE)
}
}
TRUE
}
some <- function(.x, .p, ...) {
for (i in seq_along(.x)) {
if (rlang::is_true(.p(.x[[i]], ...))) {
return(TRUE)
}
}
FALSE
}
negate <- function(.p) {
function(...) !.p(...)
}
reduce <- function(.x, .f, ..., .init) {
f <- function(x, y) .f(x, y, ...)
Reduce(f, .x, init = .init)
}
reduce_right <- function(.x, .f, ..., .init) {
f <- function(x, y) .f(y, x, ...)
Reduce(f, .x, init = .init, right = TRUE)
}
accumulate <- function(.x, .f, ..., .init) {
f <- function(x, y) .f(x, y, ...)
Reduce(f, .x, init = .init, accumulate = TRUE)
}
accumulate_right <- function(.x, .f, ..., .init) {
f <- function(x, y) .f(y, x, ...)
Reduce(f, .x, init = .init, right = TRUE, accumulate = TRUE)
}
detect <- function(.x, .f, ..., .right = FALSE, .p = is_true) {
for (i in index(.x, .right)) {
if (.p(.f(.x[[i]], ...))) {
return(.x[[i]])
}
}
NULL
}
detect_index <- function(.x, .f, ..., .right = FALSE, .p = is_true) {
for (i in index(.x, .right)) {
if (.p(.f(.x[[i]], ...))) {
return(i)
}
}
0L
}
index <- function(x, right = FALSE) {
idx <- seq_along(x)
if (right) {
idx <- rev(idx)
}
idx
}
imap <- function(.x, .f, ...) {
map2(.x, vec_index(.x), .f, ...)
}
vec_index <- function(x) {
names(x) %||% seq_along(x)
}
# nocov end
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/compat-purrr.R
|
#' Control object for a workflow
#'
#' `control_workflow()` holds the control parameters for a workflow.
#'
#' @param control_parsnip A parsnip control object. If `NULL`, a default control
#' argument is constructed from [parsnip::control_parsnip()].
#'
#' @return
#' A `control_workflow` object for tweaking the workflow fitting process.
#'
#' @export
#' @examples
#' control_workflow()
control_workflow <- function(control_parsnip = NULL) {
control_parsnip <- check_control_parsnip(control_parsnip)
data <- list(
control_parsnip = control_parsnip
)
structure(data, class = "control_workflow")
}
#' @export
print.control_workflow <- function(x, ...) {
cat("<control_workflow>")
invisible()
}
check_control_parsnip <- function(x, ..., call = caller_env()) {
check_dots_empty()
if (is.null(x)) {
x <- parsnip::control_parsnip()
}
if (!inherits(x, "control_parsnip")) {
abort("`control_parsnip` must be a 'control_parsnip' object.", call = call)
}
x
}
is_control_workflow <- function(x) {
inherits(x, "control_workflow")
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/control.R
|
#' Extract elements of a workflow
#'
#' @description
#' These functions extract various elements from a workflow object. If they do
#' not exist yet, an error is thrown.
#'
#' - `extract_preprocessor()` returns the formula, recipe, or variable
#' expressions used for preprocessing.
#'
#' - `extract_spec_parsnip()` returns the parsnip model specification.
#'
#' - `extract_fit_parsnip()` returns the parsnip model fit object.
#'
#' - `extract_fit_engine()` returns the engine specific fit embedded within
#' a parsnip model fit. For example, when using [parsnip::linear_reg()]
#' with the `"lm"` engine, this returns the underlying `lm` object.
#'
#' - `extract_mold()` returns the preprocessed "mold" object returned
#' from [hardhat::mold()]. It contains information about the preprocessing,
#' including either the prepped recipe, the formula terms object, or
#' variable selectors.
#'
#' - `extract_recipe()` returns the recipe. The `estimated` argument specifies
#' whether the fitted or original recipe is returned.
#'
#' - `extract_parameter_dials()` returns a single dials parameter object.
#'
#' - `extract_parameter_set_dials()` returns a set of dials parameter objects.
#'
#' @param x A workflow
#'
#' @param estimated A logical for whether the original (unfit) recipe or the
#' fitted recipe should be returned. This argument should be named.
#' @param parameter A single string for the parameter ID.
#' @param ... Not currently used.
#'
#' @details
#' Extracting the underlying engine fit can be helpful for describing the
#' model (via `print()`, `summary()`, `plot()`, etc.) or for variable
#' importance/explainers.
#'
#' However, users should not invoke the `predict()` method on an extracted
#' model. There may be preprocessing operations that `workflows` has executed on
#' the data prior to giving it to the model. Bypassing these can lead to errors
#' or silently generating incorrect predictions.
#'
#' *Good*:
#' ```r
#' workflow_fit %>% predict(new_data)
#' ```
#'
#' *Bad*:
#' ```r
#' workflow_fit %>% extract_fit_engine() %>% predict(new_data)
#' # or
#' workflow_fit %>% extract_fit_parsnip() %>% predict(new_data)
#' ```
#'
#' @return
#' The extracted value from the object, `x`, as described in the description
#' section.
#'
#' @name extract-workflow
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#'
#' model <- linear_reg() %>%
#' set_engine("lm")
#'
#' recipe <- recipe(mpg ~ cyl + disp, mtcars) %>%
#' step_log(disp)
#'
#' base_wf <- workflow() %>%
#' add_model(model)
#'
#' recipe_wf <- add_recipe(base_wf, recipe)
#' formula_wf <- add_formula(base_wf, mpg ~ cyl + log(disp))
#' variable_wf <- add_variables(base_wf, mpg, c(cyl, disp))
#'
#' fit_recipe_wf <- fit(recipe_wf, mtcars)
#' fit_formula_wf <- fit(formula_wf, mtcars)
#'
#' # The preprocessor is a recipe, formula, or a list holding the
#' # tidyselect expressions identifying the outcomes/predictors
#' extract_preprocessor(recipe_wf)
#' extract_preprocessor(formula_wf)
#' extract_preprocessor(variable_wf)
#'
#' # The `spec` is the parsnip spec before it has been fit.
#' # The `fit` is the fitted parsnip model.
#' extract_spec_parsnip(fit_formula_wf)
#' extract_fit_parsnip(fit_formula_wf)
#' extract_fit_engine(fit_formula_wf)
#'
#' # The mold is returned from `hardhat::mold()`, and contains the
#' # predictors, outcomes, and information about the preprocessing
#' # for use on new data at `predict()` time.
#' extract_mold(fit_recipe_wf)
#'
#' # A useful shortcut is to extract the fitted recipe from the workflow
#' extract_recipe(fit_recipe_wf)
#'
#' # That is identical to
#' identical(
#' extract_mold(fit_recipe_wf)$blueprint$recipe,
#' extract_recipe(fit_recipe_wf)
#' )
NULL
#' @export
#' @rdname extract-workflow
extract_spec_parsnip.workflow <- function(x, ...) {
if (has_spec(x)) {
return(x$fit$actions$model$spec)
}
abort("The workflow does not have a model spec.")
}
#' @export
#' @rdname extract-workflow
extract_recipe.workflow <- function(x, ..., estimated = TRUE) {
check_dots_empty()
if (!is_bool(estimated)) {
abort("`estimated` must be a single `TRUE` or `FALSE`.")
}
if (!has_preprocessor_recipe(x)) {
abort("The workflow must have a recipe preprocessor.")
}
if (estimated) {
# Gracefully fails if not yet fitted
mold <- extract_mold(x)
res <- mold$blueprint$recipe
} else {
res <- x$pre$actions$recipe$recipe
}
res
}
#' @export
#' @rdname extract-workflow
extract_fit_parsnip.workflow <- function(x, ...) {
if (has_fit(x)) {
return(x$fit$fit)
}
abort(c(
"Can't extract a model fit from an untrained workflow.",
i = "Do you need to call `fit()`?"
))
}
#' @export
#' @rdname extract-workflow
extract_fit_engine.workflow <- function(x, ...) {
extract_fit_parsnip(x)$fit
}
#' @export
#' @rdname extract-workflow
extract_mold.workflow <- function(x, ...) {
if (has_mold(x)) {
return(x$pre$mold)
}
abort(c(
"Can't extract a mold from an untrained workflow.",
i = "Do you need to call `fit()`?"
))
}
#' @export
#' @rdname extract-workflow
extract_preprocessor.workflow <- function(x, ...) {
if (has_preprocessor_formula(x)) {
return(x$pre$actions$formula$formula)
}
if (has_preprocessor_recipe(x)) {
return(x$pre$actions$recipe$recipe)
}
if (has_preprocessor_variables(x)) {
return(x$pre$actions$variables$variables)
}
abort("The workflow does not have a preprocessor.")
}
#' @export
#' @rdname extract-workflow
extract_parameter_set_dials.workflow <- function(x, ...) {
model <- extract_spec_parsnip(x)
param_data <- extract_parameter_set_dials(model)
if (has_preprocessor_recipe(x)) {
recipe <- extract_preprocessor(x)
recipe_param_data <- extract_parameter_set_dials(recipe)
param_data <- vctrs::vec_rbind(param_data, recipe_param_data)
}
dials::parameters_constr(
param_data$name,
param_data$id,
param_data$source,
param_data$component,
param_data$component_id,
param_data$object
)
}
#' @export
#' @rdname extract-workflow
extract_parameter_dials.workflow <- function(x, parameter, ...) {
extract_parameter_dials(extract_parameter_set_dials(x), parameter)
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/extract.R
|
#' Add a model to a workflow
#'
#' @description
#' - `add_model()` adds a parsnip model to the workflow.
#'
#' - `remove_model()` removes the model specification as well as any fitted
#' model object. Any extra formulas are also removed.
#'
#' - `update_model()` first removes the model then adds the new specification to
#' the workflow.
#'
#' @details
#' `add_model()` is a required step to construct a minimal workflow.
#'
#' @includeRmd man/rmd/indicators.Rmd details
#'
#' @inheritParams rlang::args_dots_empty
#'
#' @param x A workflow.
#'
#' @param spec A parsnip model specification.
#'
#' @param formula An optional formula override to specify the terms of the
#' model. Typically, the terms are extracted from the formula or recipe
#' preprocessing methods. However, some models (like survival and bayesian
#' models) use the formula not to preprocess, but to specify the structure
#' of the model. In those cases, a formula specifying the model structure
#' must be passed unchanged into the model call itself. This argument is
#' used for those purposes.
#'
#' @return
#' `x`, updated with either a new or removed model.
#'
#' @export
#' @examples
#' library(parsnip)
#'
#' lm_model <- linear_reg()
#' lm_model <- set_engine(lm_model, "lm")
#'
#' regularized_model <- set_engine(lm_model, "glmnet")
#'
#' workflow <- workflow()
#' workflow <- add_model(workflow, lm_model)
#' workflow
#'
#' workflow <- add_formula(workflow, mpg ~ .)
#' workflow
#'
#' remove_model(workflow)
#'
#' fitted <- fit(workflow, data = mtcars)
#' fitted
#'
#' remove_model(fitted)
#'
#' remove_model(workflow)
#'
#' update_model(workflow, regularized_model)
#' update_model(fitted, regularized_model)
add_model <- function(x, spec, ..., formula = NULL) {
check_dots_empty()
action <- new_action_model(spec, formula)
add_action(x, action, "model")
}
#' @rdname add_model
#' @export
remove_model <- function(x) {
validate_is_workflow(x)
if (!has_spec(x)) {
rlang::warn("The workflow has no model to remove.")
}
new_workflow(
pre = x$pre,
fit = new_stage_fit(),
post = new_stage_post(actions = x$post$actions),
trained = FALSE
)
}
#' @rdname add_model
#' @export
update_model <- function(x, spec, ..., formula = NULL) {
check_dots_empty()
x <- remove_model(x)
add_model(x, spec, formula = formula)
}
# ------------------------------------------------------------------------------
fit.action_model <- function(object, workflow, control, ...) {
if (!is_control_workflow(control)) {
abort("`control` must be a workflows control object created by `control_workflow()`.")
}
control_parsnip <- control$control_parsnip
spec <- object$spec
formula <- object$formula
mold <- extract_mold0(workflow)
case_weights <- extract_case_weights0(workflow)
if (is.null(formula)) {
fit <- fit_from_xy(spec, mold, case_weights, control_parsnip)
} else {
fit <- fit_from_formula(spec, mold, case_weights, control_parsnip, formula)
}
workflow$fit$fit <- fit
# Only the workflow is returned
workflow
}
fit_from_xy <- function(spec, mold, case_weights, control_parsnip) {
fit_xy(
spec,
x = mold$predictors,
y = mold$outcomes,
case_weights = case_weights,
control = control_parsnip
)
}
fit_from_formula <- function(spec, mold, case_weights, control_parsnip, formula) {
data <- cbind(mold$outcomes, mold$predictors)
fit(
spec,
formula = formula,
data = data,
case_weights = case_weights,
control = control_parsnip
)
}
extract_mold0 <- function(workflow) {
mold <- workflow$pre$mold
if (is.null(mold)) {
abort("No mold exists. `workflow` pre stage has not been run.", .internal = TRUE)
}
mold
}
extract_case_weights0 <- function(workflow) {
if (!has_case_weights(workflow)) {
return(NULL)
}
case_weights <- workflow$pre$case_weights
if (is_null(case_weights)) {
abort("No case weights exist. `workflow` pre stage has not been run.", .internal = TRUE)
}
case_weights
}
# ------------------------------------------------------------------------------
new_action_model <- function(spec, formula, ..., call = caller_env()) {
check_dots_empty()
if (!is_model_spec(spec)) {
abort("`spec` must be a `model_spec`.", call = call)
}
mode <- spec$mode
if (is_string(mode, string = "unknown")) {
message <- c(
"`spec` must have a known mode.",
i = paste0(
"Set the mode of `spec` by using `parsnip::set_mode()` or by setting ",
"the mode directly in the parsnip specification function."
)
)
abort(message, call = call)
}
if (!is.null(formula) && !is_formula(formula)) {
abort("`formula` must be a formula, or `NULL`.", call = call)
}
if (!parsnip::spec_is_loaded(spec = spec) && inherits(spec, "model_spec")) {
parsnip::prompt_missing_implementation(
spec = spec,
prompt = cli::cli_abort,
call = call
)
}
new_action_fit(spec = spec, formula = formula, subclass = "action_model")
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/fit-action-model.R
|
#' Fit a workflow object
#'
#' @description
#' Fitting a workflow currently involves two main steps:
#'
#' - Preprocessing the data using a formula preprocessor, or by calling
#' [recipes::prep()] on a recipe.
#'
#' - Fitting the underlying parsnip model using [parsnip::fit.model_spec()].
#'
#' @details
#' In the future, there will also be _postprocessing_ steps that can be added
#' after the model has been fit.
#'
#' @includeRmd man/rmd/indicators.Rmd details
#'
#' @param object A workflow
#'
#' @param data A data frame of predictors and outcomes to use when fitting the
#' workflow
#'
#' @param ... Not used
#'
#' @param control A [control_workflow()] object
#'
#' @return
#' The workflow `object`, updated with a fit parsnip model in the
#' `object$fit$fit` slot.
#'
#' @name fit-workflow
#' @export
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#'
#' model <- linear_reg() %>%
#' set_engine("lm")
#'
#' base_wf <- workflow() %>%
#' add_model(model)
#'
#' formula_wf <- base_wf %>%
#' add_formula(mpg ~ cyl + log(disp))
#'
#' fit(formula_wf, mtcars)
#'
#' recipe <- recipe(mpg ~ cyl + disp, mtcars) %>%
#' step_log(disp)
#'
#' recipe_wf <- base_wf %>%
#' add_recipe(recipe)
#'
#' fit(recipe_wf, mtcars)
fit.workflow <- function(object, data, ..., control = control_workflow()) {
check_dots_empty()
if (is_missing(data)) {
abort("`data` must be provided to fit a workflow.")
}
workflow <- object
workflow <- .fit_pre(workflow, data)
workflow <- .fit_model(workflow, control)
workflow <- .fit_finalize(workflow)
# TODO: Post-processing before `.fit_finalize()`?
workflow
}
# ------------------------------------------------------------------------------
#' Internal workflow functions
#'
#' `.fit_pre()`, `.fit_model()`, and `.fit_finalize()` are internal workflow
#' functions for _partially_ fitting a workflow object. They are only exported
#' for usage by the tuning package, [tune](https://github.com/tidymodels/tune),
#' and the general user should never need to worry about them.
#'
#' @param workflow A workflow
#'
#' For `.fit_pre()`, this should be a fresh workflow.
#'
#' For `.fit_model()`, this should be a workflow that has already been trained
#' through `.fit_pre()`.
#'
#' For `.fit_finalize()`, this should be a workflow that has been through
#' both `.fit_pre()` and `.fit_model()`.
#'
#' @param data A data frame of predictors and outcomes to use when fitting the
#' workflow
#'
#' @param control A [control_workflow()] object
#'
#' @name workflows-internals
#' @keywords internal
#' @export
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#'
#' model <- linear_reg() %>%
#' set_engine("lm")
#'
#' wf_unfit <- workflow() %>%
#' add_model(model) %>%
#' add_formula(mpg ~ cyl + log(disp))
#'
#' wf_fit_pre <- .fit_pre(wf_unfit, mtcars)
#' wf_fit_model <- .fit_model(wf_fit_pre, control_workflow())
#' wf_fit <- .fit_finalize(wf_fit_model)
#'
#' # Notice that fitting through the model doesn't mark the
#' # workflow as being "trained"
#' wf_fit_model
#'
#' # Finalizing the workflow marks it as "trained"
#' wf_fit
#'
#' # Which allows you to predict from it
#' try(predict(wf_fit_model, mtcars))
#'
#' predict(wf_fit, mtcars)
.fit_pre <- function(workflow, data) {
validate_has_preprocessor(workflow)
# A model spec is required to ensure that we can always
# finalize the blueprint, no matter the preprocessor
validate_has_model(workflow)
workflow <- finalize_blueprint(workflow)
n <- length(workflow[["pre"]]$actions)
for (i in seq_len(n)) {
action <- workflow[["pre"]]$actions[[i]]
# Update both the `workflow` and the `data` as we iterate through pre steps
result <- fit(action, workflow = workflow, data = data)
workflow <- result$workflow
data <- result$data
}
# But only return the workflow, it contains the final set of data in `mold`
workflow
}
#' @rdname workflows-internals
#' @export
.fit_model <- function(workflow, control) {
action_model <- workflow[["fit"]][["actions"]][["model"]]
fit(action_model, workflow = workflow, control = control)
}
#' @rdname workflows-internals
#' @export
.fit_finalize <- function(workflow) {
set_trained(workflow, TRUE)
}
# ------------------------------------------------------------------------------
validate_has_preprocessor <- function(x, ..., call = caller_env()) {
check_dots_empty()
has_preprocessor <-
has_preprocessor_formula(x) ||
has_preprocessor_recipe(x) ||
has_preprocessor_variables(x)
if (!has_preprocessor) {
message <- c(
"The workflow must have a formula, recipe, or variables preprocessor.",
i = "Provide one with `add_formula()`, `add_recipe()`, or `add_variables()`."
)
abort(message, call = call)
}
invisible(x)
}
validate_has_model <- function(x, ..., call = caller_env()) {
check_dots_empty()
has_model <- has_action(x$fit, "model")
if (!has_model) {
message <- c(
"The workflow must have a model.",
i = "Provide one with `add_model()`."
)
abort(message, call = call)
}
invisible(x)
}
# ------------------------------------------------------------------------------
finalize_blueprint <- function(workflow) {
# Use user supplied blueprint if provided
if (has_blueprint(workflow)) {
return(workflow)
}
if (has_preprocessor_recipe(workflow)) {
finalize_blueprint_recipe(workflow)
} else if (has_preprocessor_formula(workflow)) {
finalize_blueprint_formula(workflow)
} else if (has_preprocessor_variables(workflow)) {
finalize_blueprint_variables(workflow)
} else {
abort("`workflow` should have a preprocessor at this point.", .internal = TRUE)
}
}
finalize_blueprint_recipe <- function(workflow) {
# Use the default blueprint, no parsnip model encoding info is used here
blueprint <- hardhat::default_recipe_blueprint()
recipe <- extract_preprocessor(workflow)
update_recipe(workflow, recipe = recipe, blueprint = blueprint)
}
finalize_blueprint_formula <- function(workflow) {
tbl_encodings <- pull_workflow_spec_encoding_tbl(workflow)
indicators <- tbl_encodings$predictor_indicators
intercept <- tbl_encodings$compute_intercept
if (!is_string(indicators)) {
abort("`indicators` encoding from parsnip should be a string.", .internal = TRUE)
}
if (!is_bool(intercept)) {
abort("`intercept` encoding from parsnip should be a bool.", .internal = TRUE)
}
# Use model specific information to construct the blueprint
blueprint <- hardhat::default_formula_blueprint(
indicators = indicators,
intercept = intercept
)
formula <- extract_preprocessor(workflow)
update_formula(workflow, formula = formula, blueprint = blueprint)
}
pull_workflow_spec_encoding_tbl <- function(workflow) {
spec <- extract_spec_parsnip(workflow)
spec_cls <- class(spec)[[1]]
if (modelenv::is_unsupervised_spec(spec)) {
tbl_encodings <- modelenv::get_encoding(spec_cls)
} else {
tbl_encodings <- parsnip::get_encoding(spec_cls)
}
indicator_engine <- tbl_encodings$engine == spec$engine
indicator_mode <- tbl_encodings$mode == spec$mode
indicator_spec <- indicator_engine & indicator_mode
out <- tbl_encodings[indicator_spec, , drop = FALSE]
if (nrow(out) != 1L) {
abort("Exactly 1 model/engine/mode combination must be located.", .internal = TRUE)
}
out
}
finalize_blueprint_variables <- function(workflow) {
# Use the default blueprint, no parsnip model encoding info is used here
blueprint <- hardhat::default_xy_blueprint()
variables <- extract_preprocessor(workflow)
update_variables(
workflow,
blueprint = blueprint,
variables = variables
)
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/fit.R
|
# Lazily registered in .onLoad()
required_pkgs.workflow <- function(x, infra = TRUE, ...) {
out <- character()
if (has_spec(x)) {
model <- extract_spec_parsnip(x)
pkgs <- generics::required_pkgs(model, infra = infra)
out <- c(pkgs, out)
}
if (has_preprocessor_recipe(x)) {
preprocessor <- extract_preprocessor(x)
# This also has the side effect of loading recipes, ensuring that its
# S3 methods for `required_pkgs()` are registered
if (!is_installed("recipes")) {
glubort(
"The recipes package must be installed to compute the ",
"`required_pkgs()` of a workflow with a recipe preprocessor."
)
}
pkgs <- generics::required_pkgs(preprocessor, infra = infra)
out <- c(pkgs, out)
}
out <- unique(out)
out
}
#' @export
tune_args.workflow <- function(object, ...) {
model <- extract_spec_parsnip(object)
param_data <- generics::tune_args(model)
if (has_preprocessor_recipe(object)) {
recipe <- extract_preprocessor(object)
recipe_param_data <- generics::tune_args(recipe)
param_data <- vctrs::vec_rbind(param_data, recipe_param_data)
}
param_data
}
#' @export
tunable.workflow <- function(x, ...) {
model <- extract_spec_parsnip(x)
param_data <- generics::tunable(model)
if (has_preprocessor_recipe(x)) {
recipe <- extract_preprocessor(x)
recipe_param_data <- generics::tunable(recipe)
param_data <- vctrs::vec_rbind(param_data, recipe_param_data)
}
param_data
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/generics.R
|
#' Add case weights to a workflow
#'
#' @description
#' This family of functions revolves around selecting a column of `data` to use
#' for _case weights_. This column must be one of the allowed case weight types,
#' such as [hardhat::frequency_weights()] or [hardhat::importance_weights()].
#' Specifically, it must return `TRUE` from [hardhat::is_case_weights()]. The
#' underlying model will decide whether or not the type of case weights you have
#' supplied are applicable or not.
#'
#' - `add_case_weights()` specifies the column that will be interpreted as
#' case weights in the model. This column must be present in the `data`
#' supplied to [fit()][fit.workflow()].
#'
#' - `remove_case_weights()` removes the case weights. Additionally, if the
#' model has already been fit, then the fit is removed.
#'
#' - `update_case_weights()` first removes the case weights, then replaces them
#' with the new ones.
#'
#' @details
#' For formula and variable preprocessors, the case weights `col` is removed
#' from the data before the preprocessor is evaluated. This allows you to use
#' formulas like `y ~ .` or tidyselection like `everything()` without fear of
#' accidentally selecting the case weights column.
#'
#' For recipe preprocessors, the case weights `col` is not removed and is
#' passed along to the recipe. Typically, your recipe will include steps that
#' can utilize case weights.
#'
#' @param x A workflow
#'
#' @param col A single unquoted column name specifying the case weights for
#' the model. This must be a classed case weights column, as determined by
#' [hardhat::is_case_weights()].
#'
#' @export
#' @examples
#' library(parsnip)
#' library(magrittr)
#' library(hardhat)
#'
#' mtcars2 <- mtcars
#' mtcars2$gear <- frequency_weights(mtcars2$gear)
#'
#' spec <- linear_reg() %>%
#' set_engine("lm")
#'
#' wf <- workflow() %>%
#' add_case_weights(gear) %>%
#' add_formula(mpg ~ .) %>%
#' add_model(spec)
#'
#' wf <- fit(wf, mtcars2)
#'
#' # Notice that the case weights (gear) aren't included in the predictors
#' extract_mold(wf)$predictors
#'
#' # Strip them out of the workflow, which also resets the model
#' remove_case_weights(wf)
add_case_weights <- function(x, col) {
col <- enquo(col)
action <- new_action_case_weights(col)
# Ensures that case-weight actions are always before preprocessor actions
add_action(x, action, "case_weights")
}
#' @rdname add_case_weights
#' @export
remove_case_weights <- function(x) {
validate_is_workflow(x)
if (!has_case_weights(x)) {
warn("The workflow has no case weights specification to remove.")
}
actions <- x$pre$actions
actions[["case_weights"]] <- NULL
new_workflow(
pre = new_stage_pre(actions = actions),
fit = new_stage_fit(actions = x$fit$actions),
post = new_stage_post(actions = x$post$actions),
trained = FALSE
)
}
#' @rdname add_case_weights
#' @export
update_case_weights <- function(x, col) {
x <- remove_case_weights(x)
add_case_weights(x, {{ col }})
}
# ------------------------------------------------------------------------------
fit.action_case_weights <- function(object, workflow, data, ...) {
col <- object$col
loc <- eval_select_case_weights(col, data)
case_weights <- data[[loc]]
if (!hardhat::is_case_weights(case_weights)) {
abort(paste0(
"`col` must select a classed case weights column, as determined by ",
"`hardhat::is_case_weights()`. For example, it could be a column ",
"created by `hardhat::frequency_weights()` or ",
"`hardhat::importance_weights()`."
))
}
# Remove case weights for formula/variable preprocessors so `y ~ .` and
# `everything()` don't pick up the weights column. Recipe preprocessors
# likely need the case weights columns so we don't remove them in that case.
# They will be automatically tagged by the recipe with a `"case_weights"`
# role, so they won't be considered predictors during `bake()`, meaning
# that passing them through should be harmless.
remove <-
has_preprocessor_formula(workflow) ||
has_preprocessor_variables(workflow)
if (remove) {
data[[loc]] <- NULL
}
workflow$pre <- new_stage_pre(
actions = workflow$pre$actions,
mold = NULL,
case_weights = case_weights
)
# All pre steps return the `workflow` and `data`
list(workflow = workflow, data = data)
}
# ------------------------------------------------------------------------------
new_action_case_weights <- function(col) {
if (!is_quosure(col)) {
abort("`col` must be a quosure.", .internal = TRUE)
}
new_action_pre(
col = col,
subclass = "action_case_weights"
)
}
# ------------------------------------------------------------------------------
extract_case_weights_col <- function(x) {
x$pre$actions$case_weights$col
}
eval_select_case_weights <- function(col, data, ..., call = caller_env()) {
check_dots_empty()
# `col` is saved as a quosure, so it carries along the evaluation environment
env <- empty_env()
loc <- tidyselect::eval_select(
expr = col,
data = data,
env = env,
error_call = call
)
if (length(loc) != 1L) {
message <- paste0(
"`col` must specify exactly one column from ",
"`data` to extract case weights from."
)
abort(message, call = call)
}
loc
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/pre-action-case-weights.R
|
#' Add formula terms to a workflow
#'
#' @description
#' - `add_formula()` specifies the terms of the model through the usage of a
#' formula.
#'
#' - `remove_formula()` removes the formula as well as any downstream objects
#' that might get created after the formula is used for preprocessing, such as
#' terms. Additionally, if the model has already been fit, then the fit is
#' removed.
#'
#' - `update_formula()` first removes the formula, then replaces the previous
#' formula with the new one. Any model that has already been fit based on this
#' formula will need to be refit.
#'
#' @details
#' To fit a workflow, exactly one of [add_formula()], [add_recipe()], or
#' [add_variables()] _must_ be specified.
#'
#' @includeRmd man/rmd/add-formula.Rmd details
#'
#' @param x A workflow
#'
#' @param formula A formula specifying the terms of the model. It is advised to
#' not do preprocessing in the formula, and instead use a recipe if that is
#' required.
#'
#' @param ... Not used.
#'
#' @param blueprint A hardhat blueprint used for fine tuning the preprocessing.
#'
#' If `NULL`, [hardhat::default_formula_blueprint()] is used and is passed
#' arguments that best align with the model present in the workflow.
#'
#' Note that preprocessing done here is separate from preprocessing that
#' might be done by the underlying model. For example, if a blueprint with
#' `indicators = "none"` is specified, no dummy variables will be created by
#' hardhat, but if the underlying model requires a formula interface that
#' internally uses [stats::model.matrix()], factors will still be expanded to
#' dummy variables by the model.
#'
#' @return
#' `x`, updated with either a new or removed formula preprocessor.
#'
#' @export
#' @examples
#' workflow <- workflow()
#' workflow <- add_formula(workflow, mpg ~ cyl)
#' workflow
#'
#' remove_formula(workflow)
#'
#' update_formula(workflow, mpg ~ disp)
add_formula <- function(x, formula, ..., blueprint = NULL) {
check_dots_empty()
action <- new_action_formula(formula, blueprint)
add_action(x, action, "formula")
}
#' @rdname add_formula
#' @export
remove_formula <- function(x) {
validate_is_workflow(x)
if (!has_preprocessor_formula(x)) {
rlang::warn("The workflow has no formula preprocessor to remove.")
}
actions <- x$pre$actions
actions[["formula"]] <- NULL
new_workflow(
pre = new_stage_pre(actions = actions),
fit = new_stage_fit(actions = x$fit$actions),
post = new_stage_post(actions = x$post$actions),
trained = FALSE
)
}
#' @rdname add_formula
#' @export
update_formula <- function(x, formula, ..., blueprint = NULL) {
check_dots_empty()
x <- remove_formula(x)
add_formula(x, formula, blueprint = blueprint)
}
# ------------------------------------------------------------------------------
fit.action_formula <- function(object, workflow, data, ...) {
formula <- object$formula
blueprint <- object$blueprint
# TODO - Strip out the formula environment at some time?
mold <- hardhat::mold(formula, data, blueprint = blueprint)
check_for_offset(mold)
workflow$pre <- new_stage_pre(
actions = workflow$pre$actions,
mold = mold,
case_weights = workflow$pre$case_weights
)
# All pre steps return the `workflow` and `data`
list(workflow = workflow, data = data)
}
check_for_offset <- function(mold, ..., call = caller_env()) {
check_dots_empty()
# `hardhat::mold()` specially detects offsets in the formula preprocessor and
# places them in an "extras" slot. This is useful for modeling package
# authors, but we don't want users to provide an offset in the formula
# supplied to `add_formula()` because "extra" columns aren't passed on to
# parsnip. They should use a model formula instead (#162).
offset <- mold$extras$offset
if (!is.null(offset)) {
message <- c(
"Can't use an offset in the formula supplied to `add_formula()`.",
i = "Instead, specify offsets through a model formula in `add_model(formula = )`."
)
abort(message, call = call)
}
}
# ------------------------------------------------------------------------------
check_conflicts.action_formula <- function(action, x, ..., call = caller_env()) {
pre <- x$pre
if (has_action(pre, "recipe")) {
abort("A formula cannot be added when a recipe already exists.", call = call)
}
if (has_action(pre, "variables")) {
abort("A formula cannot be added when variables already exist.", call = call)
}
invisible(action)
}
# ------------------------------------------------------------------------------
new_action_formula <- function(formula, blueprint, ..., call = caller_env()) {
check_dots_empty()
if (!is_formula(formula)) {
abort("`formula` must be a formula.", call = call)
}
# `NULL` blueprints are finalized at fit time
if (!is_null(blueprint) && !is_formula_blueprint(blueprint)) {
abort("`blueprint` must be a hardhat 'formula_blueprint'.", call = call)
}
new_action_pre(
formula = formula,
blueprint = blueprint,
subclass = "action_formula"
)
}
is_formula_blueprint <- function(x) {
inherits(x, "formula_blueprint")
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/pre-action-formula.R
|
#' Add a recipe to a workflow
#'
#' @description
#' - `add_recipe()` specifies the terms of the model and any preprocessing that
#' is required through the usage of a recipe.
#'
#' - `remove_recipe()` removes the recipe as well as any downstream objects
#' that might get created after the recipe is used for preprocessing, such as
#' the prepped recipe. Additionally, if the model has already been fit, then
#' the fit is removed.
#'
#' - `update_recipe()` first removes the recipe, then replaces the previous
#' recipe with the new one. Any model that has already been fit based on this
#' recipe will need to be refit.
#'
#' @details
#' To fit a workflow, exactly one of [add_formula()], [add_recipe()], or
#' [add_variables()] _must_ be specified.
#'
#' @param x A workflow
#'
#' @param recipe A recipe created using [recipes::recipe()]. The recipe
#' should not have been trained already with [recipes::prep()]; workflows
#' will handle training internally.
#'
#' @param ... Not used.
#'
#' @param blueprint A hardhat blueprint used for fine tuning the preprocessing.
#'
#' If `NULL`, [hardhat::default_recipe_blueprint()] is used.
#'
#' Note that preprocessing done here is separate from preprocessing that
#' might be done automatically by the underlying model.
#'
#' @return
#' `x`, updated with either a new or removed recipe preprocessor.
#'
#' @export
#' @examples
#' library(recipes)
#' library(magrittr)
#'
#' recipe <- recipe(mpg ~ cyl, mtcars) %>%
#' step_log(cyl)
#'
#' workflow <- workflow() %>%
#' add_recipe(recipe)
#'
#' workflow
#'
#' remove_recipe(workflow)
#'
#' update_recipe(workflow, recipe(mpg ~ cyl, mtcars))
add_recipe <- function(x, recipe, ..., blueprint = NULL) {
check_dots_empty()
validate_recipes_available()
action <- new_action_recipe(recipe, blueprint)
add_action(x, action, "recipe")
}
#' @rdname add_recipe
#' @export
remove_recipe <- function(x) {
validate_is_workflow(x)
if (!has_preprocessor_recipe(x)) {
rlang::warn("The workflow has no recipe preprocessor to remove.")
}
actions <- x$pre$actions
actions[["recipe"]] <- NULL
new_workflow(
pre = new_stage_pre(actions = actions),
fit = new_stage_fit(actions = x$fit$actions),
post = new_stage_post(actions = x$post$actions),
trained = FALSE
)
}
#' @rdname add_recipe
#' @export
update_recipe <- function(x, recipe, ..., blueprint = NULL) {
check_dots_empty()
x <- remove_recipe(x)
add_recipe(x, recipe, blueprint = blueprint)
}
# ------------------------------------------------------------------------------
fit.action_recipe <- function(object, workflow, data, ...) {
recipe <- object$recipe
blueprint <- object$blueprint
mold <- hardhat::mold(recipe, data, blueprint = blueprint)
if (has_case_weights(workflow)) {
workflow <- update_retained_case_weights(workflow, mold)
}
workflow$pre <- new_stage_pre(
actions = workflow$pre$actions,
mold = mold,
case_weights = workflow$pre$case_weights
)
# All pre steps return the `workflow` and `data`
list(workflow = workflow, data = data)
}
# ------------------------------------------------------------------------------
check_conflicts.action_recipe <- function(action, x, ..., call = caller_env()) {
pre <- x$pre
if (has_action(pre, "formula")) {
abort("A recipe cannot be added when a formula already exists.", call = call)
}
if (has_action(pre, "variables")) {
abort("A recipe cannot be added when variables already exist.", call = call)
}
invisible(action)
}
# ------------------------------------------------------------------------------
new_action_recipe <- function(recipe, blueprint, ..., call = caller_env()) {
check_dots_empty()
if (!is_recipe(recipe)) {
abort("`recipe` must be a recipe.", call = call)
}
if (recipes::fully_trained(recipe)) {
abort("Can't add a trained recipe to a workflow.", call = call)
}
# `NULL` blueprints are finalized at fit time
if (!is_null(blueprint) && !is_recipe_blueprint(blueprint)) {
abort("`blueprint` must be a hardhat 'recipe_blueprint'.", call = call)
}
new_action_pre(
recipe = recipe,
blueprint = blueprint,
subclass = "action_recipe"
)
}
is_recipe <- function(x) {
inherits(x, "recipe")
}
is_recipe_blueprint <- function(x) {
inherits(x, "recipe_blueprint")
}
update_retained_case_weights <- function(workflow,
mold,
...,
call = caller_env()) {
# If the workflow was using case weights, then we retained these case weights
# in the `$pre$case_weights` slot. However, when a recipe is used we also
# pass the case weights on to the recipe. It is possible for the recipe to
# change the number of rows in the data (with a filter or upsample, for
# example), in which case we need to update the case weights column that we
# retain in the workflow. We also do quite a few checks to ensure that the
# recipe doesn't modify or rename the case weights column in any other way.
col <- extract_case_weights_col(workflow)
if (!is_quosure(col)) {
abort(
"`col` must be a quosure selecting the case weights column.",
.internal = TRUE,
call = call
)
}
case_weights_roles <- mold$extras$roles$case_weights
if (!is.data.frame(case_weights_roles)) {
message <- c(
'No columns with a `"case_weights"` role exist in the data after processing the recipe.',
i = "Did you remove or modify the case weights while processing the recipe?"
)
abort(message, call = call)
}
loc <- eval_select_case_weights(col, case_weights_roles, call = call)
case_weights <- case_weights_roles[[loc]]
if (!hardhat::is_case_weights(case_weights)) {
message <- c(
paste0(
'The column with a recipes role of `"case_weights"` must be a ',
"classed case weights column, as determined by ",
"`hardhat::is_case_weights()`."
),
i = "Did you modify the case weights while processing the recipe?"
)
abort(message, call = call)
}
workflow$pre$case_weights <- case_weights
workflow
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/pre-action-recipe.R
|
#' Add variables to a workflow
#'
#' @description
#' - `add_variables()` specifies the terms of the model through the usage of
#' [tidyselect::select_helpers] for the `outcomes` and `predictors`.
#'
#' - `remove_variables()` removes the variables. Additionally, if the model
#' has already been fit, then the fit is removed.
#'
#' - `update_variables()` first removes the variables, then replaces the
#' previous variables with the new ones. Any model that has already been
#' fit based on the original variables will need to be refit.
#'
#' - `workflow_variables()` bundles `outcomes` and `predictors` into a single
#' variables object, which can be supplied to `add_variables()`.
#'
#' @details
#' To fit a workflow, exactly one of [add_formula()], [add_recipe()], or
#' [add_variables()] _must_ be specified.
#'
#' @param x A workflow
#'
#' @param outcomes,predictors Tidyselect expressions specifying the terms
#' of the model. `outcomes` is evaluated first, and then all outcome columns
#' are removed from the data before `predictors` is evaluated.
#' See [tidyselect::select_helpers] for the full range of possible ways to
#' specify terms.
#'
#' @param ... Not used.
#'
#' @param blueprint A hardhat blueprint used for fine tuning the preprocessing.
#'
#' If `NULL`, [hardhat::default_xy_blueprint()] is used.
#'
#' Note that preprocessing done here is separate from preprocessing that
#' might be done by the underlying model.
#'
#' @param variables An alternative specification of `outcomes` and `predictors`,
#' useful for supplying variables programmatically.
#'
#' - If `NULL`, this argument is unused, and `outcomes` and `predictors` are
#' used to specify the variables.
#'
#' - Otherwise, this must be the result of calling `workflow_variables()` to
#' create a standalone variables object. In this case, `outcomes` and
#' `predictors` are completely ignored.
#'
#' @return
#' - `add_variables()` returns `x` with a new variables preprocessor.
#'
#' - `remove_variables()` returns `x` after resetting any model fit and
#' removing the variables preprocessor.
#'
#' - `update_variables()` returns `x` after removing the variables preprocessor,
#' and then re-specifying it with new variables.
#'
#' - `workflow_variables()` returns a 'workflow_variables' object containing
#' both the `outcomes` and `predictors`.
#'
#' @export
#' @examples
#' library(parsnip)
#'
#' spec_lm <- linear_reg()
#' spec_lm <- set_engine(spec_lm, "lm")
#'
#' workflow <- workflow()
#' workflow <- add_model(workflow, spec_lm)
#'
#' # Add terms with tidyselect expressions.
#' # Outcomes are specified before predictors.
#' workflow1 <- add_variables(
#' workflow,
#' outcomes = mpg,
#' predictors = c(cyl, disp)
#' )
#'
#' workflow1 <- fit(workflow1, mtcars)
#' workflow1
#'
#' # Removing the variables of a fit workflow will also remove the model
#' remove_variables(workflow1)
#'
#' # Variables can also be updated
#' update_variables(workflow1, mpg, starts_with("d"))
#'
#' # The `outcomes` are removed before the `predictors` expression
#' # is evaluated. This allows you to easily specify the predictors
#' # as "everything except the outcomes".
#' workflow2 <- add_variables(workflow, mpg, everything())
#' workflow2 <- fit(workflow2, mtcars)
#' extract_mold(workflow2)$predictors
#'
#' # Variables can also be added from the result of a call to
#' # `workflow_variables()`, which creates a standalone variables object
#' variables <- workflow_variables(mpg, c(cyl, disp))
#' workflow3 <- add_variables(workflow, variables = variables)
#' fit(workflow3, mtcars)
add_variables <- function(x,
outcomes,
predictors,
...,
blueprint = NULL,
variables = NULL) {
check_dots_empty()
if (is_null(variables)) {
variables <- workflow_variables({{ outcomes }}, {{ predictors }})
}
if (!is_workflow_variables(variables)) {
glubort(
"`variables` must be a 'workflow_variables' object ",
"created from `workflow_variables()`."
)
}
action <- new_action_variables(variables, blueprint)
add_action(x, action, "variables")
}
#' @rdname add_variables
#' @export
remove_variables <- function(x) {
validate_is_workflow(x)
if (!has_preprocessor_variables(x)) {
rlang::warn("The workflow has no variables preprocessor to remove.")
}
actions <- x$pre$actions
actions[["variables"]] <- NULL
new_workflow(
pre = new_stage_pre(actions = actions),
fit = new_stage_fit(actions = x$fit$actions),
post = new_stage_post(actions = x$post$actions),
trained = FALSE
)
}
#' @rdname add_variables
#' @export
update_variables <- function(x,
outcomes,
predictors,
...,
blueprint = NULL,
variables = NULL) {
check_dots_empty()
x <- remove_variables(x)
if (is_null(variables)) {
variables <- workflow_variables({{ outcomes }}, {{ predictors }})
}
add_variables(
x = x,
blueprint = blueprint,
variables = variables
)
}
# ------------------------------------------------------------------------------
fit.action_variables <- function(object, workflow, data, ...) {
variables <- object$variables
outcomes <- variables$outcomes
predictors <- variables$predictors
blueprint <- object$blueprint
# `outcomes` and `predictors` should both be quosures,
# meaning they carry along their own environments to evaluate in.
env <- empty_env()
outcomes <- tidyselect::eval_select(
expr = outcomes,
data = data,
env = env
)
# Evaluate `predictors` without access to `outcomes`
not_outcomes <- vec_index_invert(outcomes)
data_potential_predictors <- data[not_outcomes]
predictors <- tidyselect::eval_select(
expr = predictors,
data = data_potential_predictors,
env = env
)
data_outcomes <- data[outcomes]
data_predictors <- data_potential_predictors[predictors]
mold <- hardhat::mold(
x = data_predictors,
y = data_outcomes,
blueprint = blueprint
)
workflow$pre <- new_stage_pre(
actions = workflow$pre$actions,
mold = mold,
case_weights = workflow$pre$case_weights
)
# All pre steps return the `workflow` and `data`
list(workflow = workflow, data = data)
}
# ------------------------------------------------------------------------------
check_conflicts.action_variables <- function(action, x, ..., call = caller_env()) {
pre <- x$pre
if (has_action(pre, "recipe")) {
abort("Variables cannot be added when a recipe already exists.", call = call)
}
if (has_action(pre, "formula")) {
abort("Variables cannot be added when a formula already exists.", call = call)
}
invisible(action)
}
# ------------------------------------------------------------------------------
new_action_variables <- function(variables, blueprint, ..., call = caller_env()) {
check_dots_empty()
# `NULL` blueprints are finalized at fit time
if (!is_null(blueprint) && !is_xy_blueprint(blueprint)) {
abort("`blueprint` must be a hardhat 'xy_blueprint'.", call = call)
}
new_action_pre(
variables = variables,
blueprint = blueprint,
subclass = "action_variables"
)
}
is_xy_blueprint <- function(x) {
inherits(x, "xy_blueprint")
}
# ------------------------------------------------------------------------------
#' @rdname add_variables
#' @export
workflow_variables <- function(outcomes, predictors) {
# TODO: Use partial evaluation with `eval_resolve()`
# to only capture expressions
# https://github.com/r-lib/tidyselect/issues/207
new_workflow_variables(
outcomes = enquo(outcomes),
predictors = enquo(predictors)
)
}
new_workflow_variables <- function(outcomes,
predictors,
...,
call = caller_env()) {
check_dots_empty()
if (!is_quosure(outcomes)) {
abort("`outcomes` must be a quosure.", .internal = TRUE)
}
if (!is_quosure(predictors)) {
abort("`predictors` must be a quosure.", .internal = TRUE)
}
if (quo_is_missing(outcomes)) {
abort("`outcomes` can't be missing.", call = call)
}
if (quo_is_missing(predictors)) {
abort("`predictors` can't be missing.", call = call)
}
data <- list(
outcomes = outcomes,
predictors = predictors
)
structure(data, class = "workflow_variables")
}
is_workflow_variables <- function(x) {
inherits(x, "workflow_variables")
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/pre-action-variables.R
|
#' Predict from a workflow
#'
#' @description
#' This is the `predict()` method for a fit workflow object. The nice thing
#' about predicting from a workflow is that it will:
#'
#' - Preprocess `new_data` using the preprocessing method specified when the
#' workflow was created and fit. This is accomplished using
#' [hardhat::forge()], which will apply any formula preprocessing or call
#' [recipes::bake()] if a recipe was supplied.
#'
#' - Call [parsnip::predict.model_fit()] for you using the underlying fit
#' parsnip model.
#'
#' @inheritParams parsnip::predict.model_fit
#'
#' @param object A workflow that has been fit by [fit.workflow()]
#'
#' @param new_data A data frame containing the new predictors to preprocess
#' and predict on. If using a recipe preprocessor, you should not call
#' [recipes::bake()] on `new_data` before passing to this function.
#'
#' @return
#' A data frame of model predictions, with as many rows as `new_data` has.
#'
#' @name predict-workflow
#' @export
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#'
#' training <- mtcars[1:20, ]
#' testing <- mtcars[21:32, ]
#'
#' model <- linear_reg() %>%
#' set_engine("lm")
#'
#' workflow <- workflow() %>%
#' add_model(model)
#'
#' recipe <- recipe(mpg ~ cyl + disp, training) %>%
#' step_log(disp)
#'
#' workflow <- add_recipe(workflow, recipe)
#'
#' fit_workflow <- fit(workflow, training)
#'
#' # This will automatically `bake()` the recipe on `testing`,
#' # applying the log step to `disp`, and then fit the regression.
#' predict(fit_workflow, testing)
predict.workflow <- function(object, new_data, type = NULL, opts = list(), ...) {
workflow <- object
if (!is_trained_workflow(workflow)) {
abort(c(
"Can't predict on an untrained workflow.",
i = "Do you need to call `fit()`?"
))
}
fit <- extract_fit_parsnip(workflow)
new_data <- forge_predictors(new_data, workflow)
predict(fit, new_data, type = type, opts = opts, ...)
}
forge_predictors <- function(new_data, workflow) {
mold <- extract_mold(workflow)
forged <- hardhat::forge(new_data, blueprint = mold$blueprint)
forged$predictors
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/predict.R
|
#' Extract elements of a workflow
#'
#' @description
#'
#' `r lifecycle::badge("soft-deprecated")`
#'
#' Please use the `extract_*()` functions instead of these
#' (e.g. [extract_mold()]).
#'
#' These functions extract various elements from a workflow object. If they do
#' not exist yet, an error is thrown.
#'
#' - `pull_workflow_preprocessor()` returns the formula, recipe, or variable
#' expressions used for preprocessing.
#'
#' - `pull_workflow_spec()` returns the parsnip model specification.
#'
#' - `pull_workflow_fit()` returns the parsnip model fit.
#'
#' - `pull_workflow_mold()` returns the preprocessed "mold" object returned
#' from [hardhat::mold()]. It contains information about the preprocessing,
#' including either the prepped recipe or the formula terms object.
#'
#' - `pull_workflow_prepped_recipe()` returns the prepped recipe. It is
#' extracted from the mold object returned from `pull_workflow_mold()`.
#'
#' @param x A workflow
#'
#' @return
#' The extracted value from the workflow, `x`, as described in the description
#' section.
#'
#' @name workflow-extractors
#' @keywords internal
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#'
#' model <- linear_reg() %>%
#' set_engine("lm")
#'
#' recipe <- recipe(mpg ~ cyl + disp, mtcars) %>%
#' step_log(disp)
#'
#' base_wf <- workflow() %>%
#' add_model(model)
#'
#' recipe_wf <- add_recipe(base_wf, recipe)
#' formula_wf <- add_formula(base_wf, mpg ~ cyl + log(disp))
#' variable_wf <- add_variables(base_wf, mpg, c(cyl, disp))
#'
#' fit_recipe_wf <- fit(recipe_wf, mtcars)
#' fit_formula_wf <- fit(formula_wf, mtcars)
#'
#' # The preprocessor is a recipes, formula, or a list holding the
#' # tidyselect expressions identifying the outcomes/predictors
#' pull_workflow_preprocessor(recipe_wf)
#' pull_workflow_preprocessor(formula_wf)
#' pull_workflow_preprocessor(variable_wf)
#'
#' # The `spec` is the parsnip spec before it has been fit.
#' # The `fit` is the fit parsnip model.
#' pull_workflow_spec(fit_formula_wf)
#' pull_workflow_fit(fit_formula_wf)
#'
#' # The mold is returned from `hardhat::mold()`, and contains the
#' # predictors, outcomes, and information about the preprocessing
#' # for use on new data at `predict()` time.
#' pull_workflow_mold(fit_recipe_wf)
#'
#' # A useful shortcut is to extract the prepped recipe from the workflow
#' pull_workflow_prepped_recipe(fit_recipe_wf)
#'
#' # That is identical to
#' identical(
#' pull_workflow_mold(fit_recipe_wf)$blueprint$recipe,
#' pull_workflow_prepped_recipe(fit_recipe_wf)
#' )
NULL
#' @rdname workflow-extractors
#' @export
pull_workflow_preprocessor <- function(x) {
lifecycle::deprecate_warn("0.2.3", "pull_workflow_preprocessor()", "extract_preprocessor()")
validate_is_workflow(x)
extract_preprocessor(x)
}
#' @rdname workflow-extractors
#' @export
pull_workflow_spec <- function(x) {
lifecycle::deprecate_warn("0.2.3", "pull_workflow_spec()", "extract_spec_parsnip()")
validate_is_workflow(x)
extract_spec_parsnip(x)
}
#' @rdname workflow-extractors
#' @export
pull_workflow_fit <- function(x) {
lifecycle::deprecate_warn("0.2.3", "pull_workflow_fit()", "extract_fit_parsnip()")
validate_is_workflow(x)
extract_fit_parsnip(x)
}
#' @rdname workflow-extractors
#' @export
pull_workflow_mold <- function(x) {
lifecycle::deprecate_warn("0.2.3", "pull_workflow_mold()", "extract_mold()")
validate_is_workflow(x)
extract_mold(x)
}
#' @rdname workflow-extractors
#' @export
pull_workflow_prepped_recipe <- function(x) {
lifecycle::deprecate_warn("0.2.3", "pull_workflow_prepped_recipe()", "extract_recipe()")
validate_is_workflow(x)
extract_recipe(x)
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/pull.R
|
#' @importFrom hardhat extract_spec_parsnip
#' @export
hardhat::extract_spec_parsnip
#'
#' @importFrom hardhat extract_recipe
#' @export
hardhat::extract_recipe
#'
#' @importFrom hardhat extract_fit_parsnip
#' @export
hardhat::extract_fit_parsnip
#'
#' @importFrom hardhat extract_fit_engine
#' @export
hardhat::extract_fit_engine
#'
#' @importFrom hardhat extract_mold
#' @export
hardhat::extract_mold
#'
#' @importFrom hardhat extract_preprocessor
#' @export
hardhat::extract_preprocessor
#'
#' @importFrom hardhat extract_parameter_set_dials
#' @export
hardhat::extract_parameter_set_dials
#'
#' @importFrom hardhat extract_parameter_dials
#' @export
hardhat::extract_parameter_dials
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/reexports.R
|
new_stage_pre <- function(actions = new_named_list(), mold = NULL, case_weights = NULL) {
if (!is.null(mold) && !is.list(mold)) {
abort("`mold` must be a result of calling `hardhat::mold()`.", .internal = TRUE)
}
if (!is_null(case_weights) && !hardhat::is_case_weights(case_weights)) {
abort("`case_weights` must be a true case weights column.", .internal = TRUE)
}
new_stage(
actions = actions,
mold = mold,
case_weights = case_weights,
subclass = "stage_pre"
)
}
new_stage_fit <- function(actions = new_named_list(), fit = NULL) {
if (!is.null(fit) && !is_model_fit(fit)) {
abort("`fit` must be a `model_fit`.", .internal = TRUE)
}
new_stage(actions = actions, fit = fit, subclass = "stage_fit")
}
new_stage_post <- function(actions = new_named_list()) {
new_stage(actions, subclass = "stage_post")
}
# ------------------------------------------------------------------------------
# A `stage` is a collection of `action`s
# There are 3 stages that actions can fall into:
# - pre
# - fit
# - post
new_stage <- function(actions = new_named_list(),
...,
subclass = character()) {
if (!is_list_of_actions(actions)) {
abort("`actions` must be a list of actions.", .internal = TRUE)
}
if (!is_uniquely_named(actions)) {
abort("`actions` must be uniquely named.", .internal = TRUE)
}
fields <- list2(...)
if (!is_uniquely_named(fields)) {
abort("`...` must be uniquely named.", .internal = TRUE)
}
fields <- list2(actions = actions, !!!fields)
structure(fields, class = c(subclass, "stage"))
}
# ------------------------------------------------------------------------------
is_stage <- function(x) {
inherits(x, "stage")
}
has_action <- function(stage, name) {
name %in% names(stage$actions)
}
# ------------------------------------------------------------------------------
new_named_list <- function() {
# To standardize results for testing.
# Mainly applicable when `[[<-` removes all elements from a named list and
# leaves a named list behind that we want to compare against.
set_names(list(), character())
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/stage.R
|
#' @export
.censoring_weights_graf.workflow <- function(object,
predictions,
cens_predictors = NULL,
trunc = 0.05, eps = 10^-10, ...) {
if (is.null(object$fit$fit)) {
rlang::abort("The workflow does not have a model fit object.")
}
.censoring_weights_graf(object$fit$fit, predictions, cens_predictors, trunc, eps)
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/survival-censoring-weights.R
|
is_uniquely_named <- function(x) {
if (length(x) > 0) {
is_named(x) && !anyDuplicated(names(x))
} else {
TRUE
}
}
glubort <- function(..., .sep = "", .envir = caller_env(), .call = .envir) {
abort(glue::glue(..., .sep = .sep, .envir = .envir), call = .call)
}
is_model_fit <- function(x) {
inherits(x, "model_fit") || modelenv::is_unsupervised_fit(x)
}
is_model_spec <- function(x) {
inherits(x, "model_spec") || modelenv::is_unsupervised_spec(x)
}
validate_recipes_available <- function(..., call = caller_env()) {
check_dots_empty()
if (!requireNamespace("recipes", quietly = TRUE)) {
abort("The `recipes` package must be available to add a recipe.", call = call)
}
invisible()
}
# ------------------------------------------------------------------------------
# https://github.com/r-lib/tidyselect/blob/10e00cea2fff3585fc827b6a7eb5e172acadbb2f/R/utils.R#L109
vec_index_invert <- function(x) {
if (vec_index_is_empty(x)) {
TRUE
} else {
-x
}
}
vec_index_is_empty <- function(x) {
!length(x) || all(x == 0L)
}
# ------------------------------------------------------------------------------
validate_is_workflow <- function(x, ..., arg = "`x`", call = caller_env()) {
check_dots_empty()
if (!is_workflow(x)) {
glubort("{arg} must be a workflow, not a {class(x)[[1]]}.", .call = call)
}
invisible(x)
}
# ------------------------------------------------------------------------------
has_preprocessor_recipe <- function(x) {
"recipe" %in% names(x$pre$actions)
}
has_preprocessor_formula <- function(x) {
"formula" %in% names(x$pre$actions)
}
has_preprocessor_variables <- function(x) {
"variables" %in% names(x$pre$actions)
}
has_case_weights <- function(x) {
"case_weights" %in% names(x$pre$actions)
}
has_mold <- function(x) {
!is.null(x$pre$mold)
}
has_spec <- function(x) {
"model" %in% names(x$fit$actions)
}
has_fit <- function(x) {
!is.null(x$fit$fit)
}
has_blueprint <- function(x) {
if (has_preprocessor_formula(x)) {
!is.null(x$pre$actions$formula$blueprint)
} else if (has_preprocessor_recipe(x)) {
!is.null(x$pre$actions$recipe$blueprint)
} else if (has_preprocessor_variables(x)) {
!is.null(x$pre$actions$variables$blueprint)
} else {
abort("`x` must have a preprocessor to check for a blueprint.", .internal = TRUE)
}
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/utils.R
|
#' Create a workflow
#'
#' @description
#' A `workflow` is a container object that aggregates information required to
#' fit and predict from a model. This information might be a recipe used in
#' preprocessing, specified through [add_recipe()], or the model specification
#' to fit, specified through [add_model()].
#'
#' The `preprocessor` and `spec` arguments allow you to add components to a
#' workflow quickly, without having to go through the `add_*()` functions, such
#' as [add_recipe()] or [add_model()]. However, if you need to control any of
#' the optional arguments to those functions, such as the `blueprint` or the
#' model `formula`, then you should use the `add_*()` functions directly
#' instead.
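#'
#' For example, a sketch (not run) of the more explicit path, where a hardhat
#' blueprint fine-tunes the formula preprocessing:
#'
#' ```r
#' library(magrittr)
#' library(parsnip)
#' library(hardhat)
#'
#' bp <- default_formula_blueprint(indicators = "none")
#' spec <- linear_reg() %>% set_engine("lm")
#'
#' workflow() %>%
#'   add_formula(mpg ~ cyl + disp, blueprint = bp) %>%
#'   add_model(spec)
#' ```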
#'
#' @param preprocessor An optional preprocessor to add to the workflow. One of:
#' - A formula, passed on to [add_formula()].
#' - A recipe, passed on to [add_recipe()].
#' - A [workflow_variables()] object, passed on to [add_variables()].
#'
#' @param spec An optional parsnip model specification to add to the workflow.
#' Passed on to [add_model()].
#'
#' @return
#' A new `workflow` object.
#'
#' @includeRmd man/rmd/indicators.Rmd details
#'
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#' library(modeldata)
#'
#' data("attrition")
#'
#' model <- logistic_reg() %>%
#' set_engine("glm")
#'
#' formula <- Attrition ~ BusinessTravel + YearsSinceLastPromotion + OverTime
#'
#' wf_formula <- workflow(formula, model)
#'
#' fit(wf_formula, attrition)
#'
#' recipe <- recipe(Attrition ~ ., attrition) %>%
#' step_dummy(all_nominal(), -Attrition) %>%
#' step_corr(all_predictors(), threshold = 0.8)
#'
#' wf_recipe <- workflow(recipe, model)
#'
#' fit(wf_recipe, attrition)
#'
#' variables <- workflow_variables(
#' Attrition,
#' c(BusinessTravel, YearsSinceLastPromotion, OverTime)
#' )
#'
#' wf_variables <- workflow(variables, model)
#'
#' fit(wf_variables, attrition)
#' @export
workflow <- function(preprocessor = NULL, spec = NULL) {
out <- new_workflow()
if (!is_null(preprocessor)) {
out <- add_preprocessor(out, preprocessor)
}
if (!is_null(spec)) {
out <- add_model(out, spec)
}
out
}
add_preprocessor <- function(x, preprocessor, ..., call = caller_env()) {
check_dots_empty()
if (is_formula(preprocessor)) {
return(add_formula(x, preprocessor))
}
if (is_recipe(preprocessor)) {
return(add_recipe(x, preprocessor))
}
if (is_workflow_variables(preprocessor)) {
return(add_variables(x, variables = preprocessor))
}
abort(
"`preprocessor` must be a formula, recipe, or a set of workflow variables.",
call = call
)
}
# ------------------------------------------------------------------------------
new_workflow <- function(pre = new_stage_pre(),
fit = new_stage_fit(),
post = new_stage_post(),
trained = FALSE) {
if (!is_stage(pre)) {
abort("`pre` must be a `stage`.")
}
if (!is_stage(fit)) {
abort("`fit` must be a `stage`.")
}
if (!is_stage(post)) {
abort("`post` must be a `stage`.")
}
if (!is_scalar_logical(trained)) {
abort("`trained` must be a single logical value.")
}
data <- list(
pre = pre,
fit = fit,
post = post,
trained = trained
)
structure(data, class = "workflow")
}
is_workflow <- function(x) {
inherits(x, "workflow")
}
# ------------------------------------------------------------------------------
#' Determine if a workflow has been trained
#'
#' @description
#' A trained workflow is one that has gone through [`fit()`][fit.workflow],
#' which preprocesses the underlying data, and fits the parsnip model.
#'
#' @param x A workflow.
#'
#' @return A single logical indicating if the workflow has been trained or not.
#'
#' @export
#' @examples
#' library(parsnip)
#' library(recipes)
#' library(magrittr)
#'
#' rec <- recipe(mpg ~ cyl, mtcars)
#'
#' mod <- linear_reg()
#' mod <- set_engine(mod, "lm")
#'
#' wf <- workflow() %>%
#' add_recipe(rec) %>%
#' add_model(mod)
#'
#' # Before any preprocessing or model fitting has been done
#' is_trained_workflow(wf)
#'
#' wf <- fit(wf, mtcars)
#'
#' # After all preprocessing and model fitting
#' is_trained_workflow(wf)
is_trained_workflow <- function(x) {
validate_is_workflow(x)
is_true(get_trained(x))
}
# ------------------------------------------------------------------------------
get_trained <- function(x) {
x[["trained"]]
}
set_trained <- function(x, value) {
x[["trained"]] <- value
x
}
# ------------------------------------------------------------------------------
#' @export
print.workflow <- function(x, ...) {
print_header(x)
print_preprocessor(x)
print_case_weights(x)
print_model(x)
# print_postprocessor(x)
invisible(x)
}
print_header <- function(x) {
if (is_trained_workflow(x)) {
trained <- " [trained]"
} else {
trained <- ""
}
header <- glue::glue("Workflow{trained}")
header <- cli::rule(header, line = 2)
cat_line(header)
preprocessor_msg <- cli::style_italic("Preprocessor:")
if (has_preprocessor_formula(x)) {
preprocessor <- "Formula"
} else if (has_preprocessor_recipe(x)) {
preprocessor <- "Recipe"
} else if (has_preprocessor_variables(x)) {
preprocessor <- "Variables"
} else {
preprocessor <- "None"
}
preprocessor_msg <- glue::glue("{preprocessor_msg} {preprocessor}")
cat_line(preprocessor_msg)
spec_msg <- cli::style_italic("Model:")
if (has_spec(x)) {
spec <- class(extract_spec_parsnip(x))[[1]]
spec <- glue::glue("{spec}()")
} else {
spec <- "None"
}
spec_msg <- glue::glue("{spec_msg} {spec}")
cat_line(spec_msg)
invisible(x)
}
print_case_weights <- function(x) {
if (!has_case_weights(x)) {
return(invisible(x))
}
# Space between Workflow / Preprocessor section and Case Weights section
cat_line("")
header <- cli::rule("Case Weights")
cat_line(header)
col <- extract_case_weights_col(x)
col <- quo_get_expr(col)
col <- expr_text(col)
cat_line(col)
invisible(x)
}
print_preprocessor <- function(x) {
has_preprocessor_formula <- has_preprocessor_formula(x)
has_preprocessor_recipe <- has_preprocessor_recipe(x)
has_preprocessor_variables <- has_preprocessor_variables(x)
no_preprocessor <-
!has_preprocessor_formula &&
!has_preprocessor_recipe &&
!has_preprocessor_variables
if (no_preprocessor) {
return(invisible(x))
}
# Space between Workflow section and Preprocessor section
cat_line("")
header <- cli::rule("Preprocessor")
cat_line(header)
if (has_preprocessor_formula) {
print_preprocessor_formula(x)
}
if (has_preprocessor_recipe) {
print_preprocessor_recipe(x)
}
if (has_preprocessor_variables) {
print_preprocessor_variables(x)
}
invisible(x)
}
print_preprocessor_formula <- function(x) {
formula <- extract_preprocessor(x)
formula <- rlang::expr_text(formula)
cat_line(formula)
invisible(x)
}
print_preprocessor_variables <- function(x) {
variables <- extract_preprocessor(x)
outcomes <- quo_get_expr(variables$outcomes)
predictors <- quo_get_expr(variables$predictors)
outcomes <- expr_text(outcomes)
predictors <- expr_text(predictors)
cat_line("Outcomes: ", outcomes)
cat_line("Predictors: ", predictors)
invisible(x)
}
print_preprocessor_recipe <- function(x) {
recipe <- extract_preprocessor(x)
steps <- recipe$steps
n_steps <- length(steps)
if (n_steps == 1L) {
step <- "Step"
} else {
step <- "Steps"
}
n_steps_msg <- glue::glue("{n_steps} Recipe {step}")
cat_line(n_steps_msg)
if (n_steps == 0L) {
return(invisible(x))
}
cat_line("")
step_names <- map_chr(steps, pull_step_name)
if (n_steps <= 10L) {
cli::cat_bullet(step_names)
return(invisible(x))
}
extra_steps <- n_steps - 10L
step_names <- step_names[1:10]
if (extra_steps == 1L) {
step <- "step"
} else {
step <- "steps"
}
extra_dots <- "..."
extra_msg <- glue::glue("and {extra_steps} more {step}.")
step_names <- c(step_names, extra_dots, extra_msg)
cli::cat_bullet(step_names)
invisible(x)
}
pull_step_name <- function(x) {
step <- class(x)[[1]]
glue::glue("{step}()")
}
print_model <- function(x) {
has_spec <- has_spec(x)
if (!has_spec) {
return(invisible(x))
}
has_fit <- has_fit(x)
# Space between Workflow/Preprocessor/Case Weights section and Model section
cat_line("")
header <- cli::rule("Model")
cat_line(header)
if (has_fit) {
print_fit(x)
return(invisible(x))
}
print_spec(x)
invisible(x)
}
print_spec <- function(x) {
spec <- extract_spec_parsnip(x)
print(spec)
invisible(x)
}
print_fit <- function(x) {
parsnip_fit <- extract_fit_parsnip(x)
fit <- parsnip_fit$fit
output <- utils::capture.output(fit)
n_output <- length(output)
if (n_output < 50L) {
print(fit)
return(invisible(x))
}
n_extra_output <- n_output - 50L
output <- output[1:50]
extra_output_msg <- glue::glue("and {n_extra_output} more lines.")
cat_line(output)
cat_line("")
cat_line("...")
cat_line(extra_output_msg)
invisible(x)
}
cat_line <- function(...) {
cat(paste0(..., collapse = "\n"), "\n", sep = "")
}
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/workflow.R
|
#' @keywords internal
"_PACKAGE"
# The following block is used by usethis to automatically manage
# roxygen namespace tags. Modify with care!
## usethis namespace: start
#'
#' @import rlang
#' @importFrom generics augment
#' @importFrom generics fit
#' @importFrom generics glance
#' @importFrom generics tidy
#' @importFrom generics tune_args
#' @importFrom generics tunable
#' @importFrom lifecycle deprecated
#' @importFrom parsnip .censoring_weights_graf
#' @importFrom parsnip fit_xy
#' @importFrom stats predict
## usethis namespace: end
NULL
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/workflows-package.R
|
# nocov start
.onLoad <- function(libname, pkgname) {
ns <- rlang::ns_env("workflows")
vctrs::s3_register("butcher::axe_call", "workflow")
vctrs::s3_register("butcher::axe_ctrl", "workflow")
vctrs::s3_register("butcher::axe_data", "workflow")
vctrs::s3_register("butcher::axe_env", "workflow")
vctrs::s3_register("butcher::axe_fitted", "workflow")
vctrs::s3_register("generics::required_pkgs", "workflow")
}
# nocov end
|
/scratch/gouwar.j/cran-all/cranData/workflows/R/zzz.R
|
## ----setup, include=FALSE-----------------------------------------------------
knitr::opts_chunk$set(
digits = 3,
collapse = TRUE,
comment = "#>"
)
options(digits = 3)
|
/scratch/gouwar.j/cran-all/cranData/workflows/inst/doc/stages.R
|
---
title: "Workflow Stages"
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{Workflow Stages}
output:
knitr:::html_vignette:
toc: yes
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
digits = 3,
collapse = TRUE,
comment = "#>"
)
options(digits = 3)
```
A workflow encompasses the three main stages of the modeling _process_: pre-processing of data, model fitting, and post-processing of results. This page enumerates the operations for each stage that have been implemented to date.
## Pre-processing
The two elements allowed for pre-processing are:
* A standard [model formula](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Formulae-for-statistical-models) via `add_formula()`.
* A recipe object via `add_recipe()`.
You can use one or the other but not both.
## Model Fitting
`parsnip` model specifications are the only option here, specified via `add_model()`.
When using a preprocessor, you may need an additional formula for special model terms (e.g. for mixed models or generalized linear models). In these cases, specify that formula using `add_model()`'s `formula` argument, which will be passed to the underlying model when `fit()` is called.
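As a rough sketch of how this looks (not evaluated here; the data set and spline term are purely illustrative):
```{r model-formula-sketch, eval = FALSE}
library(workflows)
library(parsnip)
library(magrittr)
library(splines)
spec <- linear_reg() %>% set_engine("lm")
workflow() %>%
  # The preprocessor formula selects the columns passed on to the model
  add_formula(mpg ~ disp + wt) %>%
  # The model formula is what is actually given to `lm()` when `fit()` is called
  add_model(spec, formula = mpg ~ ns(disp, 3) + wt) %>%
  fit(data = mtcars)
```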
## Post-processing
Some examples of post-processing the model predictions would be: adding a probability threshold for two-class problems, calibration of probability estimates, truncating the possible range of predictions, and so on.
None of these are currently implemented but will be in coming versions.
|
/scratch/gouwar.j/cran-all/cranData/workflows/inst/doc/stages.Rmd
|
# Formula Handling
```{r start, include = FALSE}
options(width = 70)
library(parsnip)
library(workflows)
library(magrittr)
library(modeldata)
library(hardhat)
library(splines)
```
Note that the formula given to `add_formula()` might be handled in different ways, depending on the parsnip model being used. For example, a random forest model fit using ranger would not convert any factor predictors to binary indicator variables. This is consistent with what `ranger::ranger()` would do, but is inconsistent with what `stats::model.matrix()` would do.
The documentation for parsnip models provides details about how the data given in the formula are encoded for the model if they diverge from the standard `model.matrix()` methodology. Our goal is to be consistent with how the underlying model package works.
## How is this formula used?
To demonstrate, the example below uses `lm()` to fit a model. The formula given to `add_formula()` is used to create the model matrix, which is then passed to `lm()` with a simple formula of `body_mass_g ~ .`:
```{r pre-encoded-fit}
library(parsnip)
library(workflows)
library(magrittr)
library(modeldata)
library(hardhat)
data(penguins)
lm_mod <- linear_reg() %>%
set_engine("lm")
lm_wflow <- workflow() %>%
add_model(lm_mod)
pre_encoded <- lm_wflow %>%
add_formula(body_mass_g ~ species + island + bill_depth_mm) %>%
fit(data = penguins)
pre_encoded_parsnip_fit <- pre_encoded %>%
extract_fit_parsnip()
pre_encoded_fit <- pre_encoded_parsnip_fit$fit
# The `lm()` formula is *not* the same as the `add_formula()` formula:
pre_encoded_fit
```
This can affect how the results are analyzed. For example, to get sequential hypothesis tests, each individual term is tested:
```{r pre-encoded-anova}
anova(pre_encoded_fit)
```
## Overriding the default encodings
Users can override the model-specific encodings by using a hardhat blueprint. The blueprint can specify how factors are encoded and whether intercepts are included. As an example, if you use a formula and would like the data to be passed to a model untouched:
```{r blueprint-fit}
minimal <- default_formula_blueprint(indicators = "none", intercept = FALSE)
un_encoded <- lm_wflow %>%
add_formula(
body_mass_g ~ species + island + bill_depth_mm,
blueprint = minimal
) %>%
fit(data = penguins)
un_encoded_parsnip_fit <- un_encoded %>%
extract_fit_parsnip()
un_encoded_fit <- un_encoded_parsnip_fit$fit
un_encoded_fit
```
While this looks the same, the raw columns were given to `lm()` and that function created the dummy variables. Because of this, the sequential ANOVA tests groups of parameters to get column-level p-values:
```{r blueprint-anova}
anova(un_encoded_fit)
```
## Overriding the default model formula
Additionally, the formula passed to the underlying model can also be customized. In this case, the `formula` argument of `add_model()` can be used. To demonstrate, a spline function will be used for the bill depth:
```{r extra-formula-fit}
library(splines)
custom_formula <- workflow() %>%
add_model(
lm_mod,
formula = body_mass_g ~ species + island + ns(bill_depth_mm, 3)
) %>%
add_formula(
body_mass_g ~ species + island + bill_depth_mm,
blueprint = minimal
) %>%
fit(data = penguins)
custom_parsnip_fit <- custom_formula %>%
extract_fit_parsnip()
custom_fit <- custom_parsnip_fit$fit
custom_fit
```
## Altering the formula
Finally, when a formula is updated or removed from a fitted workflow, the corresponding model fit is removed.
```{r remove}
custom_formula_no_fit <- update_formula(custom_formula, body_mass_g ~ species)
try(extract_fit_parsnip(custom_formula_no_fit))
```
|
/scratch/gouwar.j/cran-all/cranData/workflows/man/rmd/add-formula.Rmd
|
# Indicator Variable Details
```{r echo=FALSE}
options(cli.width = 70, width = 70, cli.unicode = FALSE)
# Load them early on so package conflict messages don't show up
suppressPackageStartupMessages({
library(parsnip)
library(recipes)
library(workflows)
library(modeldata)
})
```
Some modeling functions in R create indicator/dummy variables from categorical data when you use a model formula, and some do not. When you specify and fit a model with a `workflow()`, parsnip and workflows match and reproduce the underlying behavior of the user-specified model's computational engine.
## Formula Preprocessor
In the [modeldata::Sacramento] data set of real estate prices, the `type` variable has three levels: `"Residential"`, `"Condo"`, and `"Multi-Family"`. This base `workflow()` contains a formula added via [add_formula()] to predict property price from property type, square footage, number of beds, and number of baths:
```{r}
set.seed(123)
library(parsnip)
library(recipes)
library(workflows)
library(modeldata)
data("Sacramento")
base_wf <- workflow() %>%
add_formula(price ~ type + sqft + beds + baths)
```
This first model does create dummy/indicator variables:
```{r}
lm_spec <- linear_reg() %>%
set_engine("lm")
base_wf %>%
add_model(lm_spec) %>%
fit(Sacramento)
```
There are **five** independent variables in the fitted model for this OLS linear regression. With this model type and engine, the factor predictor `type` of the real estate properties was converted to two binary predictors, `typeMulti_Family` and `typeResidential`. (The third type, for condos, does not need its own column because it is the baseline level).
This second model does not create dummy/indicator variables:
```{r}
rf_spec <- rand_forest() %>%
set_mode("regression") %>%
set_engine("ranger")
base_wf %>%
add_model(rf_spec) %>%
fit(Sacramento)
```
Note that there are **four** independent variables in the fitted model for this ranger random forest. With this model type and engine, indicator variables were not created for the `type` of real estate property being sold. Tree-based models such as random forest models can handle factor predictors directly, and don't need any conversion to numeric binary variables.
## Recipe Preprocessor
When you specify a model with a `workflow()` and a recipe preprocessor via [add_recipe()], the _recipe_ controls whether dummy variables are created or not; the recipe overrides any underlying behavior from the model's computational engine.
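For example, in a sketch like this one (not evaluated here), the recipe's `step_dummy()` determines that indicator columns are created, even for the ranger engine that would otherwise leave `type` as a factor:
```{r, eval = FALSE}
rec <- recipe(price ~ type + sqft + beds + baths, data = Sacramento) %>%
  step_dummy(all_nominal_predictors())
workflow() %>%
  add_recipe(rec) %>%
  add_model(rf_spec) %>%
  fit(Sacramento)
```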
|
/scratch/gouwar.j/cran-all/cranData/workflows/man/rmd/indicators.Rmd
|
---
title: "Getting Started"
vignette: >
%\VignetteIndexEntry{Getting Started}
%\VignetteEngine{knitr::rmarkdown}
output:
knitr:::html_vignette:
toc: yes
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
digits = 3,
collapse = TRUE,
comment = "#>"
)
options(digits = 3)
library(ggplot2)
theme_set(theme_bw() + theme(legend.position = "top"))
```
This is an example of a fairly realistic interactive data analysis project to demonstrate how workflows can be used.
## Introduction
In this `bivariate` data set, there are two predictors that can be used to differentiate two classes in the outcome.
There are three partitions of the original data: training (n = 1009), validation (n = 300), and testing (n = 710). We will work with the training set the most, use the validation set to compare models during the development process, and then use the test set once we think that we have a good algorithm for making predictions.
```{r}
library(modeldata)
# This gives us access to the 3 partitions:
# - `bivariate_train`: Training set
# - `bivariate_val`: Validation set
# - `bivariate_test`: Test set
data("bivariate")
```
Here is the training set:
```{r plot-data, message = FALSE}
library(workflows)
library(ggplot2)
library(dplyr)
ggplot(bivariate_train, aes(x = A, y = B, col = Class)) +
geom_point(alpha = .3) +
coord_equal(ratio = 20)
```
Both predictors have positive values and their distributions are right-skewed. There seems to be a separation of the classes, but only when the variables are used together. For example, when the predictors are visualized individually, there is little evidence of separation between the classes.
```{r plot-marginals}
library(tidyr)
bivariate_train %>%
pivot_longer(cols = c(A, B), names_to = "predictor") %>%
ggplot(aes(x = Class, y = value)) +
geom_boxplot() +
facet_wrap(~predictor, scales = "free_y") +
scale_y_log10()
```
In the first plot above, the separation appears to happen linearly, and a straight, diagonal boundary might do well. We could use `glm()` directly to create a logistic regression, but we will use the `tidymodels` infrastructure and start by making a `parsnip` model object.
```{r glm-mod}
library(parsnip)
logit_mod <-
logistic_reg() %>%
set_engine("glm")
```
This data analysis will involve looking at a few different approaches to representing the two predictors so that we have a high-quality model. We'll walk through the thought process of this analysis as we go. This will emulate how most data analysis projects happen: an initial approach is taken and then potential steps are attempted to make improvements. There is no pre-defined blueprint to this process, and the R4DS diagram summarizes the process nicely.
Since we are going to try different combinations of feature engineering and models, workflows are really useful: a single object contains all of these operations. This helps organize your work, rather than leaving separate preprocessing and model objects in your workspace that were, at some point, used together in pairs.
## A first set of models
The obvious place to start is by adding both predictors as-is into the model:
```{r simple-glm}
# Create a workflow with just the model. We will add to this as we go.
glm_workflow <-
workflow() %>%
add_model(logit_mod)
simple_glm <-
glm_workflow %>%
# Add both predictors in
add_formula(Class ~ .) %>%
# Fit the model:
fit(data = bivariate_train)
```
To evaluate this model, the ROC curve will be computed along with its corresponding AUC.
```{r simple-roc}
library(yardstick)
simple_glm_probs <-
predict(simple_glm, bivariate_val, type = "prob") %>%
bind_cols(bivariate_val)
simple_glm_roc <-
simple_glm_probs %>%
roc_curve(Class, .pred_One)
simple_glm_probs %>% roc_auc(Class, .pred_One)
autoplot(simple_glm_roc)
```
This seems reasonable. One potential issue is that the two predictors are highly correlated (`r round(cor(bivariate_train$A, bivariate_train$B), 3)`), and this might cause some instability in the model.
Since there are two correlated predictors with skewed distributions and strictly positive values, it might be intuitive to use their ratio instead of the pair. We'll try that next by recycling the initial workflow and just adding a different formula:
```{r ratios}
ratio_glm <-
glm_workflow %>%
add_formula(Class ~ I(A/B)) %>%
fit(data = bivariate_train)
ratio_glm_probs <-
predict(ratio_glm, bivariate_val, type = "prob") %>%
bind_cols(bivariate_val)
ratio_glm_roc <-
ratio_glm_probs %>%
roc_curve(Class, .pred_One)
ratio_glm_probs %>% roc_auc(Class, .pred_One)
autoplot(simple_glm_roc) +
geom_path(
data = ratio_glm_roc,
aes(x = 1 - specificity, y = sensitivity),
col = "#FDE725FF"
)
```
The original analysis shows a slight edge, but the two models are probably within the experimental noise of one another.
## More complex feature engineering
Instead of combining the two predictors, would it help the model if we were to resolve the skewness of the variables? To test this theory, one option would be to use the Box-Cox transformation on each predictor individually to see if it recommends a nonlinear transformation. The transformation can encode a variety of different functions including the log transform, square root, inverse, and fractional transformations in-between these.
This cannot be easily done via the formula interface, so a recipe is used. A recipe is a list of sequential data processing steps that are conducted before the data are used in a model. For example, to use the Box-Cox method, a simple recipe would be:
```{r bc-rec, message = FALSE}
library(recipes)
trans_recipe <-
recipe(Class ~ ., data = bivariate_train) %>%
step_BoxCox(all_predictors())
```
Creating the recipe only makes an object with the instructions; it does not carry out the instructions (e.g. estimate the transformation parameter). To actually execute the recipe, we add it to our workflow with `add_recipe()` and then call `fit()`. Fitting the workflow evaluates both the model and the recipe.
```{r rec-trans}
trans_glm <-
glm_workflow %>%
add_recipe(trans_recipe) %>%
fit(data = bivariate_train)
trans_glm_probs <-
predict(trans_glm, bivariate_val, type = "prob") %>%
bind_cols(bivariate_val)
trans_glm_roc <-
trans_glm_probs %>%
roc_curve(Class, .pred_One)
trans_glm_probs %>% roc_auc(Class, .pred_One)
autoplot(simple_glm_roc) +
geom_path(
data = ratio_glm_roc,
aes(x = 1 - specificity, y = sensitivity),
col = "#FDE725FF"
) +
geom_path(
data = trans_glm_roc,
aes(x = 1 - specificity, y = sensitivity),
col = "#21908CFF"
)
```
That is a potential, if slight, improvement.
The Box-Cox procedure recommended transformations that are pretty close to the inverse.
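One quick way to check this is to look at the estimated transformation parameters stored in the prepped recipe inside the fitted workflow; a `value` near -1 corresponds to an inverse transformation. This is a quick sketch and is not evaluated here:
```{r bc-lambdas, eval = FALSE}
# Pull the prepped recipe out of the fitted workflow and tidy its first step
extract_recipe(trans_glm) %>%
  tidy(number = 1)
```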
The model above creates a class boundary for these data:
```{r plot-inverse}
ggplot(bivariate_train, aes(x = 1/A, y = 1/B, col = Class)) +
geom_point(alpha = .3) +
coord_equal(ratio = 1/12)
```
The correlation between these is about the same as in the original data. It might help the model to de-correlate them, and the standard technique for this is principal component analysis. A recipe step can be added that will conduct PCA and return the score values; the scores, instead of the original predictors, can then be used in the model. PCA chases variability, so it is important to put the two predictors on the same scale, traditionally by centering and scaling each column. For this reason, a normalization step is added prior to the PCA step.
```{r rec-pca}
pca_recipe <-
trans_recipe %>%
step_normalize(A, B) %>%
step_pca(A, B, num_comp = 2)
pca_glm <-
glm_workflow %>%
add_recipe(pca_recipe) %>%
fit(data = bivariate_train)
pca_glm_probs <-
predict(pca_glm, bivariate_val, type = "prob") %>%
bind_cols(bivariate_val)
pca_glm_roc <-
pca_glm_probs %>%
roc_curve(Class, .pred_One)
pca_glm_probs %>% roc_auc(Class, .pred_One)
```
These results are almost identical to the transformed model.
## The test set
Based on these results, the logistic regression model with the inverse terms is probably our best bet. Using the test set:
```{r test-set}
test_probs <-
predict(trans_glm, bivariate_test, type = "prob") %>%
bind_cols(bivariate_test)
test_roc <-
test_probs %>%
roc_curve(Class, .pred_One)
# A little more optimistic than the validation set
test_probs %>% roc_auc(Class, .pred_One)
autoplot(test_roc)
```
|
/scratch/gouwar.j/cran-all/cranData/workflows/vignettes/extras/getting-started.Rmd
|
---
title: "Workflow Stages"
vignette: >
%\VignetteEngine{knitr::rmarkdown}
%\VignetteIndexEntry{Workflow Stages}
output:
knitr:::html_vignette:
toc: yes
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
digits = 3,
collapse = TRUE,
comment = "#>"
)
options(digits = 3)
```
A workflow encompasses the three main stages of the modeling _process_: pre-processing of data, model fitting, and post-processing of results. This page enumerates the operations for each stage that have been implemented to date.
## Pre-processing
The two elements allowed for pre-processing are:
* A standard [model formula](https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Formulae-for-statistical-models) via `add_formula()`.
* A recipe object via `add_recipe()`.
You can use one or the other but not both.
## Model Fitting
`parsnip` model specifications are the only option here, specified via `add_model()`.
When using a preprocessor, you may need an additional formula for special model terms (e.g. for mixed models or generalized linear models). In these cases, specify that formula using `add_model()`'s `formula` argument, which will be passed to the underlying model when `fit()` is called.
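As a rough sketch of how this looks (not evaluated here; the data set and spline term are purely illustrative):
```{r model-formula-sketch, eval = FALSE}
library(workflows)
library(parsnip)
library(magrittr)
library(splines)
spec <- linear_reg() %>% set_engine("lm")
workflow() %>%
  # The preprocessor formula selects the columns passed on to the model
  add_formula(mpg ~ disp + wt) %>%
  # The model formula is what is actually given to `lm()` when `fit()` is called
  add_model(spec, formula = mpg ~ ns(disp, 3) + wt) %>%
  fit(data = mtcars)
```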
## Post-processing
Some examples of post-processing the model predictions would be: adding a probability threshold for two-class problems, calibration of probability estimates, truncating the possible range of predictions, and so on.
None of these are currently implemented but will be in coming versions.
|
/scratch/gouwar.j/cran-all/cranData/workflows/vignettes/stages.Rmd
|
#' @keywords internal
"_PACKAGE"
## usethis namespace: start
## usethis namespace: end
NULL
#' @import ggplot2
#' @import vctrs
#' @import rlang
#' @importFrom stats qnorm as.formula model.frame
#' @importFrom pillar obj_sum type_sum tbl_sum size_sum
#' @importFrom dplyr dplyr_reconstruct
#' @importFrom lifecycle deprecated
utils::globalVariables(
c(
".config", ".estimator", ".metric", "info", "metric", "mod_nm",
"model", "n", "pp_nm", "preprocessor", "preproc", "object", "engine",
"result", "std_err", "wflow_id", "func", "is_race", "num_rs", "option",
"metrics", "predictions", "hash", "id", "workflow", "comment", "get_from_env",
".get_tune_metric_names", "select_best", "notes"
)
)
# ------------------------------------------------------------------------------
#' @importFrom tune collect_metrics
#' @export
tune::collect_metrics
#' @importFrom tune collect_predictions
#' @export
tune::collect_predictions
#' @importFrom tune collect_notes
#' @export
tune::collect_notes
#' @importFrom dplyr %>%
#' @export
dplyr::`%>%`
#' @importFrom ggplot2 autoplot
#' @export
ggplot2::autoplot
#' @importFrom hardhat extract_spec_parsnip
#' @export
hardhat::extract_spec_parsnip
#'
#' @importFrom hardhat extract_recipe
#' @export
hardhat::extract_recipe
#'
#' @importFrom hardhat extract_fit_parsnip
#' @export
hardhat::extract_fit_parsnip
#'
#' @importFrom hardhat extract_fit_engine
#' @export
hardhat::extract_fit_engine
#'
#' @importFrom hardhat extract_mold
#' @export
hardhat::extract_mold
#'
#' @importFrom hardhat extract_preprocessor
#' @export
hardhat::extract_preprocessor
#'
#' @importFrom hardhat extract_workflow
#' @export
hardhat::extract_workflow
#'
#' @importFrom hardhat extract_parameter_set_dials
#' @export
hardhat::extract_parameter_set_dials
#'
#' @importFrom hardhat extract_parameter_dials
#' @export
hardhat::extract_parameter_dials
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/0_imports.R
|
#' Convert existing objects to a workflow set
#'
#' Use existing objects to create a workflow set. A list of objects that are
#' either simple workflows or objects that have class `"tune_results"` can be
#' converted into a workflow set.
#' @param ... One or more named objects. Names should be unique and the
#' objects should have at least one of the following classes: `workflow`,
#' `iteration_results`, `tune_results`, `resample_results`, or `tune_race`. Each
#' `tune_results` element should also contain the original workflow
#' (accomplished using the `save_workflow` option in the control function).
#' @return A workflow set. Note that the `option` column will not reflect the
#' options that were used to create each object.
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examples
#'
#' # ------------------------------------------------------------------------------
#' # Existing results
#'
#' # Use the already worked example to show how to add tuned
#' # objects to a workflow set
#' two_class_res
#'
#' results <- two_class_res %>% purrr::pluck("result")
#' names(results) <- two_class_res$wflow_id
#'
#' # These are all objects that have been resampled or tuned:
#' purrr::map_chr(results, ~ class(.x)[1])
#'
#' # Use rlang's !!! operator to splice in the elements of the list
#' new_set <- as_workflow_set(!!!results)
#'
#' # ------------------------------------------------------------------------------
#' # Make a set from unfit workflows
#'
#' library(parsnip)
#' library(workflows)
#'
#' lr_spec <- logistic_reg()
#'
#' main_effects <-
#' workflow() %>%
#' add_model(lr_spec) %>%
#' add_formula(Class ~ .)
#'
#' interactions <-
#' workflow() %>%
#' add_model(lr_spec) %>%
#' add_formula(Class ~ (.)^2)
#'
#' as_workflow_set(main = main_effects, int = interactions)
#' @export
as_workflow_set <- function(...) {
object <- rlang::list2(...)
# These could be workflows or objects of class `tune_result`
is_workflow <- purrr::map_lgl(object, ~ inherits(.x, "workflow"))
wflows <- vector("list", length(is_workflow))
wflows[is_workflow] <- object[is_workflow]
wflows[!is_workflow] <- purrr::map(object[!is_workflow], tune::.get_tune_workflow)
names(wflows) <- names(object)
check_names(wflows)
check_for_workflow(wflows)
res <- tibble::tibble(wflow_id = names(wflows))
res <-
res %>%
dplyr::mutate(
workflow = unname(wflows),
info = purrr::map(workflow, get_info),
option = purrr::map(1:nrow(res), ~ new_workflow_set_options())
)
res$result <- vector(mode = "list", length = nrow(res))
res$result[!is_workflow] <- object[!is_workflow]
res %>%
dplyr::select(wflow_id, info, option, result) %>%
new_workflow_set()
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/as_workflow_set.R
|
#' Plot the results of a workflow set
#'
#' This `autoplot()` method plots performance metrics that have been ranked using
#' a metric. It can also run `autoplot()` on the individual results (per
#' `wflow_id`).
#'
#' @param object A `workflow_set` whose elements have results.
#' @param rank_metric A character string for which metric should be used to rank
#' the results. If none is given, the first metric in the metric set is used
#' (after filtering by the `metric` option).
#' @param id A character string for what to plot. If a value of
#' `"workflow_set"` is used, the results of each model (and sub-model) are ordered
#' and plotted. Alternatively, a value of the workflow set's `wflow_id` can be
#' given and the `autoplot()` method is executed on that workflow's results.
#' @param select_best A logical; should the results only contain the numerically
#' best submodel per workflow?
#' @param metric A character vector specifying which metrics (apart from
#' `rank_metric`) to include in the visualization.
#' @param std_errs The number of standard errors to plot (if the standard error
#' exists).
#' @param type The aesthetics with which to differentiate workflows. The
#' default `"class"` maps color to the model type and shape to the preprocessor
#' type. The `"wflow_id"` option maps a color to each `wflow_id`. This
#' argument is ignored for values of `id` other than `"workflow_set"`.
#' @param ... Other options to pass to `autoplot()`.
#' @details
#' This function is intended to produce a default plot to visualize helpful
#' information across all possible applications of a workflow set. A more
#' appropriate plot for your specific analysis can be created by
#' calling [rank_results()] and using standard `ggplot2` code for plotting.
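#'
#' For example, something like this sketch (using the `two_class_res` example
#' object described in the examples) builds a custom plot from the ranked
#' results:
#'
#' ```r
#' library(ggplot2)
#'
#' rank_results(two_class_res, rank_metric = "roc_auc") %>%
#'   dplyr::filter(.metric == "roc_auc") %>%
#'   ggplot(aes(x = rank, y = mean, color = model)) +
#'   geom_point()
#' ```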
#'
#' The x-axis is the workflow rank in the set (a value of one being the best)
#' versus the performance metric(s) on the y-axis. With multiple metrics, there
#' will be facets for each metric.
#'
#' If multiple resamples are used, confidence bounds are shown for each result
#' (90% confidence, by default).
#' @return A ggplot object.
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examples
#' autoplot(two_class_res)
#' autoplot(two_class_res, select_best = TRUE)
#' autoplot(two_class_res, id = "yj_trans_cart", metric = "roc_auc")
#' @export
autoplot.workflow_set <- function(object, rank_metric = NULL, metric = NULL,
id = "workflow_set",
select_best = FALSE,
std_errs = qnorm(0.95),
type = "class",
...) {
rlang::arg_match(type, c("class", "wflow_id"))
check_string(rank_metric, allow_null = TRUE)
check_character(metric, allow_null = TRUE)
check_number_decimal(std_errs)
check_bool(select_best)
if (id == "workflow_set") {
p <- rank_plot(object,
rank_metric = rank_metric, metric = metric,
select_best = select_best, std_errs = std_errs, type = type
)
} else {
p <- autoplot(object$result[[which(object$wflow_id == id)]], metric = metric, ...)
}
p
}
rank_plot <- function(object, rank_metric = NULL, metric = NULL,
select_best = FALSE, std_errs = 1, type = "class") {
metric_info <- pick_metric(object, rank_metric, metric)
metrics <- collate_metrics(object)
res <- rank_results(object, rank_metric = metric_info$metric, select_best = select_best)
if (!is.null(metric)) {
keep_metrics <- unique(c(rank_metric, metric))
res <- dplyr::filter(res, .metric %in% keep_metrics)
}
num_metrics <- length(unique(res$.metric))
has_std_error <- !all(is.na(res$std_err))
p <-
switch(
type,
class =
ggplot(res, aes(x = rank, y = mean, col = model)) +
geom_point(aes(shape = preprocessor)),
wflow_id =
ggplot(res, aes(x = rank, y = mean, col = wflow_id)) +
geom_point()
)
if (num_metrics > 1) {
res$.metric <- factor(as.character(res$.metric), levels = metrics$metric)
p <-
p +
facet_wrap(~.metric, scales = "free_y", as.table = FALSE) +
labs(x = "Workflow Rank", y = "Metric")
} else {
p <- p + labs(x = "Workflow Rank", y = metric_info$metric)
}
if (has_std_error) {
p <-
p +
geom_errorbar(
aes(
ymin = mean - std_errs * std_err,
ymax = mean + std_errs * std_err
),
width = diff(range(res$rank)) / 75
)
}
p
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/autoplot.R
|
check_wf_set <- function(x, arg = caller_arg(x), call = caller_env()) {
if (!inherits(x, "workflow_set")) {
cli::cli_abort(
"{arg} must be a workflow set, not {obj_type_friendly(x)}.",
call = call
)
}
invisible(TRUE)
}
check_consistent_metrics <- function(x, fail = TRUE) {
metric_info <-
dplyr::distinct(x, .metric, wflow_id) %>%
dplyr::mutate(has = TRUE) %>%
tidyr::pivot_wider(names_from = ".metric", values_from = "has", values_fill = FALSE) %>%
dplyr::select(-wflow_id) %>%
purrr::map_dbl(~ sum(!.x))
if (any(metric_info > 0)) {
incp_metrics <- names(metric_info)[metric_info > 0]
msg <- paste(
"Some metrics were not used in all workflows:",
paste(incp_metrics, collapse = ", ")
)
if (fail) {
halt(msg)
} else {
rlang::warn(msg)
}
}
invisible(NULL)
}
check_incompete <- function(x, fail = TRUE) {
empty_res <- purrr::map_lgl(x$result, ~ identical(.x, list()))
failed_res <- purrr::map_lgl(x$result, ~ inherits(.x, "try-error"))
n_empty <- sum(empty_res | failed_res)
if (n_empty > 0) {
msg <- paste("There were", n_empty, "workflows that had no results.")
if (fail) {
halt(msg)
} else {
rlang::warn(msg)
}
}
invisible(NULL)
}
# TODO check for consistent resamples
# if global in local, overwrite or fail?
common_options <- function(model, global) {
old_names <- names(model)
new_names <- names(global)
common_names <- intersect(old_names, new_names)
if (length(common_names) > 0) {
res <- paste0("'", common_names, "'", collapse = ", ")
} else {
res <- ""
}
res
}
check_options <- function(model, id, global, action = "fail") {
res <- purrr::map_chr(model, common_options, global)
flag <- nchar(res) > 0
if (any(flag)) {
msg <- "There are existing options that are being modified\n"
msg <- paste0(msg, paste0("\t", id[flag], ": ", res[flag], collapse = "\n"))
if (action == "fail") {
halt(msg)
}
if (action == "warn") {
rlang::warn(msg)
}
}
invisible(NULL)
}
check_tune_args <- function(x) {
arg_names <- c("resamples", "param_info", "grid", "metrics", "control",
"iter", "objective", "initial", "eval_time")
bad_args <- setdiff(x, arg_names)
if (length(bad_args) > 0) {
msg <- paste0("'", bad_args, "'")
msg <- paste("The following options cannot be used as arguments for",
"`fit_resamples()` or the `tune_*()` functions:", msg)
halt(msg)
}
invisible(NULL)
}
# in case there are no tuning parameters, we can avoid warnings
recheck_options <- function(opts, .fn) {
if (.fn == "fit_resamples") {
allowed <- c("object", "resamples", "metrics", "control", "eval_time")
nms <- names(opts)
disallowed <- !(nms %in% allowed)
if (any(disallowed)) {
opts <- opts[!disallowed]
}
}
opts
}
check_fn <- function(fn, x, verbose) {
has_tune <- nrow(tune::tune_args(x)) > 0
  if (!has_tune && !(fn %in% c("fit_resamples", "tune_cluster"))) {
fn <- "fit_resamples"
if (verbose) {
cols <- tune::get_tune_colors()
msg <- "No tuning parameters. `fit_resamples()` will be attempted"
message(cols$symbol$info("i"), "\t", cols$message$info(msg))
}
}
fn
}
check_names <- function(x) {
nms <- names(x)
if (any(nms == "")) {
bad <- which(nms == "")
msg <- "Objects in these positions are not named:"
msg <- paste(msg, paste0(bad, collapse = ", "))
halt(msg)
} else if (all(is.null(nms))) {
halt("The values must be named.")
}
xtab <- table(nms)
if (any(xtab > 1)) {
msg <- "The workflow names should be unique:"
msg <- paste(msg, paste0("'", names(xtab)[xtab > 1], "'", collapse = ", "))
halt(msg)
}
invisible(NULL)
}
check_for_workflow <- function(x) {
no_wflow <- purrr::map_lgl(x, ~ !inherits(.x, "workflow"))
if (any(no_wflow)) {
bad <- names(no_wflow)[no_wflow]
msg <- "Some objects do not have workflows:"
msg <- paste(msg, paste0("'", bad, "'", collapse = ", "))
msg <- paste0(msg, ". Use the control option `save_workflow` and re-run.")
halt(msg)
}
invisible(NULL)
}
has_required_container_type <- function(x) {
rlang::is_list(x)
}
has_required_container_columns <- function(x) {
columns <- required_container_columns()
ok <- all(columns %in% names(x))
ok
}
required_container_columns <- function() {
c("wflow_id", "info", "option", "result")
}
has_valid_column_info_structure <- function(x) {
info <- x$info
rlang::is_list(info)
}
has_valid_column_info_inner_types <- function(x) {
info <- x$info
is_tibble_indicator <- purrr::map_lgl(info, tibble::is_tibble)
all(is_tibble_indicator)
}
has_valid_column_info_inner_names <- function(x) {
columns <- required_info_inner_names()
info <- x$info
list_of_names <- purrr::map(info, names)
has_names_indicator <- purrr::map_lgl(list_of_names, identical, y = columns)
all(has_names_indicator)
}
required_info_inner_names <- function() {
c("workflow", "preproc", "model", "comment")
}
has_valid_column_result_structure <- function(x) {
result <- x$result
rlang::is_list(result)
}
has_valid_column_result_inner_types <- function(x) {
result <- x$result
valid_indicator <- purrr::map_lgl(result, is_valid_result_inner_type)
all(valid_indicator)
}
has_valid_column_result_fingerprints <- function(x) {
result <- x$result
# Drop default results
default_indicator <- purrr::map_lgl(result, is_default_result_element)
result <- result[!default_indicator]
# Not sure how to fingerprint racing objects just yet. See
# https://github.com/tidymodels/rsample/issues/212
racing_indicator <- purrr::map_lgl(result, inherits, "tune_race")
result <- result[!racing_indicator]
tune_indicator <- purrr::map_lgl(result, inherits, "tune_results")
result <- result[tune_indicator]
if (length(result) > 0) {
hashes <- purrr::map_chr(result, rsample::.get_fingerprint)
} else {
hashes <- NA_character_
}
# Drop NAs for results created before rsample 0.1.0, which won't have a hash
pre_0.1.0 <- is.na(hashes)
hashes <- hashes[!pre_0.1.0]
result <- result[!pre_0.1.0]
if (rlang::is_empty(hashes)) {
# No hashes to check
TRUE
} else {
# Should collapse to a single hash value if all resamples are the same
uniques <- unique(hashes)
length(uniques) == 1L
}
}
is_valid_result_inner_type <- function(x) {
if (is_default_result_element(x)) {
# Default, before any results are filled
return(TRUE)
}
is.null(x) || inherits(x, "tune_results") || inherits(x, "try-error")
}
is_default_result_element <- function(x) {
identical(x, list())
}
has_valid_column_option_structure <- function(x) {
option <- x$option
rlang::is_list(option)
}
has_valid_column_option_inner_types <- function(x) {
option <- x$option
valid_options_indicator <- purrr::map_lgl(option, inherits, "workflow_set_options")
all(valid_options_indicator)
}
has_valid_column_wflow_id_structure <- function(x) {
wflow_id <- x$wflow_id
rlang::is_character(wflow_id)
}
has_valid_column_wflow_id_strings <- function(x) {
wflow_id <- x$wflow_id
uniques <- unique(wflow_id)
if (length(wflow_id) != length(uniques)) {
return(FALSE)
}
if (any(is.na(wflow_id))) {
return(FALSE)
}
if (any(wflow_id == "")) {
return(FALSE)
}
TRUE
}
has_all_pkgs <- function(w) {
pkgs <- generics::required_pkgs(w, infra = FALSE)
if (length(pkgs) > 0) {
is_inst <- purrr::map_lgl(pkgs, ~ rlang::is_true(requireNamespace(.x, quietly = TRUE)))
if (!all(is_inst)) {
cols <- tune::get_tune_colors()
msg <- paste0(
"The workflow requires packages that are not installed: ",
paste0("'", cols$message$danger(pkgs[!is_inst]), "'", collapse = ", "),
". Skipping this workflow."
)
message(
cols$symbol$danger(cli::symbol$cross), " ",
cols$message$warning(msg)
)
res <- FALSE
} else {
res <- TRUE
}
} else {
res <- TRUE
}
res
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/checks.R
|
#' Obtain and format results produced by tuning functions for workflow sets
#'
#' Return a tibble of performance metrics for all models or submodels.
#'
#' @param x A [`workflow_set`][workflow_set()] object that has been evaluated
#' with [workflow_map()].
#' @param ... Not currently used.
#' @param summarize A logical; should the performance estimates be summarized
#' via the mean (over resamples), or should the raw performance values (per
#' resample) be returned along with the resampling identifiers? When
#' collecting predictions, the predictions are averaged if multiple assessment
#' sets contain the same row.
#' @param parameters An optional tibble of tuning parameter values that can be
#' used to filter the predicted values before processing. This tibble should
#' only have columns for each tuning parameter identifier (e.g. `"my_param"`
#' if `tune("my_param")` was used).
#' @param select_best A single logical for whether the numerically best results
#' are retained. If `TRUE`, the `parameters` argument is ignored.
#' @param metric A character string for the metric that is used for
#' `select_best`.
#' @return A tibble.
#' @details
#'
#' When applied to a workflow set, the metrics and predictions that are returned
#' do not contain the actual tuning parameter columns and values (unlike when
#' these collect functions are run on other objects). The reason is that workflow
#' sets can contain different types of models or models with different tuning
#' parameters.
#'
#' If the columns are needed, there are two options. First, the `.config` column
#' can be used to merge the tuning parameter columns into an appropriate object.
#' Alternatively, the `map()` function can be used to get the metrics from the
#' original objects (see the example below).
#'
#' @seealso [tune::collect_metrics()], [rank_results()]
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examples
#' library(dplyr)
#' library(purrr)
#' library(tidyr)
#'
#' two_class_res
#'
#' # ------------------------------------------------------------------------------
#' \donttest{
#' collect_metrics(two_class_res)
#'
#' # Alternatively, if the tuning parameter values are needed:
#' two_class_res %>%
#' dplyr::filter(grepl("cart", wflow_id)) %>%
#' mutate(metrics = map(result, collect_metrics)) %>%
#' dplyr::select(wflow_id, metrics) %>%
#' tidyr::unnest(cols = metrics)
#' }
#'
#' collect_metrics(two_class_res, summarize = FALSE)
#' @export
collect_metrics.workflow_set <- function(x, ..., summarize = TRUE) {
rlang::check_dots_empty()
check_incompete(x, fail = TRUE)
check_bool(summarize)
x <-
dplyr::mutate(
x,
metrics = purrr::map(
result,
collect_metrics,
summarize = summarize
),
metrics = purrr::map2(metrics, result, remove_parameters)
)
info <- dplyr::bind_rows(x$info) %>% dplyr::select(-workflow, -comment)
x <-
dplyr::select(x, wflow_id, metrics) %>%
dplyr::bind_cols(info) %>%
tidyr::unnest(cols = c(metrics)) %>%
reorder_cols()
check_consistent_metrics(x, fail = FALSE)
x
}
remove_parameters <- function(x, object) {
prm <- tune::.get_tune_parameter_names(object)
x <- dplyr::select(x, -dplyr::one_of(prm))
x
}
reorder_cols <- function(x) {
if (any(names(x) == ".iter")) {
cols <- c("wflow_id", ".config", ".iter", "preproc", "model")
} else {
cols <- c("wflow_id", ".config", "preproc", "model")
}
dplyr::relocate(x, !!!cols)
}
#' @export
#' @rdname collect_metrics.workflow_set
collect_predictions.workflow_set <-
function(x, ..., summarize = TRUE, parameters = NULL, select_best = FALSE,
metric = NULL) {
rlang::check_dots_empty()
check_incompete(x, fail = TRUE)
check_bool(summarize)
check_bool(select_best)
check_string(metric, allow_null = TRUE)
if (select_best) {
x <-
dplyr::mutate(x,
predictions = purrr::map(
result,
~ select_bare_predictions(
.x,
summarize = summarize,
metric = metric
)
)
)
} else {
x <-
dplyr::mutate(
x,
predictions = purrr::map(
result,
get_bare_predictions,
summarize = summarize,
parameters = parameters
)
)
}
info <- dplyr::bind_rows(x$info) %>% dplyr::select(-workflow, -comment)
x <-
dplyr::select(x, wflow_id, predictions) %>%
dplyr::bind_cols(info) %>%
tidyr::unnest(cols = c(predictions)) %>%
reorder_cols()
x
}
select_bare_predictions <- function(x, metric, summarize) {
res <-
tune::collect_predictions(x,
summarize = summarize,
parameters = tune::select_best(x, metric = metric)
)
remove_parameters(res, x)
}
get_bare_predictions <- function(x, ...) {
res <- tune::collect_predictions(x, ...)
remove_parameters(res, x)
}
#' @export
#' @rdname collect_metrics.workflow_set
collect_notes.workflow_set <- function(x, ...) {
check_incompete(x)
res <- dplyr::rowwise(x)
res <- dplyr::mutate(res, notes = list(collect_notes(result)))
res <- dplyr::ungroup(res)
res <- dplyr::select(res, wflow_id, notes)
res <- tidyr::unnest(res, cols = notes)
res
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/collect.R
|
#' Add annotations and comments for workflows
#'
#' `comment_add()` can be used to log important information about the workflow or
#' its results as you work. Comments can be appended or removed.
#' @param x A workflow set outputted by [workflow_set()] or [workflow_map()].
#' @param id A single character string for a value in the `wflow_id` column. For
#' `comment_print()`, `id` can be a vector or `NULL` (and this indicates that
#' all comments are printed).
#' @param ... One or more character strings.
#' @param append A logical value to determine if the new comment should be added
#' to the existing values.
#' @param collapse A character string that separates the comments.
#' @return `comment_add()` and `comment_reset()` return an updated workflow set.
#' `comment_get()` returns a character string. `comment_print()` returns `NULL`
#' invisibly.
#' @export
#' @examples
#' two_class_set
#'
#' two_class_set %>% comment_get("none_cart")
#'
#' new_set <-
#' two_class_set %>%
#' comment_add("none_cart", "What does 'cart' stand for\u2753") %>%
#' comment_add("none_cart", "Classification And Regression Trees.")
#'
#' comment_print(new_set)
#'
#' new_set %>% comment_get("none_cart")
#'
#' new_set %>%
#' comment_reset("none_cart") %>%
#' comment_get("none_cart")
comment_add <- function(x, id, ..., append = TRUE, collapse = "\n") {
check_wf_set(x)
check_bool(append)
check_string(collapse)
dots <- list(...)
if (length(dots) == 0) {
return(x)
} else {
is_chr <- purrr::map_lgl(dots, is.character)
if (any(!is_chr)) {
rlang::abort("The comments should be character strings.")
}
}
check_string(id)
has_id <- id == x$wflow_id
if (!any(has_id)) {
rlang::abort("The 'id' value is not in wflow_id.")
}
id_index <- which(has_id)
current_val <- x$info[[id_index]]$comment
if (!is.na(current_val) && !append) {
rlang::abort("There is already a comment for this id and `append = FALSE`.")
}
new_value <- c(x$info[[id_index]]$comment, unlist(dots))
new_value <- new_value[!is.na(new_value) & nchar(new_value) > 0]
new_value <- paste0(new_value, collapse = collapse)
x$info[[id_index]]$comment <- new_value
x
}
#' @export
#' @rdname comment_add
comment_get <- function(x, id) {
check_wf_set(x)
if (length(id) > 1) {
rlang::abort("'id' should be a single character value.")
}
has_id <- id == x$wflow_id
if (!any(has_id)) {
rlang::abort("The 'id' value is not in wflow_id.")
}
id_index <- which(has_id)
x$info[[id_index]]$comment
}
#' @export
#' @rdname comment_add
comment_reset <- function(x, id) {
check_wf_set(x)
if (length(id) > 1) {
rlang::abort("'id' should be a single character value.")
}
has_id <- id == x$wflow_id
if (!any(has_id)) {
rlang::abort("The 'id' value is not in wflow_id.")
}
id_index <- which(has_id)
x$info[[id_index]]$comment <- character(1)
x
}
#' @export
#' @rdname comment_add
comment_print <- function(x, id = NULL, ...) {
check_wf_set(x)
if (is.null(id)) {
id <- x$wflow_id
}
x <- dplyr::filter(x, wflow_id %in% id)
chr_x <- purrr::map(x$wflow_id, ~ comment_get(x, id = .x))
has_comment <- purrr::map_lgl(chr_x, ~ nchar(.x) > 0)
chr_x <- chr_x[which(has_comment)]
id <- x$wflow_id[which(has_comment)]
for (i in seq_along(chr_x)) {
cat(cli::rule(id[i]), "\n\n")
tmp_chr <- comment_format(chr_x[[i]])
n_comments <- length(tmp_chr)
for (j in 1:n_comments) {
cat(tmp_chr[j], "\n\n")
}
}
invisible(NULL)
}
comment_format <- function(x, id, ...) {
x <- strsplit(x, "\n")[[1]]
x <- purrr::map(x, ~ strwrap(.x))
x <- purrr::map(x, ~ add_returns(.x))
paste0(x, collapse = "\n\n")
}
add_returns <- function(x) {
paste0(x, collapse = "\n")
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/comments.R
|
#' @export
dplyr_reconstruct.workflow_set <- function(data, template) {
workflow_set_maybe_reconstruct(data)
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/compat-dplyr.R
|
workflow_set_maybe_reconstruct <- function(x) {
if (workflow_set_is_reconstructable(x)) {
new_workflow_set0(x)
} else {
new_tibble0(x)
}
}
workflow_set_is_reconstructable <- function(x) {
has_required_container_type(x) &&
has_required_container_columns(x) &&
has_valid_column_info_structure(x) &&
has_valid_column_info_inner_types(x) &&
has_valid_column_info_inner_names(x) &&
has_valid_column_result_structure(x) &&
has_valid_column_result_inner_types(x) &&
has_valid_column_result_fingerprints(x) &&
has_valid_column_option_structure(x) &&
has_valid_column_option_inner_types(x) &&
has_valid_column_wflow_id_structure(x) &&
has_valid_column_wflow_id_strings(x)
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/compat-vctrs-helpers.R
|
# ------------------------------------------------------------------------------
# `vec_restore()`
#
# Called at the end of `vec_slice()` and `vec_ptype()` after all slicing has
# been done on the proxy object.
#
# If all invariants still hold after modifying the proxy, then we can restore
# to a workflow_set object. Otherwise, it will fall back to a bare tibble.
#
# Unlike rsample classes, `vec_ptype()` returns a workflow_set object here.
# This allows `vec_ptype.workflow_set.workflow_set()` to be called.
#' @export
vec_restore.workflow_set <- function(x, to, ...) {
workflow_set_maybe_reconstruct(x)
}
# ------------------------------------------------------------------------------
# `vec_ptype2()`
#
# When combining two workflow_sets together, `x` and `y` will be zero-row slices
# which should always result in a new workflow_set object, as long as
# `df_ptype2()` can compute a common type.
#
# Combining a workflow_set with a tibble/data.frame will only ever happen if
# the user calls `vec_c()` or `vec_rbind()` with one of each of those inputs.
# I think that it would be very difficult to expect that this returns a new
# workflow_set, so instead we always return a tibble.
#' @export
vec_ptype2.workflow_set.workflow_set <- function(x, y, ..., x_arg = "", y_arg = "") {
out <- vctrs::df_ptype2(x, y, ..., x_arg = x_arg, y_arg = y_arg)
workflow_set_maybe_reconstruct(out)
}
#' @export
vec_ptype2.workflow_set.tbl_df <- function(x, y, ..., x_arg = "", y_arg = "") {
vctrs::tib_ptype2(x, y, ..., x_arg = x_arg, y_arg = y_arg)
}
#' @export
vec_ptype2.tbl_df.workflow_set <- function(x, y, ..., x_arg = "", y_arg = "") {
vctrs::tib_ptype2(x, y, ..., x_arg = x_arg, y_arg = y_arg)
}
#' @export
vec_ptype2.workflow_set.data.frame <- function(x, y, ..., x_arg = "", y_arg = "") {
vctrs::tib_ptype2(x, y, ..., x_arg = x_arg, y_arg = y_arg)
}
#' @export
vec_ptype2.data.frame.workflow_set <- function(x, y, ..., x_arg = "", y_arg = "") {
vctrs::tib_ptype2(x, y, ..., x_arg = x_arg, y_arg = y_arg)
}
# ------------------------------------------------------------------------------
# `vec_cast()`
#
# These methods are designed with `vec_ptype2()` in mind.
#
# Casting from one workflow_set to another will happen "automatically" when
# two workflow_sets are combined with `vec_c()`. The common type will be
# computed with `vec_ptype2()`, then each input will be `vec_cast()` to that
# common type. It should always be possible to reconstruct the workflow_set
# if `df_cast()` is able to cast the underlying data frames successfully.
#
# Casting a tibble or data.frame to a workflow_set should never happen
# automatically, because the ptype2 methods always push towards
# tibble / data.frame. Since it is so unlikely that this will be done
# correctly, we don't ever allow it.
#
# Casting a workflow_set to a tibble or data.frame is easy, the underlying
# vctrs function does the work for us. This is used when doing
# `vec_c(<workflow_set>, <tbl>)`, as the `vec_ptype2()` method will compute
# a common type of tibble, and then each input will be cast to tibble.
#' @export
vec_cast.workflow_set.workflow_set <- function(x, to, ..., x_arg = "", to_arg = "") {
out <- vctrs::df_cast(x, to, ..., x_arg = x_arg, to_arg = to_arg)
workflow_set_maybe_reconstruct(out)
}
#' @export
vec_cast.workflow_set.tbl_df <- function(x, to, ..., x_arg = "", to_arg = "") {
stop_incompatible_cast_workflow_set(x, to, x_arg = x_arg, to_arg = to_arg)
}
#' @export
vec_cast.tbl_df.workflow_set <- function(x, to, ..., x_arg = "", to_arg = "") {
vctrs::tib_cast(x, to, ..., x_arg = x_arg, to_arg = to_arg)
}
#' @export
vec_cast.workflow_set.data.frame <- function(x, to, ..., x_arg = "", to_arg = "") {
stop_incompatible_cast_workflow_set(x, to, x_arg = x_arg, to_arg = to_arg)
}
#' @export
vec_cast.data.frame.workflow_set <- function(x, to, ..., x_arg = "", to_arg = "") {
vctrs::df_cast(x, to, ..., x_arg = x_arg, to_arg = to_arg)
}
# ------------------------------------------------------------------------------
stop_incompatible_cast_workflow_set <- function(x, to, ..., x_arg, to_arg) {
details <- "Can't cast to a <workflow_set> because the resulting structure is likely invalid."
vctrs::stop_incompatible_cast(x, to, x_arg = x_arg, to_arg = to_arg, details = details)
}
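# ------------------------------------------------------------------------------
# A minimal interactive sketch (not part of the original source) of the
# reconstruction rules described in the notes above. It assumes the package's
# `two_class_set` example data is available and is wrapped in `if (FALSE)` so
# that it never runs when the package is loaded.
if (FALSE) {
  # slicing keeps the workflow_set class because all invariants still hold
  vctrs::vec_slice(two_class_set, 1:2)

  # row-binding a workflow set to itself duplicates `wflow_id` values, so the
  # result should fall back to a bare tibble rather than an invalid workflow_set
  dplyr::bind_rows(two_class_set, two_class_set)
}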
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/compat-vctrs.R
|
#' Two Class Example Data
#'
#' @includeRmd man-roxygen/example_data.Rmd description
#' @includeRmd man-roxygen/two_class_set.Rmd
#'
#' @name two_class_set
#' @aliases two_class_set two_class_res
#' @docType data
#' @keywords datasets
#' @examples
#' data(two_class_set)
#'
#' two_class_set
NULL
#' Chicago Features Example Data
#'
#' @includeRmd man-roxygen/example_data.Rmd description
#' @includeRmd man-roxygen/chi_features_set.Rmd
#'
#' @name chi_features_set
#' @aliases chi_features_set chi_features_res
#' @docType data
#' @keywords datasets
#' @references Max Kuhn and Kjell Johnson (2019) _Feature Engineering and
#' Selection_, \url{https://bookdown.org/max/FES/a-more-complex-example.html}
#' @examples
#' data(chi_features_set)
#'
#' chi_features_set
NULL
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/data.R
|
#' Extract elements of workflow sets
#'
#' @description
#' These functions extract various elements from a workflow set object. If
#' these elements do not yet exist, an error is thrown.
#'
#' - `extract_preprocessor()` returns the formula, recipe, or variable
#' expressions used for preprocessing.
#'
#' - `extract_spec_parsnip()` returns the parsnip model specification.
#'
#' - `extract_fit_parsnip()` returns the parsnip model fit object.
#'
#' - `extract_fit_engine()` returns the engine specific fit embedded within
#' a parsnip model fit. For example, when using [parsnip::linear_reg()]
#' with the `"lm"` engine, this returns the underlying `lm` object.
#'
#' - `extract_mold()` returns the preprocessed "mold" object returned
#' from [hardhat::mold()]. It contains information about the preprocessing,
#' including either the prepped recipe, the formula terms object, or
#' variable selectors.
#'
#' - `extract_recipe()` returns the recipe. The `estimated` argument specifies
#' whether the fitted or original recipe is returned.
#'
#' - `extract_workflow_set_result()` returns the results of [workflow_map()]
#' for a particular workflow.
#'
#' - `extract_workflow()` returns the workflow object. The workflow will not
#' have been estimated.
#'
#' - `extract_parameter_set_dials()` returns the parameter set
#' _that will be used to fit_ the supplied row `id` of the workflow set.
#' Note that workflow sets reference a parameter set associated with the
#' `workflow` contained in the `info` column by default, but can be
#' fitted with a modified parameter set via the [option_add()] interface.
#' This extractor returns the latter, if it exists, and returns the former
#' if not, mirroring the process that [workflow_map()] follows to provide
#' tuning functions a parameter set.
#'
#' - `extract_parameter_dials()` returns the `parameters` object
#' _that will be used to fit_ the supplied tuning `parameter` in the supplied
#' row `id` of the workflow set. See the above notes in
#' `extract_parameter_set_dials()` on precedence.
#'
#' @inheritParams comment_add
#' @param id A single character string for a workflow ID.
#' @param parameter A single string for the parameter ID.
#' @param estimated A logical for whether the original (unfit) recipe or the
#' fitted recipe should be returned.
#' @param ... Other options (not currently used).
#' @details
#'
#' These functions supersede the `pull_*()` functions (e.g.,
#' [pull_workflow_set_result()]).
#' @return
#' The extracted value from the object, `x`, as described in the
#' description section.
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examples
#' library(tune)
#'
#' two_class_res
#'
#' extract_workflow_set_result(two_class_res, "none_cart")
#'
#' extract_workflow(two_class_res, "none_cart")
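#'
#' # A hedged sketch (not among the original examples): the parameter set that
#' # workflow_map() would use for a workflow, honoring any `param_info` value
#' # previously supplied through option_add()
#' extract_parameter_set_dials(two_class_res, "none_cart")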
#' @export
extract_workflow_set_result <- function(x, id, ...) {
check_wf_set(x)
y <- filter_id(x, id)
y$result[[1]]
}
#' @export
#' @rdname extract_workflow_set_result
extract_workflow.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
y$info[[1]]$workflow[[1]]
}
#' @export
#' @rdname extract_workflow_set_result
extract_spec_parsnip.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
extract_spec_parsnip(y$info[[1]]$workflow[[1]])
}
#' @export
#' @rdname extract_workflow_set_result
extract_recipe.workflow_set <- function(x, id, ..., estimated = TRUE) {
check_empty_dots(...)
if (!rlang::is_bool(estimated)) {
rlang::abort("`estimated` must be a single `TRUE` or `FALSE`.")
}
y <- filter_id(x, id)
extract_recipe(y$info[[1]]$workflow[[1]], estimated = estimated)
}
check_empty_dots <- function(...) {
opts <- list(...)
if (any(names(opts) == "estimated")) {
rlang::abort("'estimated' should be a named argument.")
}
if (length(opts) > 0) {
rlang::abort("'...' are not used in this function.")
}
invisible(NULL)
}
#' @export
#' @rdname extract_workflow_set_result
extract_fit_parsnip.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
extract_fit_parsnip(y$info[[1]]$workflow[[1]])
}
#' @export
#' @rdname extract_workflow_set_result
extract_fit_engine.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
extract_fit_engine(y$info[[1]]$workflow[[1]])
}
#' @export
#' @rdname extract_workflow_set_result
extract_mold.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
extract_mold(y$info[[1]]$workflow[[1]])
}
#' @export
#' @rdname extract_workflow_set_result
extract_preprocessor.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
extract_preprocessor(y$info[[1]]$workflow[[1]])
}
#' @export
#' @rdname extract_workflow_set_result
extract_parameter_set_dials.workflow_set <- function(x, id, ...) {
y <- filter_id(x, id)
if ("param_info" %in% names(y$option[[1]])) {
return(y$option[[1]][["param_info"]])
}
extract_parameter_set_dials(y$info[[1]]$workflow[[1]])
}
#' @export
#' @rdname extract_workflow_set_result
extract_parameter_dials.workflow_set <- function(x, id, parameter, ...) {
res <- extract_parameter_set_dials(x, id)
res <- extract_parameter_dials(res, parameter)
res
}
# ------------------------------------------------------------------------------
filter_id <- function(x, id) {
check_string(id)
out <- dplyr::filter(x, wflow_id == id)
if (nrow(out) != 1L) {
halt("`id` must correspond to a single row in `x`.")
}
out
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/extract.R
|
#' @importFrom generics fit
#' @noRd
#' @method fit workflow_set
#' @export
fit.workflow_set <- function(object, ...) {
msg <- "`fit()` is not well-defined for workflow sets."
# supply a different message depending on whether an attempt has already
# been made to evaluate the workflow set (i.e., whether `workflow_map()` has been run)
if (!all(purrr::map_lgl(object$result, ~ identical(.x, list())))) {
# if fitted:
msg <-
c(msg,
"i" = "Please see {.help [{.fun fit_best}](workflowsets::fit_best.workflow_set)}.")
} else {
# if not fitted:
msg <-
c(msg,
"i" = "Please see {.help [{.fun workflow_map}](workflowsets::workflow_map)}.")
}
cli::cli_abort(msg)
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/fit.R
|
#' @importFrom tune fit_best
#' @export
tune::fit_best
#' Fit a model to the numerically optimal configuration
#'
#' `fit_best()` takes results from tuning many models and fits the workflow
#' configuration associated with the best performance to the training set.
#'
#' @param x A [`workflow_set`][workflow_set()] object that has been evaluated
#' with [workflow_map()]. Note that the workflow set must have been fitted with
#' the [control option][option_add] `save_workflow = TRUE`.
#' @param metric A character string giving the metric to rank results by.
#' @inheritParams tune::fit_best.tune_results
#' @param ... Additional options to pass to
#' [tune::fit_best][tune::fit_best.tune_results].
#'
#' @details
#' This function is a shortcut for the steps needed to fit the
#' numerically optimal configuration in a fitted workflow set.
#' The function ranks results, extracts the tuning result pertaining
#' to the best result, and then again calls `fit_best()` (itself a
#' wrapper) on the tuning result containing the best result.
#'
#' In pseudocode:
#'
#' ```
#' rankings <- rank_results(wf_set, metric, select_best = TRUE)
#' tune_res <- extract_workflow_set_result(wf_set, rankings$wflow_id[1])
#' fit_best(tune_res, metric)
#' ```
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examplesIf rlang::is_installed(c("kknn", "modeldata", "recipes", "yardstick", "dials")) && identical(Sys.getenv("NOT_CRAN"), "true")
#' library(tune)
#' library(modeldata)
#' library(rsample)
#'
#' data(Chicago)
#' Chicago <- Chicago[1:1195,]
#'
#' time_val_split <-
#' sliding_period(
#' Chicago,
#' date,
#' "month",
#' lookback = 38,
#' assess_stop = 1
#' )
#'
#' chi_features_set
#'
#' chi_features_res_new <-
#' chi_features_set %>%
#' # note: must set `save_workflow = TRUE` to use `fit_best()`
#' option_add(control = control_grid(save_workflow = TRUE)) %>%
#' # evaluate with resamples
#' workflow_map(resamples = time_val_split, grid = 21, seed = 1, verbose = TRUE)
#'
#' chi_features_res_new
#'
#' # sort models by performance metrics
#' rank_results(chi_features_res_new)
#'
#' # fit the numerically optimal configuration to the training set
#' chi_features_wf <- fit_best(chi_features_res_new)
#'
#' chi_features_wf
#'
#' # to select optimal value based on a specific metric:
#' fit_best(chi_features_res_new, metric = "rmse")
#' @name fit_best.workflow_set
#' @export
fit_best.workflow_set <- function(x, metric = NULL, eval_time = NULL, ...) {
check_string(metric, allow_null = TRUE)
result_1 <- extract_workflow_set_result(x, id = x$wflow_id[[1]])
met_set <- tune::.get_tune_metrics(result_1)
if (is.null(metric)) {
metric <- .get_tune_metric_names(result_1)[1]
} else {
tune::check_metric_in_tune_results(tibble::as_tibble(met_set), metric)
}
if (is.null(eval_time) & is_dyn(met_set, metric)) {
eval_time <- tune::.get_tune_eval_times(result_1)[1]
}
rankings <-
rank_results(
x,
rank_metric = metric,
select_best = TRUE,
eval_time = eval_time
)
tune_res <- extract_workflow_set_result(x, id = rankings$wflow_id[1])
best_params <- select_best(tune_res, metric = metric, eval_time = eval_time)
fit_best(tune_res, parameters = best_params, ...)
}
# from unexported
# https://github.com/tidymodels/tune/blob/5b0e10fac559f18c075eb4bd7020e217c6174e66/R/metric-selection.R#L137-L141
is_dyn <- function(mtr_set, metric) {
mtr_info <- tibble::as_tibble(mtr_set)
mtr_cls <- mtr_info$class[mtr_info$metric == metric]
mtr_cls == "dynamic_survival_metric"
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/fit_best.R
|
# Standalone file: do not edit by hand
# Source: <https://github.com/r-lib/rlang/blob/main/R/standalone-obj-type.R>
# ----------------------------------------------------------------------
#
# ---
# repo: r-lib/rlang
# file: standalone-obj-type.R
# last-updated: 2023-05-01
# license: https://unlicense.org
# imports: rlang (>= 1.1.0)
# ---
#
# ## Changelog
#
# 2023-05-01:
# - `obj_type_friendly()` now only displays the first class of S3 objects.
#
# 2023-03-30:
# - `stop_input_type()` now handles `I()` input literally in `arg`.
#
# 2022-10-04:
# - `obj_type_friendly(value = TRUE)` now shows numeric scalars
# literally.
# - `stop_friendly_type()` now takes `show_value`, passed to
# `obj_type_friendly()` as the `value` argument.
#
# 2022-10-03:
# - Added `allow_na` and `allow_null` arguments.
# - `NULL` is now backticked.
# - Better friendly type for infinities and `NaN`.
#
# 2022-09-16:
# - Unprefixed usage of rlang functions with `rlang::` to
# avoid onLoad issues when called from rlang (#1482).
#
# 2022-08-11:
# - Prefixed usage of rlang functions with `rlang::`.
#
# 2022-06-22:
# - `friendly_type_of()` is now `obj_type_friendly()`.
# - Added `obj_type_oo()`.
#
# 2021-12-20:
# - Added support for scalar values and empty vectors.
# - Added `stop_input_type()`
#
# 2021-06-30:
# - Added support for missing arguments.
#
# 2021-04-19:
# - Added support for matrices and arrays (#141).
# - Added documentation.
# - Added changelog.
#
# nocov start
#' Return English-friendly type
#' @param x Any R object.
#' @param value Whether to describe the value of `x`. Special values
#' like `NA` or `""` are always described.
#' @param length Whether to mention the length of vectors and lists.
#' @return A string describing the type. Starts with an indefinite
#' article, e.g. "an integer vector".
#' @noRd
obj_type_friendly <- function(x, value = TRUE) {
if (is_missing(x)) {
return("absent")
}
if (is.object(x)) {
if (inherits(x, "quosure")) {
type <- "quosure"
} else {
type <- class(x)[[1L]]
}
return(sprintf("a <%s> object", type))
}
if (!is_vector(x)) {
return(.rlang_as_friendly_type(typeof(x)))
}
n_dim <- length(dim(x))
if (!n_dim) {
if (!is_list(x) && length(x) == 1) {
if (is_na(x)) {
return(switch(
typeof(x),
logical = "`NA`",
integer = "an integer `NA`",
double =
if (is.nan(x)) {
"`NaN`"
} else {
"a numeric `NA`"
},
complex = "a complex `NA`",
character = "a character `NA`",
.rlang_stop_unexpected_typeof(x)
))
}
show_infinites <- function(x) {
if (x > 0) {
"`Inf`"
} else {
"`-Inf`"
}
}
str_encode <- function(x, width = 30, ...) {
if (nchar(x) > width) {
x <- substr(x, 1, width - 3)
x <- paste0(x, "...")
}
encodeString(x, ...)
}
if (value) {
if (is.numeric(x) && is.infinite(x)) {
return(show_infinites(x))
}
if (is.numeric(x) || is.complex(x)) {
number <- as.character(round(x, 2))
what <- if (is.complex(x)) "the complex number" else "the number"
return(paste(what, number))
}
return(switch(
typeof(x),
logical = if (x) "`TRUE`" else "`FALSE`",
character = {
what <- if (nzchar(x)) "the string" else "the empty string"
paste(what, str_encode(x, quote = "\""))
},
raw = paste("the raw value", as.character(x)),
.rlang_stop_unexpected_typeof(x)
))
}
return(switch(
typeof(x),
logical = "a logical value",
integer = "an integer",
double = if (is.infinite(x)) show_infinites(x) else "a number",
complex = "a complex number",
character = if (nzchar(x)) "a string" else "\"\"",
raw = "a raw value",
.rlang_stop_unexpected_typeof(x)
))
}
if (length(x) == 0) {
return(switch(
typeof(x),
logical = "an empty logical vector",
integer = "an empty integer vector",
double = "an empty numeric vector",
complex = "an empty complex vector",
character = "an empty character vector",
raw = "an empty raw vector",
list = "an empty list",
.rlang_stop_unexpected_typeof(x)
))
}
}
vec_type_friendly(x)
}
vec_type_friendly <- function(x, length = FALSE) {
if (!is_vector(x)) {
abort("`x` must be a vector.")
}
type <- typeof(x)
n_dim <- length(dim(x))
add_length <- function(type) {
if (length && !n_dim) {
paste0(type, sprintf(" of length %s", length(x)))
} else {
type
}
}
if (type == "list") {
if (n_dim < 2) {
return(add_length("a list"))
} else if (is.data.frame(x)) {
return("a data frame")
} else if (n_dim == 2) {
return("a list matrix")
} else {
return("a list array")
}
}
type <- switch(
type,
logical = "a logical %s",
integer = "an integer %s",
numeric = ,
double = "a double %s",
complex = "a complex %s",
character = "a character %s",
raw = "a raw %s",
type = paste0("a ", type, " %s")
)
if (n_dim < 2) {
kind <- "vector"
} else if (n_dim == 2) {
kind <- "matrix"
} else {
kind <- "array"
}
out <- sprintf(type, kind)
if (n_dim >= 2) {
out
} else {
add_length(out)
}
}
.rlang_as_friendly_type <- function(type) {
switch(
type,
list = "a list",
NULL = "`NULL`",
environment = "an environment",
externalptr = "a pointer",
weakref = "a weak reference",
S4 = "an S4 object",
name = ,
symbol = "a symbol",
language = "a call",
pairlist = "a pairlist node",
expression = "an expression vector",
char = "an internal string",
promise = "an internal promise",
... = "an internal dots object",
any = "an internal `any` object",
bytecode = "an internal bytecode object",
primitive = ,
builtin = ,
special = "a primitive function",
closure = "a function",
type
)
}
.rlang_stop_unexpected_typeof <- function(x, call = caller_env()) {
abort(
sprintf("Unexpected type <%s>.", typeof(x)),
call = call
)
}
#' Return OO type
#' @param x Any R object.
#' @return One of `"bare"` (for non-OO objects), `"S3"`, `"S4"`,
#' `"R6"`, or `"R7"`.
#' @noRd
obj_type_oo <- function(x) {
if (!is.object(x)) {
return("bare")
}
class <- inherits(x, c("R6", "R7_object"), which = TRUE)
if (class[[1]]) {
"R6"
} else if (class[[2]]) {
"R7"
} else if (isS4(x)) {
"S4"
} else {
"S3"
}
}
#' @param x The object type which does not conform to `what`. Its
#' `obj_type_friendly()` is taken and mentioned in the error message.
#' @param what The friendly expected type as a string. Can be a
#' character vector of expected types, in which case the error
#' message mentions all of them in an "or" enumeration.
#' @param show_value Passed to `value` argument of `obj_type_friendly()`.
#' @param ... Arguments passed to [abort()].
#' @inheritParams args_error_context
#' @noRd
stop_input_type <- function(x,
what,
...,
allow_na = FALSE,
allow_null = FALSE,
show_value = TRUE,
arg = caller_arg(x),
call = caller_env()) {
# From standalone-cli.R
cli <- env_get_list(
nms = c("format_arg", "format_code"),
last = topenv(),
default = function(x) sprintf("`%s`", x),
inherit = TRUE
)
if (allow_na) {
what <- c(what, cli$format_code("NA"))
}
if (allow_null) {
what <- c(what, cli$format_code("NULL"))
}
if (length(what)) {
what <- oxford_comma(what)
}
if (inherits(arg, "AsIs")) {
format_arg <- identity
} else {
format_arg <- cli$format_arg
}
message <- sprintf(
"%s must be %s, not %s.",
format_arg(arg),
what,
obj_type_friendly(x, value = show_value)
)
abort(message, ..., call = call, arg = arg)
}
oxford_comma <- function(chr, sep = ", ", final = "or") {
n <- length(chr)
if (n < 2) {
return(chr)
}
head <- chr[seq_len(n - 1)]
last <- chr[n]
head <- paste(head, collapse = sep)
# Write a or b. But a, b, or c.
if (n > 2) {
paste0(head, sep, final, " ", last)
} else {
paste0(head, " ", final, " ", last)
}
}
# nocov end
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/import-standalone-obj-type.R
|
# Standalone file: do not edit by hand
# Source: <https://github.com/r-lib/rlang/blob/main/R/standalone-types-check.R>
# ----------------------------------------------------------------------
#
# ---
# repo: r-lib/rlang
# file: standalone-types-check.R
# last-updated: 2023-03-13
# license: https://unlicense.org
# dependencies: standalone-obj-type.R
# imports: rlang (>= 1.1.0)
# ---
#
# ## Changelog
#
# 2023-03-13:
# - Improved error messages of number checkers (@teunbrand)
# - Added `allow_infinite` argument to `check_number_whole()` (@mgirlich).
# - Added `check_data_frame()` (@mgirlich).
#
# 2023-03-07:
# - Added dependency on rlang (>= 1.1.0).
#
# 2023-02-15:
# - Added `check_logical()`.
#
# - `check_bool()`, `check_number_whole()`, and
# `check_number_decimal()` are now implemented in C.
#
# - For efficiency, `check_number_whole()` and
# `check_number_decimal()` now take a `NULL` default for `min` and
# `max`. This makes it possible to bypass unnecessary type-checking
# and comparisons in the default case of no bounds checks.
#
# 2022-10-07:
# - `check_number_whole()` and `_decimal()` no longer treat
# non-numeric types such as factors or dates as numbers. Numeric
# types are detected with `is.numeric()`.
#
# 2022-10-04:
# - Added `check_name()` that forbids the empty string.
# `check_string()` allows the empty string by default.
#
# 2022-09-28:
# - Removed `what` arguments.
# - Added `allow_na` and `allow_null` arguments.
# - Added `allow_decimal` and `allow_infinite` arguments.
# - Improved errors with absent arguments.
#
#
# 2022-09-16:
# - Unprefixed usage of rlang functions with `rlang::` to
# avoid onLoad issues when called from rlang (#1482).
#
# 2022-08-11:
# - Added changelog.
#
# nocov start
# Scalars -----------------------------------------------------------------
.standalone_types_check_dot_call <- .Call
check_bool <- function(x,
...,
allow_na = FALSE,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x) && .standalone_types_check_dot_call(ffi_standalone_is_bool_1.0.7, x, allow_na, allow_null)) {
return(invisible(NULL))
}
stop_input_type(
x,
c("`TRUE`", "`FALSE`"),
...,
allow_na = allow_na,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_string <- function(x,
...,
allow_empty = TRUE,
allow_na = FALSE,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
is_string <- .rlang_check_is_string(
x,
allow_empty = allow_empty,
allow_na = allow_na,
allow_null = allow_null
)
if (is_string) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a single string",
...,
allow_na = allow_na,
allow_null = allow_null,
arg = arg,
call = call
)
}
.rlang_check_is_string <- function(x,
allow_empty,
allow_na,
allow_null) {
if (is_string(x)) {
if (allow_empty || !is_string(x, "")) {
return(TRUE)
}
}
if (allow_null && is_null(x)) {
return(TRUE)
}
if (allow_na && (identical(x, NA) || identical(x, na_chr))) {
return(TRUE)
}
FALSE
}
check_name <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
is_string <- .rlang_check_is_string(
x,
allow_empty = FALSE,
allow_na = FALSE,
allow_null = allow_null
)
if (is_string) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a valid name",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
IS_NUMBER_true <- 0
IS_NUMBER_false <- 1
IS_NUMBER_oob <- 2
check_number_decimal <- function(x,
...,
min = NULL,
max = NULL,
allow_infinite = TRUE,
allow_na = FALSE,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (missing(x)) {
exit_code <- IS_NUMBER_false
} else if (0 == (exit_code <- .standalone_types_check_dot_call(
ffi_standalone_check_number_1.0.7,
x,
allow_decimal = TRUE,
min,
max,
allow_infinite,
allow_na,
allow_null
))) {
return(invisible(NULL))
}
.stop_not_number(
x,
...,
exit_code = exit_code,
allow_decimal = TRUE,
min = min,
max = max,
allow_na = allow_na,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_number_whole <- function(x,
...,
min = NULL,
max = NULL,
allow_infinite = FALSE,
allow_na = FALSE,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (missing(x)) {
exit_code <- IS_NUMBER_false
} else if (0 == (exit_code <- .standalone_types_check_dot_call(
ffi_standalone_check_number_1.0.7,
x,
allow_decimal = FALSE,
min,
max,
allow_infinite,
allow_na,
allow_null
))) {
return(invisible(NULL))
}
.stop_not_number(
x,
...,
exit_code = exit_code,
allow_decimal = FALSE,
min = min,
max = max,
allow_na = allow_na,
allow_null = allow_null,
arg = arg,
call = call
)
}
.stop_not_number <- function(x,
...,
exit_code,
allow_decimal,
min,
max,
allow_na,
allow_null,
arg,
call) {
if (allow_decimal) {
what <- "a number"
} else {
what <- "a whole number"
}
if (exit_code == IS_NUMBER_oob) {
min <- min %||% -Inf
max <- max %||% Inf
if (min > -Inf && max < Inf) {
what <- sprintf("%s between %s and %s", what, min, max)
} else if (x < min) {
what <- sprintf("%s larger than or equal to %s", what, min)
} else if (x > max) {
what <- sprintf("%s smaller than or equal to %s", what, max)
} else {
abort("Unexpected state in OOB check", .internal = TRUE)
}
}
stop_input_type(
x,
what,
...,
allow_na = allow_na,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_symbol <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_symbol(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a symbol",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_arg <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_symbol(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"an argument name",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_call <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_call(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a defused call",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_environment <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_environment(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"an environment",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_function <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_function(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a function",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_closure <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_closure(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"an R function",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_formula <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_formula(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a formula",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
# Vectors -----------------------------------------------------------------
check_character <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_character(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a character vector",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_logical <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is_logical(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a logical vector",
...,
allow_na = FALSE,
allow_null = allow_null,
arg = arg,
call = call
)
}
check_data_frame <- function(x,
...,
allow_null = FALSE,
arg = caller_arg(x),
call = caller_env()) {
if (!missing(x)) {
if (is.data.frame(x)) {
return(invisible(NULL))
}
if (allow_null && is_null(x)) {
return(invisible(NULL))
}
}
stop_input_type(
x,
"a data frame",
...,
allow_null = allow_null,
arg = arg,
call = call
)
}
# nocov end
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/import-standalone-types-check.R
|
#' Create formulas without each predictor
#'
#' From an initial model formula, create a list of formulas that exclude
#' each predictor.
#' @param formula A model formula that contains at least two predictors.
#' @param data A data frame.
#' @param full_model A logical; should the list include the original formula?
#' @param ... Options to pass to [stats::model.frame()]
#' @seealso [workflow_set()]
#' @return A named list of formulas
#' @details The new formulas obey the hierarchy rule so that interactions
#' without main effects are not included (unless the original formula contains
#' such terms).
#'
#' Factor predictors are left as-is (i.e., no indicator variables are created).
#'
#' @examples
#' data(penguins, package = "modeldata")
#'
#' leave_var_out_formulas(
#' bill_length_mm ~ .,
#' data = penguins
#' )
#'
#' leave_var_out_formulas(
#' bill_length_mm ~ (island + sex)^2 + flipper_length_mm,
#' data = penguins
#' )
#'
#' leave_var_out_formulas(
#' bill_length_mm ~ (island + sex)^2 + flipper_length_mm +
#' I(flipper_length_mm^2),
#' data = penguins
#' )
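#'
#' # A hedged sketch (not among the original examples): the named list of
#' # formulas can be used directly as the preprocessors of a workflow set
#' if (rlang::is_installed("parsnip")) {
#'   formulas <- leave_var_out_formulas(bill_length_mm ~ ., data = penguins)
#'   workflow_set(formulas, list(lm = parsnip::linear_reg()))
#' }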
#' @export
leave_var_out_formulas <- function(formula, data, full_model = TRUE, ...) {
check_formula(formula)
check_bool(full_model)
trms <- attr(model.frame(formula, data, ...), "terms")
x_vars <- attr(trms, "term.labels")
if (length(x_vars) < 2) {
rlang::abort("There should be at least 2 predictors in the formula.")
}
y_vars <- as.character(formula[[2]])
form_terms <- purrr::map(x_vars, rm_vars, lst = x_vars)
form <- purrr::map_chr(form_terms, ~ paste(y_vars, "~", paste(.x, collapse = " + ")))
form <- purrr::map(form, as.formula)
form <- purrr::map(form, rm_formula_env)
names(form) <- x_vars
if (full_model) {
form$everything <- formula
}
form
}
rm_vars <- function(x, lst) {
remaining_terms(x, lst)
}
remaining_terms <- function(x, lst) {
has_x <- purrr::map_lgl(lst, ~ x %in% all_terms(.x))
is_x <- lst == x
lst[!has_x & !is_x]
}
rm_formula_env <- function(x) {
attr(x, ".Environment") <- rlang::base_env()
x
}
all_terms <- function(x) {
y <- paste("~", x)
y <- as.formula(y)
all.vars(y)
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/leave_var_out_formulas.R
|
make_workflow <- function(x, y) {
exp_classes <- c("formula", "recipe", "workflow_variables")
w <-
workflows::workflow() %>%
workflows::add_model(y)
if (inherits(x, "formula")) {
w <- workflows::add_formula(w, x)
} else if (inherits(x, "recipe")) {
w <- workflows::add_recipe(w, x)
} else if (inherits(x, "workflow_variables")) {
w <- workflows::add_variables(w, variables = x)
} else {
halt(
"The preprocessor must be an object with one of the ",
"following classes: ", paste0("'", exp_classes, "'", collapse = ", ")
)
}
w
}
halt <- function(...) {
rlang::abort(paste0(...))
}
# ------------------------------------------------------------------------------
metric_to_df <- function(x, ...) {
metrics <- attributes(x)$metrics
names <- names(metrics)
metrics <- unname(metrics)
classes <- purrr::map_chr(metrics, ~ class(.x)[[1]])
directions <- purrr::map_chr(metrics, ~ attr(.x, "direction"))
info <- data.frame(metric = names, class = classes, direction = directions)
info
}
collate_metrics <- function(x) {
metrics <-
x$result %>%
purrr::map(tune::.get_tune_metrics) %>%
purrr::map(metric_to_df) %>%
purrr::map_dfr(~ dplyr::mutate(.x, order = 1:nrow(.x)))
mean_order <-
metrics %>%
dplyr::group_by(metric) %>%
dplyr::summarize(
order = mean(order, na.rm = TRUE), n = dplyr::n(),
.groups = "drop"
)
dplyr::full_join(
dplyr::distinct(metrics) %>% dplyr::select(-order),
mean_order,
by = "metric"
) %>%
dplyr::arrange(order)
}
pick_metric <- function(x, rank_metric, select_metrics = NULL) {
# mostly to check for completeness and consistency:
tmp <- collect_metrics(x)
metrics <- collate_metrics(x)
if (!is.null(select_metrics)) {
tmp <- dplyr::filter(tmp, .metric %in% select_metrics)
metrics <- dplyr::filter(metrics, metric %in% select_metrics)
}
if (is.null(rank_metric)) {
rank_metric <- metrics$metric[1]
direction <- metrics$direction[1]
} else {
if (!any(metrics$metric == rank_metric)) {
halt("Metric '", rank_metric, "' was not in the results.")
}
direction <- metrics$direction[metrics$metric == rank_metric]
}
list(metric = as.character(rank_metric), direction = as.character(direction))
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/misc.R
|
#' Add and edit options saved in a workflow set
#'
#' @description
#' The `option` column controls options for the functions that are used to
#' _evaluate_ the workflow set, such as [tune::fit_resamples()] or
#' [tune::tune_grid()]. Examples of common options to set for these functions
#' include `param_info` and `grid`.
#'
#' These functions are helpful for manipulating the information in the `option`
#' column.
#'
#' @export
#' @inheritParams comment_add
#' @param ... Arguments to pass to the `tune_*()` functions (e.g.
#' [tune::tune_grid()]) or [tune::fit_resamples()]. For `option_remove()` this
#' can be a series of unquoted option names.
#' @param id A character string of one or more values from the `wflow_id`
#' column that indicates which options to update. By default, all workflows
#' are updated.
#' @param strict A logical; should execution stop if existing options are being
#' replaced?
#' @return An updated workflow set.
#' @details
#' `option_add()` is used to update all of the options in a workflow set.
#'
#' `option_remove()` will eliminate specific options across rows.
#'
#' `option_add_parameters()` adds a parameter object to the `option` column
#' (if parameters are being tuned).
#'
#' Note that executing a function on the workflow set, such as `tune_grid()`,
#' will add any options given to that function to the `option` column.
#'
#' These functions do _not_ control options for the individual workflows, such as
#' the recipe blueprint. When creating a workflow manually, use
#' [workflows::add_model()] or [workflows::add_recipe()] to specify
#' extra options. To alter these in a workflow set, use
#' [update_workflow_model()] or [update_workflow_recipe()].
#'
#' @examples
#' library(tune)
#'
#' two_class_set
#'
#' two_class_set %>%
#' option_add(grid = 10)
#'
#' two_class_set %>%
#' option_add(grid = 10) %>%
#' option_add(grid = 50, id = "none_cart")
#'
#' two_class_set %>%
#' option_add_parameters()
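#'
#' # A hedged sketch (not among the original examples): removing a previously
#' # added option by its unquoted name
#' two_class_set %>%
#'   option_add(grid = 10) %>%
#'   option_remove(grid)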
option_add <- function(x, ..., id = NULL, strict = FALSE) {
check_wf_set(x)
dots <- list(...)
if (length(dots) == 0) {
return(x)
}
if (strict) {
act <- "fail"
} else {
act <- "warn"
}
check_tune_args(names(dots))
check_string(id, allow_null = TRUE)
check_bool(strict)
if (!is.null(id)) {
for (i in id) {
ind <- which(x$wflow_id == i)
if (length(ind) == 0) {
rlang::warn(paste("Don't have an 'id' value", i))
} else {
check_options(x$option[[ind]], x$wflow_id[[ind]], dots, action = act)
x$option[[ind]] <- append_options(x$option[[ind]], dots)
}
}
} else {
check_options(x$option, x$wflow_id, dots, action = act)
x <- dplyr::mutate(x, option = purrr::map(option, append_options, dots))
}
x
}
#' @export
#' @rdname option_add
option_remove <- function(x, ...) {
dots <- rlang::enexprs(...)
if (length(dots) == 0) {
return(x)
}
dots <- purrr::map_chr(dots, rlang::expr_text)
x <- dplyr::mutate(x, option = purrr::map(option, rm_elem, dots))
x
}
maybe_param <- function(x) {
prm <- hardhat::extract_parameter_set_dials(x)
if (nrow(prm) == 0) {
x <- list()
} else {
x <- list(param_info = prm)
}
x
}
#' @export
#' @rdname option_add
option_add_parameters <- function(x, id = NULL, strict = FALSE) {
prm <- purrr::map(x$info, ~ maybe_param(.x$workflow[[1]]))
num <- purrr::map_int(prm, length)
if (all(num == 0)) {
return(x)
}
if (strict) {
act <- "fail"
} else {
act <- "warn"
}
if (!is.null(id)) {
for (i in id) {
ind <- which(x$wflow_id == i)
if (length(ind) == 0) {
rlang::warn(paste("Don't have an 'id' value", i))
} else {
check_options(x$option[[ind]], x$wflow_id[[ind]], prm[[ind]], action = act)
x$option[[ind]] <- append_options(x$option[[ind]], prm[[ind]])
}
}
} else {
check_options(x$option, x$wflow_id, prm[1], action = act)
x <- dplyr::mutate(x, option = purrr::map2(option, prm, append_options))
}
x
}
rm_elem <- function(x, nms) {
x <- x[!(names(x) %in% nms)]
new_workflow_set_options(!!!x)
}
append_options <- function(model, global) {
old_names <- names(model)
new_names <- names(global)
common_names <- intersect(old_names, new_names)
if (length(common_names) > 0) {
model <- rm_elem(model, common_names)
}
all_opt <- c(model, global)
new_workflow_set_options(!!!all_opt)
}
#' @export
print.workflow_set_options <- function(x, ...) {
if (length(x) > 0) {
cat(
"a list of options with names: ",
paste0("'", names(x), "'", collapse = ", ")
)
} else {
cat("an empty container for options")
}
cat("\n")
}
#' Make a classed list of options
#'
#' This function returns a named list with an extra class of
#' `"workflow_set_options"` that has corresponding formatting methods for
#' printing inside of tibbles.
#' @param ... A set of named options (or nothing)
#' @return A classed list.
#' @examples
#' option_list(a = 1, b = 2)
#' option_list()
#' @export
option_list <- function(...) new_workflow_set_options(...)
new_workflow_set_options <- function(...) {
res <- rlang::list2(...)
if (any(names(res) == "")) {
rlang::abort("All options should be named.")
}
structure(res, class = c("workflow_set_options", "list"))
}
#' @export
type_sum.workflow_set_options <- function(x) {
paste0("opts[", length(x), "]")
}
#' @export
size_sum.workflow_set_options <- function(x) {
""
}
#' @export
obj_sum.workflow_set_options <- function(x) {
paste0("opts[", length(x), "]")
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/options.R
|
#' @importFrom stats predict
#' @noRd
#' @method predict workflow_set
#' @export
predict.workflow_set <- function(object, ...) {
cli::cli_abort(c(
"`predict()` is not well-defined for workflow sets.",
"i" = "To predict with the optimal model configuration from a workflow \\
set, ensure that the workflow set was fitted with the \\
{.help [control option](workflowsets::option_add)} \\
{.help [{.code save_workflow = TRUE}](tune::control_grid)}, run \\
{.help [{.fun fit_best}](tune::fit_best)}, and then predict using \\
{.help [{.fun predict}](workflows::predict.workflow)} on its output.",
"i" = "To collect predictions from a workflow set, ensure that \\
the workflow set was fitted with the \\
{.help [control option](workflowsets::option_add)} \\
{.help [{.code save_pred = TRUE}](tune::control_grid)} and run \\
{.help [{.fun collect_predictions}](tune::collect_predictions)}."
))
}
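# A minimal interactive sketch (not part of the original source) of the path
# that the error message above recommends. It uses the package's
# `chi_features_set` example data; `rs` and `new_points` are hypothetical
# placeholders for an rsample resampling object and new predictor data. Wrapped
# in `if (FALSE)` so it never runs when the package is loaded.
if (FALSE) {
  res <- workflow_map(
    chi_features_set,
    resamples = rs,
    control = tune::control_grid(save_workflow = TRUE, save_pred = TRUE)
  )
  # fit the single best configuration on the training set, then predict with it
  best_wflow_fit <- fit_best(res)
  predict(best_wflow_fit, new_data = new_points)
  # or gather the saved out-of-sample predictions across all workflows
  collect_predictions(res)
}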
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/predict.R
|
#' Extract elements from a workflow set
#'
#' `r lifecycle::badge("soft-deprecated")`
#'
#' `pull_workflow_set_result()` retrieves the results of [workflow_map()] for a
#' particular workflow while `pull_workflow()` extracts the unfitted workflow
#' from the `info` column.
#'
#'
#' @inheritParams comment_add
#' @param id A single character string for a workflow ID.
#' @details
#' The [extract_workflow_set_result()] and [extract_workflow()] functions should
#' be used instead of these functions.
#' @return `pull_workflow_set_result()` produces a `tune_results` or
#' `resample_results` object. `pull_workflow()` returns an unfit workflow
#' object.
#' @examples
#' library(tune)
#'
#' two_class_res
#'
#' pull_workflow_set_result(two_class_res, "none_cart")
#'
#' pull_workflow(two_class_res, "none_cart")
#' @export
pull_workflow_set_result <- function(x, id) {
lifecycle::deprecate_warn(
"0.1.0",
"pull_workflow_set_result()",
"extract_workflow_set_result()"
)
if (length(id) != 1) {
rlang::abort("'id' should have a single value.")
}
y <- x %>% dplyr::filter(wflow_id == id[1])
if (nrow(y) != 1) {
halt("No workflow ID found for '", id[1], "'")
}
y$result[[1]]
}
#' @export
#' @rdname pull_workflow_set_result
pull_workflow <- function(x, id) {
lifecycle::deprecate_warn("0.1.0", "pull_workflow()", "extract_workflow()")
if (length(id) != 1) {
rlang::abort("'id' should have a single value.")
}
y <- x %>% dplyr::filter(wflow_id == id[1])
if (nrow(y) != 1) {
halt("No workflow ID found for '", id[1], "'")
}
y$info[[1]]$workflow[[1]]
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/pull.R
|
#' Rank the results by a metric
#'
#' This function sorts the results by a specific performance metric.
#'
#' @inheritParams collect_metrics.workflow_set
#' @param rank_metric A character string for a metric.
#' @inheritParams tune::fit_best.tune_results
#' @param select_best A logical giving whether the results should only contain
#' the numerically best submodel per workflow.
#' @details
#' If some models have the exact same performance,
#' `rank(value, ties.method = "random")` is used (with a reproducible seed) so
#' that all ranks are integers.
#'
#' No columns are returned for the tuning parameters since they are likely to
#' be different (or not exist) for some models. The `wflow_id` and `.config`
#' columns can be used to determine the corresponding parameter values.
#' @return A tibble with columns: `wflow_id`, `.config`, `.metric`, `mean`,
#' `std_err`, `n`, `preprocessor`, `model`, and `rank`.
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examples
#' chi_features_res
#'
#' rank_results(chi_features_res)
#' rank_results(chi_features_res, select_best = TRUE)
#' rank_results(chi_features_res, rank_metric = "rsq")
#' @export
rank_results <- function(x, rank_metric = NULL, eval_time = NULL, select_best = FALSE) {
check_wf_set(x)
check_string(rank_metric, allow_null = TRUE)
check_bool(select_best)
result_1 <- extract_workflow_set_result(x, id = x$wflow_id[[1]])
met_set <- tune::.get_tune_metrics(result_1)
if (!is.null(rank_metric)) {
tune::check_metric_in_tune_results(tibble::as_tibble(met_set), rank_metric)
}
metric_info <- pick_metric(x, rank_metric)
metric <- metric_info$metric
direction <- metric_info$direction
wflow_info <- dplyr::bind_cols(purrr::map_dfr(x$info, I), dplyr::select(x, wflow_id))
eval_time <- tune::choose_eval_time(result_1, metric, eval_time = eval_time)
results <- collect_metrics(x) %>%
dplyr::select(wflow_id, .config, .metric, mean, std_err, n,
dplyr::any_of(".eval_time")) %>%
dplyr::full_join(wflow_info, by = "wflow_id") %>%
dplyr::select(-comment, -workflow)
if (!is.null(eval_time) && ".eval_time" %in% names(results)) {
results <- results[results$.eval_time == eval_time, ]
}
types <- x %>%
dplyr::full_join(wflow_info, by = "wflow_id") %>%
dplyr::mutate(
is_race = purrr::map_lgl(result, ~ inherits(.x, "tune_race")),
num_rs = purrr::map_int(result, get_num_resamples)
) %>%
dplyr::select(wflow_id, is_race, num_rs)
ranked <-
dplyr::full_join(results, types, by = "wflow_id") %>%
dplyr::filter(.metric == metric)
if (any(ranked$is_race)) {
    # remove any racing results with fewer resamples than the total number
rm_rows <-
ranked %>%
dplyr::filter(is_race & (num_rs > n)) %>%
dplyr::select(wflow_id, .config) %>%
dplyr::distinct()
if (nrow(rm_rows) > 0) {
ranked <- dplyr::anti_join(ranked, rm_rows, by = c("wflow_id", ".config"))
results <- dplyr::anti_join(results, rm_rows, by = c("wflow_id", ".config"))
}
}
if (direction == "maximize") {
ranked$mean <- -ranked$mean
}
if (select_best) {
best_by_wflow <-
dplyr::group_by(ranked, wflow_id) %>%
dplyr::slice_min(mean, with_ties = FALSE) %>%
dplyr::ungroup() %>%
dplyr::select(wflow_id, .config)
ranked <- dplyr::inner_join(ranked, best_by_wflow, by = c("wflow_id", ".config"))
}
# ensure reproducible rankings when there are ties
withr::with_seed(
1,
{
ranked <-
ranked %>%
dplyr::mutate(rank = rank(mean, ties.method = "random")) %>%
dplyr::select(wflow_id, .config, rank)
}
)
dplyr::inner_join(results, ranked, by = c("wflow_id", ".config")) %>%
dplyr::arrange(rank) %>%
dplyr::rename(preprocessor = preproc)
}
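# Count the number of distinct resample ids used in a tuning/resampling result.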
get_num_resamples <- function(x) {
purrr::map_dfr(x$splits, ~ .x$id) %>%
dplyr::distinct() %>%
nrow()
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/rank_results.R
|
#' Update components of a workflow within a workflow set
#'
#' @description
#' Workflows can take special arguments for the recipe (e.g. a blueprint) or a
#' model (e.g. a special formula). However, when creating a workflow set, there
#' is no way to specify these extra components.
#'
#' `update_workflow_model()` and `update_workflow_recipe()` allow users to set
#' these values _after_ the workflow set is initially created. They are
#' analogous to [workflows::add_model()] or [workflows::add_recipe()].
#'
#' @inheritParams comment_add
#' @param id A single character string from the `wflow_id` column indicating
#' which workflow to update.
#' @inheritParams workflows::add_recipe
#' @inheritParams workflows::add_model
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examples
#' library(parsnip)
#'
#' new_mod <-
#' decision_tree() %>%
#' set_engine("rpart", method = "anova") %>%
#' set_mode("classification")
#'
#' new_set <- update_workflow_model(two_class_res, "none_cart", spec = new_mod)
#'
#' new_set
#'
#' extract_workflow(new_set, id = "none_cart")
#' @export
update_workflow_model <- function(x, id, spec, formula = NULL) {
check_wf_set(x)
check_string(id)
check_formula(formula, allow_null = TRUE)
wflow <- extract_workflow(x, id = id)
wflow <- workflows::update_model(wflow, spec = spec, formula = formula)
id_ind <- which(x$wflow_id == id)
x$info[[id_ind]]$workflow[[1]] <- wflow
# Remove any existing results since they are now inconsistent
if (!identical(x$result[[id_ind]], list())) {
x$result[[id_ind]] <- list()
}
x
}
#' @rdname update_workflow_model
#' @export
update_workflow_recipe <- function(x, id, recipe, blueprint = NULL) {
check_wf_set(x)
check_string(id)
wflow <- extract_workflow(x, id = id)
wflow <- workflows::update_recipe(wflow, recipe = recipe, blueprint = blueprint)
id_ind <- which(x$wflow_id == id)
x$info[[id_ind]]$workflow[[1]] <- wflow
# Remove any existing results since they are now inconsistent
if (!identical(x$result[[id_ind]], list())) {
x$result[[id_ind]] <- list()
}
x
}
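# Illustrative sketch only (wrapped in `if (FALSE)` so it never runs): passing a
# hardhat blueprint to one workflow's recipe after the set has been created.
# `my_set`, the id "rec_glm", and the recipe `rec` are hypothetical stand-ins.
if (FALSE) {
  bp <- hardhat::default_recipe_blueprint(allow_novel_levels = TRUE)
  my_set <- update_workflow_recipe(my_set, id = "rec_glm", recipe = rec, blueprint = bp)
}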
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/update.R
|
#' Process a series of workflows
#'
#' `workflow_map()` will execute the same function across the workflows in the
#' set. The various `tune_*()` functions can be used as well as
#' [tune::fit_resamples()].
#' @param object A workflow set.
#' @param fn The name of the function to run, as a character. Acceptable values are:
#' ["tune_grid"][tune::tune_grid()],
#' ["tune_bayes"][tune::tune_bayes()],
#' ["fit_resamples"][tune::fit_resamples()],
#' ["tune_race_anova"][finetune::tune_race_anova()],
#' ["tune_race_win_loss"][finetune::tune_race_win_loss()], or
#' ["tune_sim_anneal"][finetune::tune_sim_anneal()]. Note that users need not
#' provide the namespace or parentheses in this argument,
#' e.g. provide `"tune_grid"` rather than `"tune::tune_grid"` or `"tune_grid()"`.
#' @param verbose A logical for logging progress.
#' @param seed A single integer that is set prior to each function execution.
#' @param ... Options to pass to the modeling function. See details below.
#' @return An updated workflow set. The `option` column will be updated with
#' any options for the `tune` package functions given to `workflow_map()`. Also,
#' the results will be added to the `result` column. If the computations for a
#' workflow fail, a `try-error` object will be saved in place of the results
#' (without stopping execution).
#' @seealso [workflow_set()], [as_workflow_set()], [extract_workflow_set_result()]
#' @details
#'
#' When passing options, anything passed in the `...` will be combined with any
#' values in the `option` column. The values in `...` will override that
#' column's values, and the new options are added to the `option` column.
#'
#' Any failure in execution results in the corresponding row of `result`
#' containing a `try-error` object.
#'
#' In cases where a model with no tuning parameters is mapped to one of the
#' tuning functions, [tune::fit_resamples()] will be used instead and a
#' warning is issued if `verbose = TRUE`.
#'
#' If a workflow requires packages that are not installed, a message is printed
#' and `workflow_map()` continues with the next workflow (if any).
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examplesIf rlang::is_installed(c("kknn", "modeldata", "recipes", "yardstick", "dials")) && identical(Sys.getenv("NOT_CRAN"), "true")
#' library(workflowsets)
#' library(workflows)
#' library(modeldata)
#' library(recipes)
#' library(parsnip)
#' library(dplyr)
#' library(rsample)
#' library(tune)
#' library(yardstick)
#' library(dials)
#'
#' # An example of processed results
#' chi_features_res
#'
#' # Recreating them:
#'
#' # ---------------------------------------------------------------------------
#' data(Chicago)
#' Chicago <- Chicago[1:1195,]
#'
#' time_val_split <-
#' sliding_period(
#' Chicago,
#' date,
#' "month",
#' lookback = 38,
#' assess_stop = 1
#' )
#'
#' # ---------------------------------------------------------------------------
#'
#' base_recipe <-
#' recipe(ridership ~ ., data = Chicago) %>%
#' # create date features
#' step_date(date) %>%
#' step_holiday(date) %>%
#' # remove date from the list of predictors
#' update_role(date, new_role = "id") %>%
#' # create dummy variables from factor columns
#' step_dummy(all_nominal()) %>%
#' # remove any columns with a single unique value
#' step_zv(all_predictors()) %>%
#' step_normalize(all_predictors())
#'
#' date_only <-
#' recipe(ridership ~ ., data = Chicago) %>%
#' # create date features
#' step_date(date) %>%
#' update_role(date, new_role = "id") %>%
#' # create dummy variables from factor columns
#' step_dummy(all_nominal()) %>%
#' # remove any columns with a single unique value
#' step_zv(all_predictors())
#'
#' date_and_holidays <-
#' recipe(ridership ~ ., data = Chicago) %>%
#' # create date features
#' step_date(date) %>%
#' step_holiday(date) %>%
#' # remove date from the list of predictors
#' update_role(date, new_role = "id") %>%
#' # create dummy variables from factor columns
#' step_dummy(all_nominal()) %>%
#' # remove any columns with a single unique value
#' step_zv(all_predictors())
#'
#' date_and_holidays_and_pca <-
#' recipe(ridership ~ ., data = Chicago) %>%
#' # create date features
#' step_date(date) %>%
#' step_holiday(date) %>%
#' # remove date from the list of predictors
#' update_role(date, new_role = "id") %>%
#' # create dummy variables from factor columns
#' step_dummy(all_nominal()) %>%
#' # remove any columns with a single unique value
#' step_zv(all_predictors()) %>%
#' step_pca(!!stations, num_comp = tune())
#'
#' # ---------------------------------------------------------------------------
#'
#' lm_spec <- linear_reg() %>% set_engine("lm")
#'
#' # ---------------------------------------------------------------------------
#'
#' pca_param <-
#' parameters(num_comp()) %>%
#' update(num_comp = num_comp(c(0, 20)))
#'
#' # ---------------------------------------------------------------------------
#'
#' chi_features_set <-
#' workflow_set(
#' preproc = list(date = date_only,
#' plus_holidays = date_and_holidays,
#' plus_pca = date_and_holidays_and_pca),
#' models = list(lm = lm_spec),
#' cross = TRUE
#' )
#'
#' # ---------------------------------------------------------------------------
#'
#' chi_features_res_new <-
#' chi_features_set %>%
#' option_add(param_info = pca_param, id = "plus_pca_lm") %>%
#' workflow_map(resamples = time_val_split, grid = 21, seed = 1, verbose = TRUE)
#'
#' chi_features_res_new
#' @export
workflow_map <- function(object, fn = "tune_grid", verbose = FALSE,
seed = sample.int(10^4, 1), ...) {
check_wf_set(object)
rlang::arg_match(fn, allowed_fn$func)
check_object_fn(object, fn)
check_bool(verbose)
check_number_decimal(seed)
on.exit({
cols <- tune::get_tune_colors()
message(cols$symbol$danger("Execution stopped; returning current results"))
return(new_workflow_set(object))
})
dots <- rlang::list2(...)
# check and add options to options column
if (length(dots) > 0) {
object <- rlang::exec("option_add", object, !!!dots)
}
iter_seq <- seq_along(object$wflow_id)
iter_chr <- format(iter_seq)
n <- length(iter_seq)
# Check for tuning when there is none?
# Also we should check that the resamples objects are the same using the
# new fingerprinting option.
for (iter in iter_seq) {
wflow <- extract_workflow(object, object$wflow_id[[iter]])
.fn <- check_fn(fn, wflow, verbose)
.fn_info <- dplyr::filter(allowed_fn, func == .fn)
log_progress(
verbose, object$wflow_id[[iter]], NULL, iter_chr[iter],
n, .fn, NULL
)
if (has_all_pkgs(wflow)) {
opt <- recheck_options(object$option[[iter]], .fn)
run_time <- system.time({
cl <- rlang::call2(.fn, .ns = .fn_info$pkg, object = wflow, !!!opt)
withr::with_seed(
seed[1],
object$result[[iter]] <- try(rlang::eval_tidy(cl), silent = TRUE)
)
})
object <- new_workflow_set(object)
log_progress(
verbose, object$wflow_id[[iter]], object$result[[iter]],
iter_chr[iter], n, .fn, run_time
)
}
}
on.exit(return(new_workflow_set(object)))
}
# nocov
allowed_fn <-
tibble::tibble(
func = c(
"tune_grid", "tune_bayes", "fit_resamples", "tune_race_anova",
"tune_race_win_loss", "tune_sim_anneal", "tune_cluster"
),
pkg = c(rep("tune", 3), rep("finetune", 3), "tidyclust")
)
allowed_fn_list <- paste0("'", allowed_fn$func, "'", collapse = ", ")
# nocov end
# ---------------------------------------------
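# Check that each workflow's model specification is compatible with `fn`:
# cluster specifications are required for tune_cluster(); parsnip model
# specifications are required for the other tuning/resampling functions.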
check_object_fn <- function(object, fn, call = rlang::caller_env()) {
wf_specs <- purrr::map(
object$wflow_id, ~extract_spec_parsnip(object, id = .x)
)
is_cluster_spec <- purrr::map_lgl(wf_specs, inherits, "cluster_spec")
if (identical(fn, "tune_cluster")) {
if (!all(is_cluster_spec)) {
cli::cli_abort(
"To tune with {.fn tune_cluster}, each workflow's model \\
specification must inherit from {.cls cluster_spec}, but \\
{.var {object$wflow_id[!is_cluster_spec]}} {?does/do} not.",
call = call
)
}
return(invisible())
}
is_model_spec <- purrr::map_lgl(wf_specs, inherits, "model_spec")
msg <-
"To tune with {.fn {fn}}, each workflow's model \\
specification must inherit from {.cls model_spec}, but \\
{.var {object$wflow_id[!is_model_spec]}} {?does/do} not."
if (any(is_cluster_spec)) {
msg <- c(
msg,
"i" = "{cli::qty(object$wflow_id[is_cluster_spec])} \\
The workflow{?/s} {.var {object$wflow_id[is_cluster_spec]}} \\
{?is a /are} cluster specification{?/s}. Did you intend to \\
set `fn = 'tune_cluster'`?"
)
}
if (!all(is_model_spec)) {
cli::cli_abort(msg, call = call)
}
return(invisible())
}
# ------------------------------------------------------------------------------
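# Print a per-workflow status line when `verbose = TRUE`: an info line before a
# workflow runs, a tick plus elapsed time on success, or a cross plus the
# captured error/note on failure.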
log_progress <- function(verbose, id, res, iter, n, .fn, elapsed) {
if (!verbose) {
return(invisible(NULL))
}
cols <- tune::get_tune_colors()
event <- ifelse(grepl("tune", .fn), "tuning: ", "resampling:")
msg <- paste0(iter, " of ", n, " ", event, " ", id)
if (inherits(res, "try-error")) {
# When a bad arg is passed (usually)
errors_msg <- gsub("\n", "", as.character(res))
errors_msg <- gsub("Error : ", "", errors_msg, fixed = TRUE)
message(
cols$symbol$danger(cli::symbol$cross), " ",
cols$message$info(msg),
cols$message$info(" failed with: "),
cols$message$danger(errors_msg)
)
return(invisible(NULL))
}
if (is.null(res)) {
message(
cols$symbol$info("i"), " ",
cols$message$info(msg)
)
} else {
all_null <- isTRUE(all(is.null(unlist(res$.metrics))))
if (inherits(res, "try-error") || all_null) {
if (all_null) {
res <- collect_res_notes(res)
}
errors_msg <- gsub("\n", "", as.character(res))
errors_msg <- gsub("Error : ", "", errors_msg, fixed = TRUE)
message(
cols$symbol$danger(cli::symbol$cross), " ",
cols$message$info(msg),
cols$message$info(" failed with "),
cols$message$danger(errors_msg)
)
} else {
time_msg <- paste0(" (", prettyunits::pretty_sec(elapsed[3]), ")")
message(
cols$symbol$success(cli::symbol$tick), " ",
cols$message$info(msg),
cols$message$info(time_msg)
)
}
}
invisible(NULL)
}
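# Pull the first note(s) from a result whose metrics are all missing so the
# underlying failure can be shown in the progress log.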
collect_res_notes <- function(x, show = 1) {
y <- purrr::map_dfr(x$.notes, I)
show <- min(show, nrow(y))
y <- paste0(y$.notes[1:show])
gsub("[\r\n]", "", y)
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/workflow_map.R
|
#' Generate a set of workflow objects from preprocessing and model objects
#'
#' Often a data practitioner needs to consider a large number of possible
#' modeling approaches for a task at hand, especially for new data sets
#' and/or when there is little knowledge about what modeling strategy
#' will work best. Workflow sets provide an expressive interface for
#' investigating multiple models or feature engineering strategies in such
#' a situation.
#'
#' @param preproc A list (preferably named) with preprocessing objects:
#' formulas, recipes, or [workflows::workflow_variables()].
#' @param models A list (preferably named) of `parsnip` model specifications.
#' @param cross A logical: should all combinations of the preprocessors and
#' models be used to create the workflows? If `FALSE`, the length of `preproc`
#' and `models` should be equal.
#' @param case_weights A single unquoted column name specifying the case
#' weights for the models. This must be a classed case weights column, as
#' determined by [hardhat::is_case_weights()]. See the "Case weights" section
#' below for more information.
#' @seealso [workflow_map()], [comment_add()], [option_add()],
#' [as_workflow_set()]
#' @details
#' The preprocessors that can be combined with the model objects can be one or
#' more of:
#'
#' * A traditional R formula.
#' * A recipe definition (un-prepared) via [recipes::recipe()].
#' * A selectors object created by [workflows::workflow_variables()].
#'
#' Since `preproc` is a named list, any combination of these can be
#' used in that argument (i.e., `preproc` can be mixed types).
#'
#' @section Case weights:
#' The `case_weights` argument can be passed as a single unquoted column name
#' identifying the data column giving model case weights. For each workflow
#' in the workflow set using an engine that supports case weights, the case
#' weights will be added with [workflows::add_case_weights()]. `workflow_set()`
#' will warn if any of the workflows specify an engine that does not support
#' case weights---and ignore the case weights argument for those workflows---but
#' will not fail.
#'
#' Read more about case weights in the tidymodels at `?parsnip::case_weights`.
#'
#' @return A tibble with extra class 'workflow_set'. A new set includes four
#' columns (but others can be added):
#'
#' * `wflow_id` contains character strings for the preprocessor/workflow
#' combination. These can be changed but must be unique.
#' * `info` is a list column with tibbles containing more specific information,
#' including any comments added using [comment_add()]. This tibble also
#' contains the workflow object (which can be easily retrieved using
#' [extract_workflow()]).
#' * `option` is a list column that will include a list of optional arguments
#' passed to the functions from the `tune` package. They can be added
#' manually via [option_add()] or automatically when options are passed to
#' [workflow_map()].
#' * `result` is a list column that will contain any objects produced when
#' [workflow_map()] is used.
#'
#' @includeRmd man-roxygen/example_data.Rmd note
#'
#' @examplesIf rlang::is_installed(c("kknn", "modeldata", "recipes", "yardstick"))
#' library(workflowsets)
#' library(workflows)
#' library(modeldata)
#' library(recipes)
#' library(parsnip)
#' library(dplyr)
#' library(rsample)
#' library(tune)
#' library(yardstick)
#'
#' # ------------------------------------------------------------------------------
#'
#' data(cells)
#' cells <- cells %>% dplyr::select(-case)
#'
#' set.seed(1)
#' val_set <- validation_split(cells)
#'
#' # ------------------------------------------------------------------------------
#'
#' basic_recipe <-
#' recipe(class ~ ., data = cells) %>%
#' step_YeoJohnson(all_predictors()) %>%
#' step_normalize(all_predictors())
#'
#' pca_recipe <-
#' basic_recipe %>%
#' step_pca(all_predictors(), num_comp = tune())
#'
#' ss_recipe <-
#' basic_recipe %>%
#' step_spatialsign(all_predictors())
#'
#' # ------------------------------------------------------------------------------
#'
#' knn_mod <-
#' nearest_neighbor(neighbors = tune(), weight_func = tune()) %>%
#' set_engine("kknn") %>%
#' set_mode("classification")
#'
#' lr_mod <-
#' logistic_reg() %>%
#' set_engine("glm")
#'
#' # ------------------------------------------------------------------------------
#'
#' preproc <- list(none = basic_recipe, pca = pca_recipe, sp_sign = ss_recipe)
#' models <- list(knn = knn_mod, logistic = lr_mod)
#'
#' cell_set <- workflow_set(preproc, models, cross = TRUE)
#' cell_set
#'
#' # ------------------------------------------------------------------------------
#' # Using variables and formulas
#'
#' # Select predictors by their names
#' channels <- paste0("ch_", 1:4)
#' preproc <- purrr::map(channels, ~ workflow_variables(class, c(contains(!!.x))))
#' names(preproc) <- channels
#' preproc$everything <- class ~ .
#' preproc
#'
#' cell_set_by_group <- workflow_set(preproc, models["logistic"])
#' cell_set_by_group
#' @export
workflow_set <- function(preproc, models, cross = TRUE, case_weights = NULL) {
check_bool(cross)
if (length(preproc) != length(models) &
(length(preproc) != 1 & length(models) != 1 &
!cross)
) {
rlang::abort(
"The lengths of 'preproc' and 'models' are different and `cross = FALSE`."
)
}
preproc <- fix_list_names(preproc)
models <- fix_list_names(models)
case_weights <- enquo(case_weights)
if (cross) {
res <- cross_objects(preproc, models)
} else {
res <- fuse_objects(preproc, models)
}
# call set_weights outside of mutate call so that dplyr
# doesn't prepend possible warnings with "Problem while computing..."
wfs <-
purrr::map2(res$preproc, res$model, make_workflow) %>%
set_weights(case_weights) %>%
unname()
res <-
res %>%
dplyr::mutate(
workflow = wfs,
info = purrr::map(workflow, get_info),
option = purrr::map(1:nrow(res), ~ new_workflow_set_options()),
result = purrr::map(1:nrow(res), ~ list())
) %>%
dplyr::select(wflow_id, info, option, result)
new_workflow_set(res)
}
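# Build the one-row tibble stored in the `info` column: the workflow itself,
# its preprocessor and model types, and an (initially empty) comment.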
get_info <- function(x) {
tibble::tibble(
workflow = list(x),
preproc = preproc_type(x),
model = model_type(x),
comment = character(1)
)
}
preproc_type <- function(x) {
x <- extract_preprocessor(x)
class(x)[1]
}
model_type <- function(x) {
x <- extract_spec_parsnip(x)
class(x)[1]
}
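# Name any unnamed preprocessor/model list elements after their classes,
# repairing duplicates so that the generated `wflow_id` values stay unique.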
fix_list_names <- function(x) {
prefix <- purrr::map_chr(x, ~ class(.x)[1])
prefix <- vctrs::vec_as_names(prefix, repair = "unique", quiet = TRUE)
prefix <- gsub("\\.\\.\\.", "_", prefix)
nms <- names(x)
if (is.null(nms)) {
names(x) <- prefix
} else if (any(nms == "")) {
no_name <- which(nms == "")
names(x)[no_name] <- prefix[no_name]
}
x
}
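# Create all preprocessor x model combinations; `wflow_id` is "<preproc>_<model>".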
cross_objects <- function(preproc, models) {
tidyr::crossing(preproc, models) %>%
dplyr::mutate(pp_nm = names(preproc), mod_nm = names(models)) %>%
dplyr::mutate(wflow_id = paste(pp_nm, mod_nm, sep = "_")) %>%
dplyr::select(wflow_id, preproc, model = models)
}
fuse_objects <- function(preproc, models) {
if (length(preproc) == 1 | length(models) == 1) {
return(cross_objects(preproc, models))
}
nms <-
tibble::tibble(wflow_id = paste(names(preproc), names(models), sep = "_"))
tibble::tibble(preproc = preproc, model = models) %>%
dplyr::bind_cols(nms)
}
# takes in a _list_ of workflows so that we can check whether case weights
# are allowed in batch and only prompt once if so.
set_weights <- function(workflows, case_weights) {
if (rlang::quo_is_null(case_weights)) {
return(workflows)
}
allowed <-
workflows %>%
purrr::map(extract_spec_parsnip) %>%
purrr::map_lgl(case_weights_allowed)
if (any(!allowed)) {
disallowed <-
workflows[!allowed] %>%
purrr::map(extract_spec_parsnip) %>%
purrr::map(purrr::pluck, "engine") %>%
unlist() %>%
unique()
rlang::warn(
glue::glue(
"Case weights are not enabled by the underlying model implementation ",
"for the following engine(s): ",
"{glue::glue_collapse(disallowed, sep = ', ')}.\n\n",
"The `case_weights` argument will be ignored for specifications ",
"using that engine."
)
)
}
workflows <-
purrr::map2(
workflows,
allowed,
add_case_weights_conditionally,
case_weights
)
workflows
}
# copied from parsnip
case_weights_allowed <- function(spec) {
mod_type <- class(spec)[1]
mod_eng <- spec$engine
mod_mode <- spec$mode
model_info <-
parsnip::get_from_env(paste0(mod_type, "_fit")) %>%
dplyr::filter(engine == mod_eng & mode == mod_mode)
# If weights are used, they are protected data arguments with the canonical
# name 'weights' (although this may not be the model function's argument name).
data_args <- model_info$value[[1]]$protect
any(data_args == "weights")
}
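# Add case weights to a single workflow only when its engine supports them.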
add_case_weights_conditionally <- function(workflow, allowed, case_weights) {
if (allowed) {
res <- workflows::add_case_weights(workflow, !!case_weights)
} else{
res <- workflow
}
res
}
# adapted from workflows
has_case_weights <- function(x) {
"case_weights" %in% names(x$pre$actions)
}
# TODO api for correlation analysis?
# TODO select_best methods (req tune changes)
# ------------------------------------------------------------------------------
#' @export
tbl_sum.workflow_set <- function(x) {
orig <- NextMethod()
c("A workflow set/tibble" = unname(orig))
}
# ------------------------------------------------------------------------------
#' @export
`[.workflow_set` <- function(x, i, j, drop = FALSE, ...) {
out <- NextMethod()
workflow_set_maybe_reconstruct(out)
}
# ------------------------------------------------------------------------------
#' @export
`names<-.workflow_set` <- function(x, value) {
out <- NextMethod()
workflow_set_maybe_reconstruct(out)
}
# ------------------------------------------------------------------------------
new_workflow_set <- function(x) {
if (!has_required_container_type(x)) {
halt("`x` must be a list.")
}
if (!has_required_container_columns(x)) {
columns <- required_container_columns()
halt(
"The object should have columns: ",
paste0("'", columns, "'", collapse = ", "),
"."
)
}
if (!has_valid_column_info_structure(x)) {
halt("The 'info' column should be a list.")
}
if (!has_valid_column_info_inner_types(x)) {
halt("All elements of 'info' must be tibbles.")
}
if (!has_valid_column_info_inner_names(x)) {
columns <- required_info_inner_names()
halt(
"The 'info' columns should have columns: ",
paste0("'", columns, "'", collapse = ", "),
"."
)
}
if (!has_valid_column_result_structure(x)) {
halt("The 'result' column should be a list.")
}
if (!has_valid_column_result_inner_types(x)) {
halt("Some elements of 'result' do not have class `tune_results`.")
}
if (!has_valid_column_result_fingerprints(x)) {
halt(
"Different resamples were used in the workflow 'result's. ",
"All elements of 'result' must use the same resamples."
)
}
if (!has_valid_column_option_structure(x)) {
halt("The 'option' column should be a list.")
}
if (!has_valid_column_option_inner_types(x)) {
halt("All elements of 'option' should have class 'workflow_set_options'.")
}
if (!has_valid_column_wflow_id_structure(x)) {
halt("The 'wflow_id' column should be character.")
}
if (!has_valid_column_wflow_id_strings(x)) {
halt("The 'wflow_id' column should contain unique, non-missing character strings.")
}
new_workflow_set0(x)
}
new_workflow_set0 <- function(x) {
new_tibble0(x, class = "workflow_set")
}
new_tibble0 <- function(x, ..., class = NULL) {
# Handle the 0-row case correctly by using `new_data_frame()`.
# This also correctly strips any attributes except `names` off `x`.
x <- vctrs::new_data_frame(x)
tibble::new_tibble(x, nrow = nrow(x), class = class)
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/workflow_set.R
|
.onLoad <- function(libname, pkgname) {
vctrs::s3_register("pillar::obj_sum", "workflow_set_options")
vctrs::s3_register("pillar::size_sum", "workflow_set_options")
vctrs::s3_register("pillar::type_sum", "workflow_set_options")
vctrs::s3_register("pillar::tbl_sum", "workflow_set")
vctrs::s3_register("tune::collect_metrics", "workflow_set")
vctrs::s3_register("tune::collect_predictions", "workflow_set")
vctrs::s3_register("ggplot2::autoplot", "workflow_set")
invisible()
}
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/R/zzz.R
|
## ----include = FALSE----------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(parsnip)
library(recipes)
library(dplyr)
library(workflowsets)
library(ggplot2)
theme_set(theme_bw() + theme(legend.position = "top"))
## ----tidymodels---------------------------------------------------------------
data(mlc_churn, package = "modeldata")
ncol(mlc_churn)
## ----churn-objects------------------------------------------------------------
library(workflowsets)
library(parsnip)
library(rsample)
library(dplyr)
library(ggplot2)
lr_model <- logistic_reg() %>% set_engine("glm")
set.seed(1)
trn_tst_split <- initial_split(mlc_churn, strata = churn)
# Resample the training set
set.seed(1)
folds <- vfold_cv(training(trn_tst_split), strata = churn)
## ----churn-formulas-----------------------------------------------------------
formulas <- leave_var_out_formulas(churn ~ ., data = mlc_churn)
length(formulas)
formulas[["area_code"]]
## ----churn-wflow-sets---------------------------------------------------------
churn_workflows <-
workflow_set(
preproc = formulas,
models = list(logistic = lr_model)
)
churn_workflows
## ----churn-wflow-set-fits-----------------------------------------------------
churn_workflows <-
churn_workflows %>%
workflow_map("fit_resamples", resamples = folds)
churn_workflows
## ----churn-metrics, fig.width=6, fig.height=5---------------------------------
roc_values <-
churn_workflows %>%
collect_metrics(summarize = FALSE) %>%
filter(.metric == "roc_auc") %>%
mutate(wflow_id = gsub("_logistic", "", wflow_id))
full_model <-
roc_values %>%
filter(wflow_id == "everything") %>%
select(full_model = .estimate, id)
differences <-
roc_values %>%
filter(wflow_id != "everything") %>%
full_join(full_model, by = "id") %>%
mutate(performance_drop = full_model - .estimate)
summary_stats <-
differences %>%
group_by(wflow_id) %>%
summarize(
std_err = sd(performance_drop)/sum(!is.na(performance_drop)),
performance_drop = mean(performance_drop),
lower = performance_drop - qnorm(0.975) * std_err,
upper = performance_drop + qnorm(0.975) * std_err,
.groups = "drop"
) %>%
mutate(
wflow_id = factor(wflow_id),
wflow_id = reorder(wflow_id, performance_drop)
)
summary_stats %>% filter(lower > 0)
ggplot(summary_stats, aes(x = performance_drop, y = wflow_id)) +
geom_point() +
geom_errorbar(aes(xmin = lower, xmax = upper), width = .25) +
ylab("")
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/inst/doc/evaluating-different-predictor-sets.R
|
---
title: "Evaluating different predictor sets"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Evaluating different predictor sets}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(parsnip)
library(recipes)
library(dplyr)
library(workflowsets)
library(ggplot2)
theme_set(theme_bw() + theme(legend.position = "top"))
```
Workflow sets are collections of tidymodels workflow objects that are created as a set. A workflow object is a combination of a preprocessor (e.g. a formula or recipe) and a `parsnip` model specification.
For some problems, users might want to try different combinations of preprocessing options, models, and/or predictor sets. Instead of creating a large number of individual objects, a cohort of workflows can be created simultaneously.
In this example, we'll fit the same model but specify different predictor sets in the preprocessor list.
Let's take a look at the customer churn data from the `modeldata` package:
```{r tidymodels}
data(mlc_churn, package = "modeldata")
ncol(mlc_churn)
```
There are 19 predictors, mostly numeric. These include aspects of each customer's account, such as `number_customer_service_calls`. The outcome is a factor with two levels: "yes" and "no".
We'll use a logistic regression to model the data. Since the data set is not small, we'll use basic 10-fold cross-validation to get resampled performance estimates.
```{r churn-objects}
library(workflowsets)
library(parsnip)
library(rsample)
library(dplyr)
library(ggplot2)
lr_model <- logistic_reg() %>% set_engine("glm")
set.seed(1)
trn_tst_split <- initial_split(mlc_churn, strata = churn)
# Resample the training set
set.seed(1)
folds <- vfold_cv(training(trn_tst_split), strata = churn)
```
We could make a single workflow that uses this model specification and a basic formula. However, in this application, we'd like to know which predictors are associated with the best area under the ROC curve. To investigate that, we create a set of formulas that each leave out one predictor:
```{r churn-formulas}
formulas <- leave_var_out_formulas(churn ~ ., data = mlc_churn)
length(formulas)
formulas[["area_code"]]
```
We create our workflow set:
```{r churn-wflow-sets}
churn_workflows <-
workflow_set(
preproc = formulas,
models = list(logistic = lr_model)
)
churn_workflows
```
Since we are using basic logistic regression, there is nothing to tune for these models. Instead of `tune_grid()`, we'll use `tune::fit_resamples()` by giving that function name as the first argument:
```{r churn-wflow-set-fits}
churn_workflows <-
churn_workflows %>%
workflow_map("fit_resamples", resamples = folds)
churn_workflows
```
To measure the effect of each predictor, let's subtract the area under the ROC curve for each leave-one-predictor-out model from the same metric for the full model. We'll match first by resampling ID, then compute the mean difference.
```{r churn-metrics, fig.width=6, fig.height=5}
roc_values <-
churn_workflows %>%
collect_metrics(summarize = FALSE) %>%
filter(.metric == "roc_auc") %>%
mutate(wflow_id = gsub("_logistic", "", wflow_id))
full_model <-
roc_values %>%
filter(wflow_id == "everything") %>%
select(full_model = .estimate, id)
differences <-
roc_values %>%
filter(wflow_id != "everything") %>%
full_join(full_model, by = "id") %>%
mutate(performance_drop = full_model - .estimate)
summary_stats <-
differences %>%
group_by(wflow_id) %>%
summarize(
std_err = sd(performance_drop)/sum(!is.na(performance_drop)),
performance_drop = mean(performance_drop),
lower = performance_drop - qnorm(0.975) * std_err,
upper = performance_drop + qnorm(0.975) * std_err,
.groups = "drop"
) %>%
mutate(
wflow_id = factor(wflow_id),
wflow_id = reorder(wflow_id, performance_drop)
)
summary_stats %>% filter(lower > 0)
ggplot(summary_stats, aes(x = performance_drop, y = wflow_id)) +
geom_point() +
geom_errorbar(aes(xmin = lower, xmax = upper), width = .25) +
ylab("")
```
From this, we can see that there are a few predictors that, when not included in the model, have a significant effect on the performance metric.
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/inst/doc/evaluating-different-predictor-sets.Rmd
|
---
title: "Tuning and comparing models"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Tuning-and-comparing-models}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.align = "center"
)
library(klaR)
library(mda)
library(rpart)
library(earth)
library(tidymodels)
library(discrim)
theme_set(theme_bw() + theme(legend.position = "top"))
```
Workflow sets are collections of tidymodels workflow objects that are created as a set. A [workflow](https://workflows.tidymodels.org/) object is a combination of a preprocessor (e.g. a formula or recipe) and a parsnip model specification.
For some problems, users might want to try different combinations of preprocessing options, models, and/or predictor sets. Instead of creating a large number of individual objects, a cohort of workflows can be created simultaneously.
In this example we'll use a small, two-dimensional data set for illustrating classification models. The data are in the [modeldata](https://modeldata.tidymodels.org/) package:
```{r parabolic}
library(tidymodels)
data(parabolic)
str(parabolic)
```
Let's hold back 25% of the data for a test set:
```{r 2d-splits}
set.seed(1)
split <- initial_split(parabolic)
train_set <- training(split)
test_set <- testing(split)
```
Visually, we can see that the predictors are mildly correlated and some type of nonlinear class boundary is probably needed.
```{r 2d-plot, fig.width=5, fig.height=5.1}
ggplot(train_set, aes(x = X1, y = X2, col = class)) +
geom_point(alpha = 0.5) +
coord_fixed(ratio = 1) +
scale_color_brewer(palette = "Dark2")
```
## Defining the models
We'll fit two types of discriminant analysis (DA) models (regularized DA and flexible DA using MARS, multivariate adaptive regression splines) as well as a simple classification tree. Let's create those parsnip model objects:
```{r models}
library(discrim)
mars_disc_spec <-
discrim_flexible(prod_degree = tune()) %>%
set_engine("earth")
reg_disc_spec <-
discrim_regularized(frac_common_cov = tune(), frac_identity = tune()) %>%
set_engine("klaR")
cart_spec <-
decision_tree(cost_complexity = tune(), min_n = tune()) %>%
set_engine("rpart") %>%
set_mode("classification")
```
Next, we'll need a resampling method. Let's use the bootstrap:
```{r resamples}
set.seed(2)
train_resamples <- bootstraps(train_set)
```
We have a simple data set so a basic formula will suffice for our preprocessing. (If we needed more complex feature engineering, we could use a recipe as a preprocessor instead.)
The workflow set takes a named list of preprocessors and a named list of parsnip model specifications, and can cross them to find all combinations. For our case, it will just make a set of workflows for our models:
```{r wflow-set}
all_workflows <-
workflow_set(
preproc = list("formula" = class ~ .),
    models = list(regularized = reg_disc_spec, mars = mars_disc_spec, cart = cart_spec)
)
all_workflows
```
## Adding options to the models
We can add any specific options that we think are important for tuning or resampling using the `option_add()` function.
For illustration, let's use the `extract` argument of the [control function](https://tune.tidymodels.org/reference/control_grid.html) to save the fitted workflow. We can then pick which workflow should use this option with the `id` argument:
```{r option}
all_workflows <-
all_workflows %>%
option_add(id = "formula_cart",
control = control_grid(extract = function(x) x))
all_workflows
```
Keep in mind that this will save the fitted workflow for each resample and each tuning parameter combination that we evaluate.
## Tuning the models
Since these models all have tuning parameters, we can apply the `workflow_map()` function to execute grid search for each of these model-specific arguments. The default function to apply across the workflows is `tune_grid()` but other `tune_*()` functions and `fit_resamples()` can be used by passing the function name as the first argument.
Let's use the same grid size for each model. For the MARS model, there are only two possible tuning parameter values but `tune_grid()` is forgiving about our request of 20 parameter values.
The `verbose` option provides a concise listing for which workflow is being processed:
```{r tuning}
all_workflows <-
all_workflows %>%
# Specifying arguments here adds to any previously set with `option_add()`:
workflow_map(resamples = train_resamples, grid = 20, verbose = TRUE)
all_workflows
```
The `result` column now has the results of each `tune_grid()` call.
From these results, we can get quick assessments of how well these models classified the data:
```{r rank_res, fig.width=8, fig.height=5.5, out.width="100%"}
rank_results(all_workflows, rank_metric = "roc_auc")
# or a handy plot:
autoplot(all_workflows, metric = "roc_auc")
```
## Examining specific model results
It looks like the MARS model did well. We can plot its results and also pull out the tuning object too:
```{r mars, fig.width=6, fig.height=4.25}
autoplot(all_workflows, metric = "roc_auc", id = "formula_mars")
```
Not much of a difference in performance; it may be prudent to use the additive model (via `prod_degree = 1`).
We can also pull out the results of `tune_grid()` for this model:
```{r mars-results-print}
mars_results <-
all_workflows %>%
extract_workflow_set_result("formula_mars")
mars_results
```
Let's get that workflow object and finalize the model:
```{r final-mars}
mars_workflow <-
all_workflows %>%
extract_workflow("formula_mars")
mars_workflow
mars_workflow_fit <-
mars_workflow %>%
finalize_workflow(tibble(prod_degree = 1)) %>%
fit(data = train_set)
mars_workflow_fit
```
Let's see how well this model does on the test set:
```{r grid-pred}
# Make a grid to predict the whole space:
grid <-
crossing(X1 = seq(min(train_set$X1), max(train_set$X1), length.out = 250),
           X2 = seq(min(train_set$X2), max(train_set$X2), length.out = 250))
grid <-
grid %>%
bind_cols(predict(mars_workflow_fit, grid, type = "prob"))
```
We can produce a contour plot for the class boundary, then overlay the data:
```{r 2d-boundary, fig.width=5, fig.height=5.1, warning=FALSE}
ggplot(grid, aes(x = X1, y = X2)) +
geom_contour(aes(z = .pred_Class2), breaks = 0.5, col = "black") +
geom_point(data = test_set, aes(col = class), alpha = 0.5) +
coord_fixed(ratio = 1)+
scale_color_brewer(palette = "Dark2")
```
The workflow set allows us to screen many models to find one that does very well. This can be combined with parallel processing and, especially, racing methods from the [finetune](https://finetune.tidymodels.org/reference/tune_race_anova.html) package to optimize efficiency.
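As a rough sketch (not evaluated in this article), racing could be applied to the same models by passing the `finetune` function name to `workflow_map()`. The code below assumes the finetune package is installed and starts from a fresh set, since the `control_grid()` option added earlier for the CART workflow would need to be swapped for `finetune::control_race()`:
```r
library(finetune)

workflow_set(
  preproc = list("formula" = class ~ .),
  models = list(regularized = reg_disc_spec, mars = mars_disc_spec, cart = cart_spec)
) %>%
  workflow_map("tune_race_anova", resamples = train_resamples, grid = 20, verbose = TRUE)
```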
## Extracting information from the results
Recall that we added an option to the CART model to extract the model results. Let's pull out the CART tuning results and see what we have:
```{r extraction-res}
cart_res <-
all_workflows %>%
extract_workflow_set_result("formula_cart")
cart_res
```
The `.extracts` column holds a tibble with 20 rows for each resample (since there were 20 tuning parameter candidates), and each of those rows contains a fitted workflow. Since `cart_res` has `r nrow(cart_res)` rows (one per resample), that's `r nrow(cart_res) * 20` fitted workflows in total.
Let's slim that down by keeping the ones that correspond to the best tuning parameters:
```{r extract-subset}
# Get the best results
best_cart <- select_best(cart_res, metric = "roc_auc")
cart_wflows <-
cart_res %>%
select(id, .extracts) %>%
unnest(cols = .extracts) %>%
inner_join(best_cart)
cart_wflows
```
What can we do with these? Let's write a function to return the number of terminal nodes in the tree.
```{r cart-nodes}
num_nodes <- function(wflow) {
  wflow %>%
    # Pull out the rpart model
    extract_fit_engine() %>%
    # The 'frame' element is a data frame with a column ('var') that
    # indicates which nodes are terminal leaves
    pluck("frame") %>%
    # Convert to a tibble
    as_tibble() %>%
    # Keep only the rows that are terminal nodes
    filter(var == "<leaf>") %>%
    # Count them
    nrow()
}
cart_wflows$.extracts[[1]] %>% num_nodes()
```
Now let's create a column with the results for each resample:
```{r num-nodes-counts}
cart_wflows <-
cart_wflows %>%
mutate(num_nodes = map_int(.extracts, num_nodes))
cart_wflows
```
The average number of terminal nodes for this model is `r round(mean(cart_wflows$num_nodes), 1)` nodes.
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/vignettes/articles/tuning-and-comparing-models.Rmd
|
---
title: "Evaluating different predictor sets"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Evaluating different predictor sets}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(parsnip)
library(recipes)
library(dplyr)
library(workflowsets)
library(ggplot2)
theme_set(theme_bw() + theme(legend.position = "top"))
```
Workflow sets are collections of tidymodels workflow objects that are created as a set. A workflow object is a combination of a preprocessor (e.g. a formula or recipe) and a `parsnip` model specification.
For some problems, users might want to try different combinations of preprocessing options, models, and/or predictor sets. Instead of creating a large number of individual objects, a cohort of workflows can be created simultaneously.
In this example, we'll fit the same model but specify different predictor sets in the preprocessor list.
Let's take a look at the customer churn data from the `modeldata` package:
```{r tidymodels}
data(mlc_churn, package = "modeldata")
ncol(mlc_churn)
```
There are 19 predictors, mostly numeric. These include aspects of each customer's account, such as `number_customer_service_calls`. The outcome is a factor with two levels: "yes" and "no".
We'll use a logistic regression to model the data. Since the data set is not small, we'll use basic 10-fold cross-validation to get resampled performance estimates.
```{r churn-objects}
library(workflowsets)
library(parsnip)
library(rsample)
library(dplyr)
library(ggplot2)
lr_model <- logistic_reg() %>% set_engine("glm")
set.seed(1)
trn_tst_split <- initial_split(mlc_churn, strata = churn)
# Resample the training set
set.seed(1)
folds <- vfold_cv(training(trn_tst_split), strata = churn)
```
We could make a single workflow that uses this model specification and a basic formula. However, in this application, we'd like to know which predictors are associated with the best area under the ROC curve. To investigate that, we create a set of formulas that each leave out one predictor:
```{r churn-formulas}
formulas <- leave_var_out_formulas(churn ~ ., data = mlc_churn)
length(formulas)
formulas[["area_code"]]
```
We create our workflow set:
```{r churn-wflow-sets}
churn_workflows <-
workflow_set(
preproc = formulas,
models = list(logistic = lr_model)
)
churn_workflows
```
Since we are using basic logistic regression, there is nothing to tune for these models. Instead of `tune_grid()`, we'll use `tune::fit_resamples()` by giving that function name as the first argument:
```{r churn-wflow-set-fits}
churn_workflows <-
churn_workflows %>%
workflow_map("fit_resamples", resamples = folds)
churn_workflows
```
To measure the effect of each predictor, let's subtract the area under the ROC curve for each leave-one-predictor-out model from the same metric for the full model. We'll match first by resampling ID, then compute the mean difference.
```{r churn-metrics, fig.width=6, fig.height=5}
roc_values <-
churn_workflows %>%
collect_metrics(summarize = FALSE) %>%
filter(.metric == "roc_auc") %>%
mutate(wflow_id = gsub("_logistic", "", wflow_id))
full_model <-
roc_values %>%
filter(wflow_id == "everything") %>%
select(full_model = .estimate, id)
differences <-
roc_values %>%
filter(wflow_id != "everything") %>%
full_join(full_model, by = "id") %>%
mutate(performance_drop = full_model - .estimate)
summary_stats <-
differences %>%
group_by(wflow_id) %>%
summarize(
std_err = sd(performance_drop)/sum(!is.na(performance_drop)),
performance_drop = mean(performance_drop),
lower = performance_drop - qnorm(0.975) * std_err,
upper = performance_drop + qnorm(0.975) * std_err,
.groups = "drop"
) %>%
mutate(
wflow_id = factor(wflow_id),
wflow_id = reorder(wflow_id, performance_drop)
)
summary_stats %>% filter(lower > 0)
ggplot(summary_stats, aes(x = performance_drop, y = wflow_id)) +
geom_point() +
geom_errorbar(aes(xmin = lower, xmax = upper), width = .25) +
ylab("")
```
From this, we can see that there are a few predictors that, when not included in the model, have a significant effect on the performance metric.
|
/scratch/gouwar.j/cran-all/cranData/workflowsets/vignettes/evaluating-different-predictor-sets.Rmd
|
# custom functions
# all written by Vikram B. Baliga ([email protected]) and Shreeram
# Senthivasan
# last updated: 2019-10-22
############################ trapezoidal integration ###########################
#' Approximate the definite integral via the trapezoidal rule
#'
#' Mostly meant for internal use in our analysis functions, but made available
#' for other use cases. Accordingly, it does not strictly rely on objects of
#' class \code{muscle_stim}.
#'
#' @param x a variable, e.g. vector of positions
#' @param f integrand, e.g. vector of forces
#'
#' @details In the functions \code{analyze_workloop()}, \code{read_analyze_wl()}
#' , and \code{read_analyze_wl_dir()}, work is calculated as the difference
#' between the integral of the upper curve and the integral of the lower curve
#' of a work loop.
#'
#' @return A numerical value indicating the value of the integral.
#'
#' @references Atkinson, Kendall E. (1989), An Introduction to Numerical
#' Analysis (2nd ed.), New York: John Wiley & Sons
#'
#' @author Vikram B. Baliga
#'
#' @seealso
#' \code{\link{analyze_workloop}},
#' \code{\link{read_analyze_wl}},
#' \code{\link{read_analyze_wl_dir}}
#'
#' @examples
#'
#' # create a circle centered at (x = 10, y = 20) with radius 2
#' t <- seq(0, 2 * pi, length = 1000)
#' coords <- t(rbind(10 + sin(t) * 2, 20 + cos(t) * 2))
#'
#'
#' # use the function to get the area
#' trapezoidal_integration(coords[, 1], coords[, 2])
#'
#' # does it match (pi * r^2)?
#' 3.14159265358 * (2^2) # very close
#'
#' @export
trapezoidal_integration <- function(x,
f) {
if (!is.numeric(x)) {
stop("The variable (first argument) is not numeric.")
}
if (!is.numeric(f)) {
stop("The integrand (second argument) is not numeric.")
}
if (length(x) != length(f)) {
stop("The lengths of the variable and the integrand are not equal.")
}
# obtain length of variable of integration and integrand
n <- length(x)
# integrate using the trapezoidal rule
integral <- 0.5 * sum((x[-1] - x[-n]) * (f[-1] + f[-n]))
# return the definite integral
return(integral)
}
########################### work loop data analysis ##########################
#' Analyze work loop object to compute work and power output
#'
#' Compute work and power output from a work loop experiment on a per-cycle
#' basis.
#'
#' @param x A \code{workloop} object of class \code{muscle_stim} that has been
#' passed through
#' \code{select_cycles}. See Details.
#' @param simplify Logical. If \code{FALSE}, the full analyzed workloop
#' object is returned. If \code{TRUE}, a simpler table of net work and power
#' (by cycle) is returned.
#' @param GR Gear ratio, set to 1 by default
#' @param M Velocity multiplier, used to adjust the sign of velocity. This parameter
#' should generally be either -1 (the default) or 1.
#' @param vel_bf Critical frequency (scalar) for low-pass filtering of velocity
#' via \code{signal::butter()}
#' @param ... Additional arguments potentially passed down from
#' \code{read_analyze_wl()} or \code{read_analyze_wl_dir()}
#'
#' @details Please note that \code{select_cycles()} must be run on data prior to
#' using this function. This function relies on the input \code{muscle_stim}
#' object being organized by cycle number.
#'
#' The \code{muscle_stim} object (\code{x}) must be a \code{workloop},
#' preferably read in by one of our data import functions. Please see
#' documentation for \code{as_muscle_stim()} if you need to manually construct
#' a \code{muscle_stim} object from a non .ddf source.
#'
#' The gear ratio (GR) and velocity multiplier (M) parameters can help correct
#' for issues related to the magnitude and sign of data collection. By default,
#' they are set to apply no gear ratio adjustment and to positivize velocity.
#' Instantaneous velocity is often noisy and the \code{vel_bf} parameter allows
#' for low-pass filtering of velocity data. See \code{signal::butter()} and
#' \code{signal::filtfilt()} for details of how filtering is achieved.
#'
#' Please also be careful with units! See the Warning section below.
#'
#' @section Warning:
#' Most systems we have encountered record Position data in millimeters
#' and Force in millinewtons, and therefore this function assumes data are
#' recorded in those units. Through a series of internal conversions, this
#' function computes velocity in meters/sec, work in Joules, and power in
#' Watts. If your raw data do not originate in millimeters and millinewtons,
#' please transform your data accordingly and ignore what you see in the
#' attribute \code{units}.
#'
#' @return
#' The function returns a \code{list} of class \code{analyzed_workloop}
#' that provides instantaneous velocity, a smoothed velocity, and computes work,
#' instantaneous power, and net power from a work loop experiment. All data are
#' organized by the cycle number and important metadata are stored as
#' Attributes.
#'
#' Within the \code{list}, each entry is labeled by cycle and includes:
#' \item{Time}{Time, in sec}
#' \item{Position}{Length change of the muscle, corrected for gear ratio, in mm}
#' \item{Force}{Force, corrected for gear ratio, in mN}
#' \item{Stim}{When stimulation occurs, on a binary scale}
#' \item{Cycle}{Cycle ID, as a letter}
#' \item{Inst_Velocity}{Instantaneous velocity, computed from \code{Position}
#' change, reported in meters/sec}
#' \item{Filt_Velocity}{Instantaneous velocity, after low-pass filtering, again
#' in meters/sec}
#' \item{Inst_Power}{Instantaneous power, a product of \code{Force} and
#' \code{Filt_Velocity}, reported in W}
#' \item{Percent_of_Cycle}{The percent of that particular cycle which has
#' elapsed}
#'
#' In addition, the following information is stored in the
#' \code{analyzed_workloop} object's attributes:
#' \item{stimulus_frequency}{Frequency at which stimulus pulses occurred}
#' \item{cycle_frequency}{Frequency of oscillations (assuming sine wave
#' trajectory)}
#' \item{total_cycles}{Total number of oscillatory cycles (assuming sine wave
#' trajectory) that the muscle experienced.}
#' \item{cycle_def}{Specifies what part of the cycle is understood as the
#' beginning and end. There are currently three options:
#' 'lo' for L0-to-L0;
#' 'p2p' for peak-to-peak; and
#' 't2t' for trough-to-trough}
#' \item{amplitude}{Amplitude of length change (assuming sine wave
#' trajectory)}
#' \item{phase}{Phase of the oscillatory cycle (in percent) at which stimulation
#' occurred. Somewhat experimental, please use with caution}
#' \item{position_inverted}{Logical; whether position inversion has been
#' applied}
#' \item{units}{The units of measurement for each column in the object after
#' running this function. See Warning}
#' \item{sample_frequency}{Frequency at which samples were collected}
#' \item{header}{Additional information from the header}
#' \item{units_table}{Units from each Channel of the original ddf file}
#' \item{protocol_table}{Protocol in tabular format; taken from the original
#' ddf file}
#' \item{stim_table}{Specific info on stimulus protocol; taken from the original
#' ddf file}
#' \item{stimulus_pulses}{Number of sequential pulses within a stimulation
#' train}
#' \item{stimulus_offset}{Timing offset at which stimulus began}
#' \item{gear_ratio}{Gear ratio applied by this function}
#' \item{file_id}{File name}
#' \item{mtime}{Time at which file was last modified}
#' \item{retained_cycles}{Which cycles were retained, as numerics}
#' \item{summary}{Simple table showing work (in J) and net power (in W) for each
#' cycle}
#'
#' @author Vikram B. Baliga and Shreeram Senthivasan
#'
#' @family data analyses
#' @family workloop functions
#'
#' @references Josephson RK. 1985. Mechanical Power output from Striated Muscle
#' during Cyclic Contraction. Journal of Experimental Biology 114: 493-512.
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR
#' wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'),
#' phase_from_peak = TRUE)
#'
#' # select cycles 3 through 5 via the peak-to-peak definition
#' wl_selected <- select_cycles(wl_dat, cycle_def = "p2p", keep_cycles = 3:5)
#'
#' # run the analysis function and get the full object
#' wl_analyzed <- analyze_workloop(wl_selected, GR = 2)
#'
#' # print methods give a short summary
#' print(wl_analyzed)
#'
#' # summary provides a bit more detail
#' summary(wl_analyzed)
#'
#' # run the analysis but get the simplified version
#' wl_analyzed_simple <- analyze_workloop(wl_selected, simplify = TRUE, GR = 2)
#'
#' @seealso
#' \code{\link{read_ddf}},
#' \code{\link{read_analyze_wl}},
#' \code{\link{select_cycles}}
#'
#' @export
#'
analyze_workloop <- function(x,
simplify = FALSE,
GR = 1,
M = -1,
vel_bf = 0.05,
...) {
if (!any(class(x) == "workloop")) {
stop("Input data should be of class `workloop`")
}
if (!any(names(x) == "Cycle")) {
stop("The Cycle column is missing with no default.
\nPlease use select_cycles() to generate this column
or check that the column is named correctly.")
}
if (!is.numeric(GR)) {
stop("Gear ratio (GR) must be numeric")
}
if (!is.numeric(M)) {
stop("Velocity multiplier (M) must be numeric
\nand is recommended to be either -1 or 1.")
}
# transform variables
x <- fix_GR(x, GR)
# first chop up the data by cycle:
cycle_names <- unique(x$Cycle)
x_by_cycle <- lapply(cycle_names, function(cycle) x[x$Cycle == cycle, ])
# create a percent cycle index column
percent_of_cycle <- lapply(x_by_cycle, function(x)
seq(0, 100, 100 / (nrow(x) - 1)))
# work is calculated as the path integral of Force with respect to Position
# (displacement)
# Position and Force are each divided by 1000 to convert mm to meters and mN
# to N prior to taking the integral. This ensures that the integral reports
# work in J. The negative is used to match conventions for work
work <- lapply(x_by_cycle, function(x) {
-trapezoidal_integration(
x$Position / 1000,
x$Force / 1000
)
})
names(work) <- cycle_names
  # velocity is the instantaneous change in length (i.e. position) multiplied
  # by the sampling frequency; the result is divided by 1000 to convert to m/s
  # and multiplied by the velocity multiplier, M
velocity <-
lapply(x_by_cycle, function(x) {
(x$Position - c(
NA,
utils::head(
x$Position,
-1
)
)) * attributes(x)$sample_frequency / 1000 * M
})
# apply a butterworth filter to velocity to smooth it out a bit
buttah <- signal::butter(2, vel_bf)
filt_velocity <-
lapply(velocity, function(v) {
c(NA, signal::filtfilt(buttah, v[-1]))
})
  # instantaneous power is calculated as the product of force and the
  # (filtered) instantaneous velocity; the result is divided by 1000 to
  # convert mW to W
instant_power <- mapply(function(x, v) x$Force * v / 1000,
x_by_cycle,
filt_velocity,
SIMPLIFY = FALSE
)
# net power is simply the mean of all instantaneous power
net_power <- lapply(instant_power, mean, na.rm = TRUE)
# Early escape for simplified output
summary_table <- data.frame(
Cycle = paste0(toupper(cycle_names)),
Work = unlist(work),
Net_Power = unlist(net_power)
)
if (simplify) {
return(summary_table)
}
# combine everything into one useful object
result <- mapply(
function(x, v, filt_v, w, inst_p, net_p, perc) {
x$Inst_Velocity <- v
x$Filt_Velocity <- filt_v
x$Inst_Power <- inst_p
x$Percent_of_Cycle <- perc
attr(x, "work") <- w
attr(x, "net_power") <- net_p
if (!all(is.na(attr(x, "units")))) {
attr(x, "units") <- c(attr(x, "units"), "m/s", "m/s", "W")
}
return(x)
},
x_by_cycle,
velocity,
filt_velocity,
work,
instant_power,
net_power,
percent_of_cycle,
SIMPLIFY = FALSE
)
attr(x, "row.names") <- attr(x, "names") <- NULL
attributes(result) <- attributes(x)
attr(result, "summary") <- summary_table
class(result) <- c("analyzed_workloop", "list")
return(stats::setNames(result, paste0("cycle_", cycle_names)))
}
################################# time correct #################################
#' Time correction for work loop experiments
#'
#' Correct for potential degradation of muscle over time.
#'
#' @param x A \code{data.frame} with summary data, e.g. an object created by
#' \code{summarize_wl_trials()}.
#'
#' @details This function assumes that across a batch of successive trials, the
#' stimulation parameters for the first and final trials are identical. If not,
#' DO NOT USE. Decline in power output is therefore assumed to be a linear
#' function of time. Accordingly, the difference between the final and first
#' trial's (absolute) power output is used to 'correct' trials that occur in
#' between, with explicit consideration of run order and time elapsed (via
#' mtime). A similar correction procedure is applied to work.
#'
#' @return A \code{data.frame} that additionally contains:
#' \item{Time_Corrected_Work }{Time corrected work output, transformed from
#' \code{$Mean_Work}}
#' \item{Time_Corrected_Power }{Time corrected net power output, transformed
#' from \code{$Mean_Power}}
#'
#' And new attributes:
#' \item{power_difference }{Difference in mass-specific net power output
#' between the final and first trials.}
#' \item{time_difference }{Difference in mtime between the final and first
#' trials.}
#' \item{time_correction_rate }{Overall rate; \code{power_difference} divided
#' by \code{time_difference}.}
#'
#' @author Vikram B. Baliga and Shreeram Senthivasan
#'
#' @family workloop functions
#' @family batch analyses
#'
#' @examples
#'
#' library(workloopR)
#'
#' # batch read and analyze files included with workloopR
#' analyzed_wls <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
#' package = 'workloopR'),
#' phase_from_peak = TRUE,
#' cycle_def = "p2p", keep_cycles = 2:4)
#'
#' # now summarize
#' summarized_wls <- summarize_wl_trials(analyzed_wls)
#'
#'
#' # mtimes within the package are not accurate, so we'll supply
#' # our own vector of mtimes
#' summarized_wls$mtime <- read.csv(
#' system.file(
#' "extdata/wl_duration_trials/ddfmtimes.csv",
#' package="workloopR"))$mtime
#'
#' # now time correct
#' timecor_wls <- time_correct(summarized_wls)
#' timecor_wls
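#'
#' # the correction is linear in mtime; the rate that was applied is stored as
#' # an attribute and can be inspected directly
#' attr(timecor_wls, "time_correction_rate")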
#'
#'
#' @seealso
#' \code{\link{summarize_wl_trials}}
#'
#' @export
time_correct <- function(x) {
if (class(x)[[1]] != "data.frame") {
stop("Please provide a data.frame of summarized workloop trial data
generated by summarize_wl_trials()")
}
if (!all(c("Mean_Work", "Mean_Power", "mtime") %in% names(x))) {
stop("Please provide summarized workloop trial data
generated by summarize_wl_trials()")
}
x$Time_Corrected_Work <-
x$Mean_Work - (utils::tail(x$Mean_Work, 1) - utils::head(x$Mean_Work, 1)) /
(utils::tail(x$mtime, 1) - utils::head(x$mtime, 1)) * (x$mtime -
utils::head(x$mtime, 1))
x$Time_Corrected_Power <-
x$Mean_Power - (utils::tail(x$Mean_Power, 1) - utils::head(x$Mean_Power,
1)) /
(utils::tail(x$mtime, 1) - utils::head(x$mtime, 1)) * (x$mtime -
utils::head(x$mtime, 1))
attr(x, "power_difference") <-
utils::tail(x$Mean_Power, 1) - utils::head(x$Mean_Power, 1)
attr(x, "time_difference") <-
utils::tail(x$mtime, 1) - utils::head(x$mtime, 1)
attr(x, "time_correction_rate") <-
attr(x, "power_difference") / attr(x, "time_difference")
return(x)
}
############################## isometric timing ################################
#' Compute timing and magnitude of force in isometric trials
#'
#' Calculate timing and magnitude of force at stimulation, peak force, and
#' various parts of the rising (force development) and relaxation (falling)
#' phases of the twitch.
#'
#' @param x A \code{muscle_stim} object that contains data from an isometric
#' twitch trial, ideally created via \code{read_ddf}.
#' @param rising Set points of the rising phase to be described.
#' By default: 10\% and 90\%.
#' @param relaxing Set points of the relaxation phase to be described.
#' By default: 90\% and 50\%.
#'
#' @details The \code{data.frame} (x) must have time series data organized in
#' columns. Generally, it is preferred that you use a \code{muscle_stim} object
#' imported by \code{read_ddf()}.
#'
#' The \code{rising} and \code{relaxing} arguments allow for the user to supply
#' numeric vectors of any length. By default, these arguments are
#' \code{rising = c(10, 90)} and \code{relaxing = c(90, 50)}. Numbers in each
#' of these correspond to percent values and capture time and force at that
#' percent of the corresponding curve. These values can be replaced by those
#' that the user specifies and do not necessarily need to have length = 2. But
#' please note that 0 and 100 should not be used, e.g.
#' \code{rising = seq(10, 90, 5)} works, but \code{rising = seq(0, 100, 5)}
#' does not.
#'
#' @return A \code{data.frame} with the following metrics as columns:
#' \item{file_ID }{File ID}
#' \item{time_stim}{Time between beginning of data collection and when
#' stimulation occurs}
#' \item{force_stim}{Magnitude of force at the onset of stimulation}
#' \item{time_peak}{Absolute time of peak force, i.e. time between beginning of
#' data collection and when peak force occurs}
#' \item{force_peak}{Magnitude of peak force}
#' \item{time_rising_X}{Time between beginning of data collection and X\% of
#' force development}
#' \item{force_rising_X}{Magnitude of force at X\% of force development}
#' \item{time_relaxing_X}{Time between beginning of data collection and X\% of
#' force relaxation}
#' \item{force_relaxing_X}{Magnitude of force at X\% of relaxation}
#'
#'
#' @references Ahn AN, and Full RJ. 2002. A motor and a brake: two leg extensor
#' muscles acting at the same joint manage energy differently in a running
#' insect. Journal of Experimental Biology 205, 379-389.
#'
#' @author Vikram B. Baliga
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the twitch.ddf file included in workloopR
#' twitch_dat <-read_ddf(system.file("extdata", "twitch.ddf",
#' package = 'workloopR'))
#'
#' # run isometric_timing() to get info on twitch kinetics
#' # we'll use different set points than the defaults
#' analyze_twitch <- isometric_timing(twitch_dat,
#' rising = c(25, 50, 75),
#' relaxing = c(75, 50, 25)
#' )
#'
#' # see the results
#' analyze_twitch
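#'
#' # a commonly derived quantity (not itself a column of the output) is the
#' # time from stimulus onset to peak force
#' analyze_twitch$time_peak - analyze_twitch$time_stim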
#'
#' @family data analyses
#' @family twitch functions
#'
#' @export
isometric_timing <- function(x,
rising = c(10, 90),
relaxing = c(90, 50)) {
# check input data
if (!("isometric" %in% class(x))) {
stop("Please ensure that your data is from an isometric experiment!")
}
if ("tetanus" %in% class(x)) {
relaxing <- c()
}
# check that set points are numeric between 0 and 100
if (any(!is.numeric(rising) | rising < 0 | rising > 100)) {
stop("Please ensure that all rising set points are numeric values
between 0 and 100.")
}
if (any(!is.numeric(relaxing) | relaxing < 0 | relaxing > 100)) {
stop("Please ensure that all relaxing set points are numeric values
between 0 and 100.")
}
  # convert percents to proportions for easier math
rising <- rising / 100
relaxing <- relaxing / 100
# find position of peak force and stimulus in dataset
stim_row <- which.max(x$Stim)
pf_row <- which.max(x$Force)
# get force and timing for peak force and stim
main_results <- data.frame(
"file_id" = attr(x, "file_id"),
"time_stim" = x$Time[stim_row],
"force_stim" = x$Force[stim_row],
"time_peak" = x$Time[pf_row],
"force_peak" = x$Force[pf_row],
stringsAsFactors = FALSE
)
# calculate absolute force at optional set points
rising_forces <-
rising * (x$Force[pf_row] - x$Force[stim_row]) + x$Force[stim_row]
relaxing_forces <-
relaxing * (x$Force[pf_row] - x$Force[stim_row]) + x$Force[stim_row]
# calculate corresponding position in dataset
rising_row <-
lapply(rising_forces, function(i) {
utils::head(which(x$Force > i), 1)
})
relaxing_row <-
lapply(relaxing_forces, function(i) {
utils::tail(which(x$Force > i), 1)
})
# extract time and force at these positions, bind together into a vector
set_point_results <-
c(
unlist(lapply(rising_row, function(i) c(x$Time[i], x$Force[i]))),
unlist(lapply(relaxing_row, function(i) c(x$Time[i], x$Force[i])))
)
# add names and convert to data.frame
names(set_point_results) <-
c(
unlist(lapply(rising * 100, function(i) {
c(
paste0("time_rising_", i),
paste0("force_rising_", i)
)
})),
unlist(lapply(relaxing * 100, function(i) {
c(
paste0("time_relaxing_", i),
paste0("force_relaxing_", i)
)
}))
)
set_point_results <- data.frame(as.list(set_point_results))
  # return both sets of results as a single data.frame
return(cbind(main_results, set_point_results))
}
###################### work loop reading and data extraction ###################
#' All-in-one import function for work loop files
#'
#' \code{read_analyze_wl()} is an all-in-one function to read in a work loop
#' file, select cycles, and compute work and power output.
#'
#' @param file_name A .ddf file that contains data from a
#' single workloop experiment
#' @param ... Additional arguments to be passed to \code{read_ddf()},
#' \code{select_cycles()},
#' or \code{analyze_workloop()}.
#'
#' @details Please be careful with units! See the Warning section below. This function
#' combines \code{read_ddf()} with \code{select_cycles()} and then ultimately
#' \code{analyze_workloop()} into one handy function.
#'
#' As detailed in these three functions, possible arguments include: \cr
#' \code{cycle_def} - used to specify which part of the cycle is understood as
#' the beginning and end. There are currently three options: 'lo' for L0-to-L0;
#' 'p2p' for peak-to-peak; and 't2t' for trough-to-trough \cr
#' \code{bworth_order} - Filter order for low-pass filtering of \code{Position}
#' via \code{signal::butter} prior to finding peak lengths. Default: 2. \cr
#' \code{bworth_freq} - Critical frequency (scalar) for low-pass filtering of
#' \code{Position} via \code{signal::butter} prior to finding peak lengths.
#' Default: 0.05. \cr
#' \code{keep_cycles} - Which cycles should be retained. Default: 4:6. \cr
#' \code{GR} - Gear ratio. Default: 1. \cr
#' \code{M} - Velocity multiplier used to positivize velocity; should be either
#' -1 or 1. Default: -1. \cr
#' \code{vel_bf} - Critical frequency (scalar) for low-pass filtering of
#' velocity via \code{signal::butter}. Default: 0.05. \cr
#'
#' The gear ratio (GR) and velocity multiplier (M) parameters can help correct
#' for issues related to the magnitude and sign of data collection. By
#' default, they are set to apply no gear ratio adjustment and to positivize
#' velocity. Instantaneous velocity is often noisy and the \code{vel_bf}
#' parameter allows for low-pass filtering of velocity data. See
#' \code{signal::butter()} and \code{signal::filtfilt()} for details of how
#' filtering is achieved.
#'
#' @inherit analyze_workloop return
#' @inheritSection analyze_workloop Warning
#'
#' @references Josephson RK. 1985. Mechanical Power output from Striated Muscle
#' during Cyclic Contraction. Journal of Experimental Biology 114: 493-512.
#'
#' @author Vikram B. Baliga
#'
#' @family data analyses
#' @family data import functions
#' @family workloop functions
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR and analyze with
#' # a gear ratio correction of 2 and cycle definition of peak-to-peak
#' wl_dat <- read_analyze_wl(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'),
#' phase_from_peak = TRUE,
#' GR = 2, cycle_def = "p2p")
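#'
#' # further arguments listed in Details (e.g. keep_cycles, M, vel_bf) can be
#' # supplied in the same call; the per-cycle summary of the returned object
#' # is stored as an attribute
#' attr(wl_dat, "summary")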
#'
#'
#' @seealso
#' \code{\link{read_ddf}},
#' \code{\link{select_cycles}}
#' \code{\link{analyze_workloop}}
#'
#' @export
read_analyze_wl <- function(file_name,
...) {
valid_args <- c(
"file_id", "rename_cols", "skip_cols",
"phase_from_peak", "cycle_def", "keep_cycles",
"bworth_order", "bworth_freq",
"simplify", "GR", "M", "vel_bf"
)
arg_names <- names(list(...))
if (!all(arg_names %in% valid_args)) {
    warning("One or more provided attributes do not match known attributes.
            \nThese attributes will not be assigned.")
}
fulldata <- read_ddf(file_name, ...)
if (!("workloop" %in% class(fulldata))) {
stop(paste0("The provided file ", file_name, "
does not appear to contain data from a workloop experiment!"))
}
return(analyze_workloop(select_cycles(fulldata, ...), ...))
}
/scratch/gouwar.j/cran-all/cranData/workloopR/R/analysis_functions.R
# custom functions
# all written by Vikram B. Baliga ([email protected]) and Shreeram
# Senthivasan
# last updated: 2019-10-22
#' Import a batch of work loop or isometric data files from a directory
#'
#' Uses \code{read_ddf()} to read in workloop, twitch, or tetanus experiment
#' data from multiple .ddf files.
#'
#' @param file_path Path where files are stored. Should be in the same folder.
#' @param pattern Regex pattern for identifying relevant files in the file_path.
#' @param sort_by Metadata by which files should be sorted to be in the correct
#' run order. Defaults to \code{mtime}, which is time of last modification of
#' files.
#' @param ... Additional arguments to be passed to \code{read_ddf()}.
#'
#' @inherit read_ddf details
#'
#' @return A list of objects of class \code{workloop}, \code{twitch}, or
#' \code{tetanus}, all of which inherit class \code{muscle_stim}. These objects
#' behave like \code{data.frames} in most situations but also store metadata
#' from the ddf as attributes.
#'
#' Each \code{muscle_stim} object's columns contain:
#' \item{Time}{Time}
#' \item{Position}{Length change of the muscle, uncorrected for gear ratio}
#' \item{Force}{Force, uncorrected for gear ratio}
#' \item{Stim}{When stimulation occurs, on a binary scale}
#'
#' In addition, the following information is stored in each \code{data.frame}'s
#' attributes:
#' \item{sample_frequency}{Frequency at which samples were collected}
#' \item{pulses}{Number of sequential pulses within a stimulation train}
#' \item{total_cycles_lo}{Total number of oscillatory cycles (assuming sine
#' wave trajectory) that the muscle experienced. Cycles are defined with respect
#' to initial muscle length (L0-to-L0 as opposed to peak-to-peak).}
#' \item{amplitude}{amplitude of length change (again, assuming sine wave
#' trajectory)}
#' \item{cycle_frequency}{Frequency of oscillations (again, assuming sine wave
#' trajectory)}
#' \item{units}{The units of measurement for each column in the
#' \code{data.frame}. This might be the most important attribute so please check
#' that it makes sense!}
#'
#' @author Vikram B. Baliga and Shreeram Senthivasan
#'
#'
#' @family data import functions
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import a set of work loop .ddf files included in workloopR
#' workloop_dat <-read_ddf_dir(system.file("extdata/wl_duration_trials",
#' package = 'workloopR'))
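#'
#' # the result is a list of muscle_stim objects ordered by mtime; metadata
#' # for individual trials can be checked via attributes, e.g. the first file
#' attr(workloop_dat[[1]], "file_id")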
#'
#' @export
read_ddf_dir <- function(file_path,
pattern = "*.ddf",
sort_by = "mtime",
...) {
# Generate list of file_names
file_name_list <- list.files(path = file_path,
pattern = pattern,
full.names = TRUE)
if (length(file_name_list) == 0) {
stop("No files matching the pattern found at the given directory!")
}
# Generate list of muscle_stim objects
ms_list <- lapply(file_name_list, function(i) read_ddf(i, ...))
# Sort list, likely by modification time
if (is.null(attr(ms_list[[1]], sort_by))) {
warning("The provided sort_by argument is not a valid attribute.
\nDefaulting to `mtime`.")
sort_by <- "mtime"
}
ms_list <- ms_list[order(unlist(lapply(ms_list, function(i)
attr(i, sort_by))))]
return(ms_list)
}
###################### file info for sequence of work loops ####################
#' Get file info for a sequence of experiment files
#'
#' Grab metadata from files stored in the same folder (e.g. a sequence of trials
#' in an experiment).
#'
#' @param file_path Path where files are stored. Should be in the same folder.
#' @param pattern Regex pattern for identifying relevant files in the file_path.
#'
#' @details If several files (e.g. successive trials from one experiment) are
#' stored in one folder, use this function to obtain metadata in a list
#' format. Runs \code{file.info()} from base R to extract info from files.
#'
#' This function is not truly considered to be part of the batch analysis
#' pipeline;
#' see \code{read_analyze_wl_dir()} for a similar function that not
#' only grabs metadata but also imports & analyzes files. Instead,
#' \code{get_wl_metadata()} is meant to be a handy function to investigate
#' metadata issues that arise if running \code{read_analyze_wl_dir()} goes awry.
#'
#' Unlike \code{read_analyze_wl_dir()}, this function does not necessarily need
#' files to all be work loops. Any file type is welcome (as long as the Regex
#' \code{pattern} argument makes sense).
#'
#' @return A \code{data.frame} with the information supplied by
#' \code{file.info()} for every file matching \code{pattern}, ordered by
#' \code{mtime} (i.e. run order), with full file paths stored in an additional
#' \code{exp_names} column.
#'
#' @family data import functions
#' @family workloop functions
#' @family batch analyses
#'
#' @seealso
#' \code{\link{summarize_wl_trials}}
#'
#' @author Vikram B. Baliga
#'
#' @examples
#'
#' library(workloopR)
#'
#' # get file info for files included with workloopR
#' wl_meta <- get_wl_metadata(system.file("extdata/wl_duration_trials",
#' package = 'workloopR'))
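#'
#' # files are ordered by mtime, which is taken to reflect run order; the full
#' # file paths are kept in the exp_names column
#' wl_meta$exp_names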
#'
#' @export
get_wl_metadata <- function(file_path,
pattern = "*.ddf") {
exp_list <- file.info(list.files(
path = file_path, pattern = pattern,
full.names = TRUE, recursive = TRUE
))
exp_list$exp_names <- rownames(exp_list)
# re-order by run order, using time stamps
exp_list <- exp_list[with(exp_list, order(as.POSIXct(mtime))), ]
return(exp_list)
}
###################### read and analyze sequence of work loops #################
#' Read and analyze work loop files from a directory
#'
#' All-in-one function to import multiple workloop .ddf files from a directory,
#' sort them by mtime, analyze them, and store the resulting objects in an
#' ordered list.
#'
#' @param file_path Directory in which files are located
#' @param pattern Regular expression used to specify files of interest. Defaults
#' to all .ddf files within file_path
#' @param sort_by Metadata by which files should be sorted to be in the correct
#' run order. Defaults to \code{mtime}, which is time of last modification of
#' files.
#' @param ... Additional arguments to be passed to \code{read_analyze_wl()},
#' \code{analyze_workloop()}, \code{select_cycles()}, or \code{read_ddf()}.
#'
#' @details Work loop data files will be imported and then arranged in the order
#' in which they were run (assuming run order is reflected in \code{mtime}).
#' Chiefly used in conjunction with \code{summarize_wl_trials()} and
#' \code{time_correct()} if time correction is desired.
#'
#' @return
#' A list containing \code{analyzed_workloop} objects, one for each file that is
#' imported and subsequently analyzed. The list is sorted according to the
#' \code{sort_by} parameter, which by default uses the time of last modification
#' of each file's contents (mtime).
#'
#' @inheritSection analyze_workloop Warning
#'
#' @references Josephson RK. 1985. Mechanical Power output from Striated Muscle
#' during Cyclic Contraction. Journal of Experimental Biology 114: 493-512.
#'
#' @seealso
#' \code{\link{read_analyze_wl}},
#' \code{\link{get_wl_metadata}},
#' \code{\link{summarize_wl_trials}},
#' \code{\link{time_correct}}
#'
#' @author Shreeram Senthivasan
#'
#' @family data analyses
#' @family data import functions
#' @family workloop functions
#' @family batch analyses
#'
#' @examples
#'
#' library(workloopR)
#'
#' # batch read and analyze files included with workloopR
#' analyzed_wls <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
#' package = 'workloopR'),
#' phase_from_peak = TRUE,
#' cycle_def = "p2p", keep_cycles = 2:4)
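#'
#' # each element of the returned list is an analyzed_workloop object, in run
#' # order; e.g. the per-cycle summary of the first trial
#' attr(analyzed_wls[[1]], "summary")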
#'
#' @export
read_analyze_wl_dir <- function(file_path,
pattern = "*.ddf",
sort_by = "mtime",
...) {
# Generate list of file_names
file_name_list <- list.files(path = file_path,
pattern = pattern,
full.names = TRUE)
if (length(file_name_list) == 0) {
stop("No files matching the pattern found at the given directory!")
}
# Generate list of analyzed workloop objects
wl_list <- lapply(file_name_list, function(i) read_analyze_wl(i, ...))
# Sort list, likely by modification time
if (is.null(attr(wl_list[[1]], sort_by))) {
warning("The provided sort_by argument is not a valid attribute.
\nDefaulting to `mtime`.")
sort_by <- "mtime"
}
return(wl_list <- wl_list[order(unlist(lapply(wl_list, function(i)
attr(i, sort_by))))])
}
######################### summarize sequence of work loops #####################
#' Summarize work loop files
#'
#' Summarize important info from work loop files stored in the same folder
#' (e.g. a sequence of trials in an experiment) including experimental
#' parameters, run order, and \code{mtime}.
#'
#' @param wl_list List of \code{analyzed_workloop} objects, preferably one
#' created by \code{read_analyze_wl_dir()}.
#'
#' @details If several files (e.g. successive trials from one experiment) are
#' stored in one folder, use this function to obtain summary stats and
#' metadata and other parameters. This function requires a list of
#' \code{analyze_workloop} objects, which can be readily obtained by first
#' running \code{read_analyze_wl_dir()} on a specified directory.
#'
#' @return
#' A \code{data.frame} of information about the collection of workloop files.
#' Columns include:
#' \item{File_ID }{Name of the file}
#' \item{Cycle_Frequency }{Frequency of Position change}
#' \item{Amplitude }{amplitude of Position change}
#' \item{Phase }{Phase of the oscillatory cycle (in percent) at which
#' stimulation occurred. Somewhat experimental, please use with caution}
#' \item{Stimulus_Pulses }{Number of stimulation pulses}
#' \item{Stimulus_Frequency }{Frequency of pulses within a stimulation train}
#' \item{mtime }{Time at which file's contents were last changed (\code{mtime})}
#' \item{Mean_Work }{Mean work output from the selected cycles}
#' \item{Mean_Power }{Net power output from the selected cycles}
#'
#' @references Josephson RK. 1985. Mechanical Power output from Striated Muscle
#' during Cyclic Contraction. Journal of Experimental Biology 114: 493-512.
#'
#' @seealso
#' \code{\link{read_analyze_wl_dir}},
#' \code{\link{get_wl_metadata}},
#' \code{\link{time_correct}}
#'
#' @author Vikram B. Baliga and Shreeram Senthivasan
#'
#' @family workloop functions
#' @family batch analyses
#'
#' @examples
#'
#' library(workloopR)
#'
#' # batch read and analyze files included with workloopR
#' analyzed_wls <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
#' package = 'workloopR'),
#' phase_from_peak = TRUE,
#' cycle_def = "p2p",
#' keep_cycles = 2:4
#' )
#'
#' # now summarize
#' summarized_wls <- summarize_wl_trials(analyzed_wls)
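#'
#' # each row is one trial, in run order; the work and power columns are the
#' # per-trial means of the cycle-level summaries
#' summarized_wls[, c("File_ID", "Mean_Work", "Mean_Power")]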
#'
#' @export
summarize_wl_trials <- function(wl_list) {
if (class(wl_list)[[1]] != "list") {
stop("Please provide a list of analyzed workloop objects")
}
if (!all(unlist(lapply(wl_list,
function(x) "analyzed_workloop" %in% class(x))))) {
stop("The provided list includes elements that are
not analyzed workloop objects")
}
summarized <- data.frame(
File_ID = vapply(wl_list, function(i) attr(i, "file_id"),
character(1)),
Cycle_Frequency = vapply(wl_list, function(i) attr(i, "cycle_frequency"),
numeric(1)),
Amplitude = vapply(wl_list, function(i) attr(i, "amplitude"),
numeric(1)),
Phase = vapply(wl_list, function(i) attr(i, "phase"),
numeric(1)),
Stimulus_Pulses = vapply(wl_list, function(i) attr(i, "stimulus_pulses"),
numeric(1)),
Stimulus_Frequency = vapply(wl_list, function(i) {
attr(i, "stimulus_frequency")
}, numeric(1)),
mtime = vapply(wl_list, function(i) attr(i, "mtime"),
numeric(1)),
Mean_Work = vapply(wl_list, function(i) mean(attr(i, "summary")$Work),
numeric(1)),
Mean_Power = vapply(wl_list, function(i) mean(attr(i, "summary")$Net_Power),
numeric(1))
)
return(summarized)
}
/scratch/gouwar.j/cran-all/cranData/workloopR/R/batch_analysis_functions.R
# custom functions
# all written by Vikram B. Baliga ([email protected]) and Shreeram
# Senthivasan
# last updated: 2019-10-22
######################### read non-ddf work loop files #########################
#' Create your own muscle_stim object
#'
#' For use when data are not stored in .ddf format and you would like
#' to create a \code{muscle_stim} object that can be used by other workloopR
#' functions.
#'
#' @param x A \code{data.frame}. See Details for how it should be organized.
#' @param type Experiment type; must be one of: "workloop", "tetanus", or
#' "twitch."
#' @param sample_frequency Numeric value of the frequency at which samples were
#' recorded; must be in Hz. Please format as numeric, e.g. \code{10000} works
#' but \code{10000 Hz} does not
#' @param ... Additional arguments that can be passed in as attributes. See
#' Details.
#'
#' @details \code{muscle_stim} objects, which are required by (nearly) all
#' workloopR functions, are automatically created via \code{read_ddf()}. Should
#' you have data that are stored in a format other than .ddf, use this function
#' to create your own object of class \code{muscle_stim}.
#'
#' The input \code{x} must be a \code{data.frame} that contains time series
#' of numeric data collected from an experiment. Each row must correspond to a
#' sample, and these columns (exact title matches) must be included: \cr
#' "Time" - time, recorded in seconds \cr
#' "Position" - instantaneous position of the muscle,
#' preferably in millimeters \cr
#' "Force" - force, preferably in millinewtons \cr
#' "Stim" - whether stimulation has occurred. All entries must be either 0 (no
#' stimulus) or 1 (stimulus occurrence).
#'
#' Additional arguments can be provided via \code{...}. For all experiment
#' types, the following attributes are appropriate: \cr
#' "units","header", "units_table",
#' "protocol_table", "stim_table",
#' "stimulus_pulses", "stimulus_offset",
#' "stimulus_width", "gear_ratio",
#' "file_id", or "mtime".
#'
#' Please ensure that further attributes are appropriate to your experiment
#' type.
#'
#' For workloops, these include:
#' "stimulus_frequency", "cycle_frequency",
#' "total_cycles", "cycle_def",
#' "amplitude", "phase",
#' and "position_inverted"
#'
#' For twitches or tetanic trials:
#' "stimulus_frequency", and "stimulus_length"
#'
#'
#' @inherit read_ddf return
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR
#' wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'))
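#'
#' # a minimal sketch of constructing an object by hand; the values below are
#' # purely illustrative and not from a real experiment
#' df <- data.frame(Position = sin(seq(0, 2 * pi, length.out = 1001)),
#'                  Force = rep(1, 1001),
#'                  Stim = c(rep(0, 500), 1, rep(0, 500)))
#' my_wl <- as_muscle_stim(df, type = "workloop",
#'                         sample_frequency = 1000,
#'                         file_id = "illustrative_trial")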
#'
#'
#' @author Shreeram Senthivasan
#'
#' @family data import functions
#'
#' @seealso
#' \code{\link{read_ddf}}
#'
#' @export
as_muscle_stim <- function(x,
type,
sample_frequency,
...) {
# Check for missing information
if (missing(type)) stop("Please specify the experiment type!
\nThe type argument should be one of:
\nworkloop, tetanus, or twitch.")
if (!(type %in% c("workloop", "tetanus", "twitch")) | length(type) != 1) {
stop("Invalid experiment type!
\nThe type argument should be one of:
\nworkloop, tetanus, or twitch.")
}
if (!all(c("Position", "Force", "Stim") %in% names(x))) {
stop("Couldn't find one or more of the following necessary columns:
\nPosition, Force, Stim.
\nPlease ensure that the columns match the naming conventions.")
}
if (missing(sample_frequency) & !("Time" %in% names(x))) {
stop("Insufficient information to infer the sampling frequency.
\nPlease provide a value for the sample_frequency argument
\nor include a column named `Time` in the dataframe.")
}
# Consolidate time / sample frequency information
if (!missing(sample_frequency)) {
x$Time <- (seq_len(nrow(x)) - 1) / sample_frequency
} else {
sample_frequency <- 1 / (x$Time[2] - x$Time[1])
}
# Generate a list of acceptable attributes given experiment type
valid_args <-
c(
"units",
"header",
"units_table",
"protocol_table",
"stim_table",
"stimulus_pulses",
"stimulus_offset",
"stimulus_width",
"gear_ratio",
"file_id",
"mtime"
)
switch(
type,
"workloop" = valid_args <-
c(
valid_args,
"stimulus_frequency",
"cycle_frequency",
"total_cycles",
"cycle_def",
"amplitude",
"phase",
"position_inverted"
),
"tetanus" = valid_args <-
c(valid_args, "stimulus_frequency", "stimulus_length")
)
# Check for invalid attributes and assign valids
args <- list(...)
if (!all(names(args) %in% valid_args)) {
warning("One or more provided attributes do not match known attributes.
\nThese attributes will not be assigned.")
}
for (i in intersect(names(args), valid_args)) {
attr(x, i) <- args[[i]]
}
for (i in setdiff(valid_args, names(args))) {
attr(x, i) <- NA
}
attr(x, "sample_frequency") <- sample_frequency
if (is.na(attr(x, "gear_ratio"))) attr(x, "gear_ratio") <- 1
if (type == "workloop") {
if (is.na(attr(x, "position_inverted")))
attr(x, "position_inverted") <- FALSE
}
# Assign classes and return
class(x) <- c("muscle_stim", "data.frame")
switch(type,
"workloop" = class(x) <- c("workloop", class(x)),
"tetanus" = class(x) <- c("tetanus", "isometric", class(x)),
"twitch" = class(x) <- c("twitch", "isometric", class(x))
)
return(x)
}
########################## read_ddf files - work loops #########################
#' Import work loop or isometric data from .ddf files
#'
#' \code{read_ddf} reads in workloop, twitch, or tetanus experiment data from
#' .ddf files.
#'
#' @param file_name A .ddf file that contains data from a single workloop,
#' twitch, or tetanus experiment
#' @param file_id A string identifying the experiment. The file name is used by
#' default.
#' @param rename_cols List consisting of a vector of indices of columns to
#' rename and a vector of new column names. See Details.
#' @param skip_cols Numeric vector of column indices to skip. See Details.
#' @param phase_from_peak Logical, indicating whether percent phase of
#' stimulation should be recorded relative to peak length or relative to L0
#' (default)
#' @param ... Additional arguments passed to/from other functions that work
#' with \code{read_ddf()}
#'
#' @details Read in a .ddf file that contains data from an experiment. If
#' position and force do not correspond to columns 2 and 3 (respectively),
#' replace "2" and "3" within \code{rename_cols} accordingly. Similarly,
#' \code{skip_cols = 4:11} should be adjusted if more than 11 columns are
#' present and/or columns 4:11 contain important data.
#'
#' Please note that there is no correction for gear ratio or further
#' manipulation of data. See \code{fix_GR} to adjust gear ratio. Gear ratio can
#' also be adjusted prior to analyses within the \code{analyze_workloop()}
#' function, the data import all-in-one function \code{read_analyze_wl()}, or
#' the batch analysis all-in-one \code{read_analyze_wl_dir()}.
#'
#' Please also note that organization of data within the .ddf file is assumed to
#' conform to that used by Aurora Scientific's Dynamic Muscle Control and
#' Analysis Software. YMMV if using a .ddf file from another source. The
#' \code{as_muscle_stim()} function can be used to generate \code{muscle_stim}
#' objects if data are imported via another function. Please feel free to
#' contact us with any issues or requests.
#'
#'
#' @return An object of class \code{workloop}, \code{twitch}, or \code{tetanus},
#' all of which inherit class \code{muscle_stim}. These objects behave like
#' \code{data.frames} in most situations but also store metadata from the ddf
#' as attributes.
#'
#' The \code{muscle_stim} object's columns contain:
#' \item{Time}{Time}
#' \item{Position}{Length change of the muscle, uncorrected for gear ratio}
#' \item{Force}{Force, uncorrected for gear ratio}
#' \item{Stim}{When stimulation occurs, on a binary scale}
#'
#' In addition, the following information is stored in the \code{data.frame}'s
#' attributes:
#' \item{sample_frequency}{Frequency at which samples were collected}
#' \item{pulses}{Number of sequential pulses within a stimulation train}
#' \item{total_cycles_lo}{Total number of oscillatory cycles (assuming sine
#' wave trajectory) that the muscle experienced. Cycles are defined with respect
#' to initial muscle length (L0-to-L0 as opposed to peak-to-peak).}
#' \item{amplitude}{amplitude of length change (again, assuming sine wave
#' trajectory)}
#' \item{cycle_frequency}{Frequency of oscillations (again, assuming sine wave
#' trajectory)}
#' \item{units}{The units of measurement for each column in the
#' \code{data.frame}. This might be the most important attribute so please check
#' that it makes sense!}
#'
#' @author Vikram B. Baliga and Shreeram Senthivasan
#'
#'
#' @family data import functions
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR
#' wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'),
#' phase_from_peak = TRUE)
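#'
#' # a custom identifier can be attached at import; it is stored in the
#' # file_id attribute of the returned object
#' wl_dat_id <- read_ddf(system.file("extdata", "workloop.ddf",
#'                                   package = 'workloopR'),
#'                       file_id = "trial_01",
#'                       phase_from_peak = TRUE)
#' attr(wl_dat_id, "file_id")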
#'
#'
#' @export
read_ddf <-
function(file_name,
file_id = NA,
rename_cols = list(c(2, 3), c("Position", "Force")),
skip_cols = 4:11,
phase_from_peak = FALSE,
...) {
# Import and checks
if (missing(file_name)) stop("A file_name is required")
if (!file.exists(file_name)) stop(paste0("File ", file_name, " not found!"))
f <- file(file_name, "r")
if (!grepl("DMC.*Data File", readLines(f, 1))) {
close(f)
stop("The input file does not appear to be a DMC Datafile (ddf)")
}
if (is.na(file_id)) file_id <- basename(file_name)
# get metadata
mtime <- file.info(file_name)$mtime
# Setup for reading in file
header <- c()
units_table <- c()
protocol_table <- c()
# Read in Header
while (!grepl("Calibration Data", (l <- readLines(f, 1)))) {
header <- c(header, l)
}
sample_frequency <- as.numeric(sub(".*: ", "", header[1]))
# Read in Calibration Table
while (!grepl("Comments", (l <- readLines(f, 1)))) {
units_table <- c(units_table, l)
}
units_table <- t(utils::read.table(
text = units_table,
row.names = 1,
sep = "\t",
stringsAsFactors = FALSE
))
rownames(units_table) <- c()
colnames(units_table) <- sub(" .*", "", colnames(units_table))
units_table <- data.frame(units_table, stringsAsFactors = FALSE)
units_table[3:5] <- lapply(units_table[3:5], as.numeric)
units <- c("s", units_table$Units[-skip_cols + 1], "TTL")
if (!all(units %in% c("s", "mm", "mN", "TTL"))) {
warning("Non-standard units detected in ddf file!
\nPlease note that calculations currently assume raw data
are in seconds, millimeters, and millinewtons.")
}
# Read in Protocol Array
while (!grepl("Protocol", readLines(f, 1))) {}
readLines(f, 1) # Discard empty line
while ((l <- readLines(f, 1)) != "") {
protocol_table <- c(protocol_table, l)
}
protocol_table <- utils::read.table(
text = protocol_table,
sep = "\t",
stringsAsFactors = FALSE,
col.names = c(
"Wait.s",
"Then.action",
"On.port",
"Units",
"Parameters"
)
)
# Read in data
while (!grepl("Test Data", (l <- readLines(f, 1)))) {}
readLines(f, 1)
dataz <- utils::read.table(
text = readLines(f),
header = TRUE,
sep = "\t",
stringsAsFactors = FALSE
)
if (any(!apply(dataz, 2, is.numeric))) {
warning("The ddf file includes non-numeric data.
\nPlease ensure that this is intentional before proceeding.")
}
close(f)
# Parse file type
read_filetype.ddf <- NULL
switch(
grep("Stim", protocol_table[[2]], value = TRUE)[1],
"Stimulus-Train" = read_filetype.ddf <- read_wl_ddf,
"Stimulus-Twitch" = read_filetype.ddf <- read_twitch_ddf,
"Stimulus-Tetanus" = read_filetype.ddf <- read_tetanus_ddf,
stop("Could not parse experiment type (workloop, twitch, or tetanus)!
\nPlease ensure that the protocol section of the ddf header
includes a label with one of the following:
\nStimulus-Train, Stimulus-Twitch, or Stimulus-Tetanus.")
)
return(read_filetype.ddf(
file_id = file_id,
mtime = mtime,
header = header,
units_table = units_table,
units = units,
protocol_table = protocol_table,
raw_data = dataz,
sample_frequency = sample_frequency,
rename_cols = rename_cols,
skip_cols = skip_cols,
phase_from_peak = phase_from_peak
))
}
############################# rescale data matrix ##############################
# Rescales data in ddf files using the scale and offset parameters
#' @noRd
rescale_data <-
function(dataz,
unitz,
sample_frequency,
rename_cols,
skip_cols) {
rescaled <- mapply(
function(raw, offset, scale) {
if (!is.numeric(raw)) {
return(raw)
} else {
(raw + offset) * scale
}
},
dataz[unitz$Channel],
unitz$Offset,
unitz$Scale
)
rescaled <- data.frame(
Time = (seq_len(nrow(dataz))) / sample_frequency,
rescaled,
Stim = dataz$Stim
)
# rename columns, if desired
if (!is.null(rename_cols)) {
names(rescaled)[rename_cols[[1]]] <- rename_cols[[2]]
}
return(rescaled[, -skip_cols])
}
########################## read_ddf files - workloop ###########################
#' @noRd
read_wl_ddf <-
function(raw_data,
units_table,
protocol_table,
sample_frequency,
rename_cols,
skip_cols,
...) {
# get info on experimental parameters
stim_table <-
utils::read.table(
text = protocol_table[grepl("Stim", protocol_table$Then.action),
"Units"],
sep = ",",
col.names = c("offset",
"frequency",
"width",
"pulses",
"cycle_frequency")
)
cycle_table <-
utils::read.table(
text = protocol_table[grepl("Sine", protocol_table$Then.action),
"Units"],
sep = ",",
col.names = c("frequency", "amplitude", "total_cycles")
)
# use scale (and maybe offset) to convert Volts into units
rescaled_data <- rescale_data(
raw_data,
units_table,
sample_frequency,
rename_cols,
skip_cols
)
# construct and return workloop object
return(workloop(
data = rescaled_data,
sample_frequency = sample_frequency,
units_table = units_table,
protocol_table = protocol_table,
stim_table = stim_table,
cycle_table = cycle_table,
...
))
}
############################ read_ddf files - twitch ###########################
#' @noRd
read_twitch_ddf <-
function(raw_data,
units_table,
protocol_table,
sample_frequency,
rename_cols = list(c(2, 3), c("Position", "Force")),
skip_cols = 4:11,
...) {
# get info on experimental parameters
stim_table <-
utils::read.table(
text = protocol_table[grepl("Stim", protocol_table$Then.action),
"Units"],
sep = ",",
col.names = c("offset", "width")
)
stim_table$pulses <- rep(1, nrow(stim_table))
# use scale (and maybe offset) to convert Volts into units
rescaled_data <- rescale_data(
raw_data,
units_table,
sample_frequency,
rename_cols,
skip_cols
)
# construct and return workloop object
return(twitch(
data = rescaled_data,
sample_frequency = sample_frequency,
units_table = units_table,
protocol_table = protocol_table,
stim_table = stim_table,
...
))
}
########################## read_ddf files - tetanus ##########################
#' @noRd
read_tetanus_ddf <-
function(raw_data,
units_table,
protocol_table,
sample_frequency,
rename_cols = list(c(2, 3), c("Position", "Force")),
skip_cols = 4:11,
...) {
# get info on experimental parameters
stim_table <-
utils::read.table(
text = protocol_table[grepl("Stim", protocol_table$Then.action),
"Units"],
sep = ",",
col.names = c("offset", "frequency", "width", "length")
)
stim_table$pulses <-
as.integer(floor(stim_table$frequency * stim_table$length))
# use scale (and maybe offset) to convert Volts into units
rescaled_data <- rescale_data(
raw_data,
units_table,
sample_frequency,
rename_cols,
skip_cols
)
# construct and return workloop object
return(tetanus(
data = rescaled_data,
sample_frequency = sample_frequency,
units_table = units_table,
protocol_table = protocol_table,
stim_table = stim_table,
...
))
}
/scratch/gouwar.j/cran-all/cranData/workloopR/R/data_import_functions.R
# custom functions
# all written by Vikram B. Baliga ([email protected]) and Shreeram
# Senthivasan
# last updated: 2019-10-20
############################### select cycles ###############################
#' Select cycles from a work loop object
#'
#' Retain data from a work loop experiment based on position cycle
#'
#' @param x A \code{workloop} object (see Details for how it should be
#' organized)
#' @param cycle_def A string specifying how cycles should be defined; one of:
#' "lo", "p2p", or "t2t". See Details more info
#' @param keep_cycles The indices of the cycles to keep. Include 0 to keep data
#' identified as being outside complete cycles
#' @param bworth_order Filter order for low-pass filtering of \code{Position}
#' via \code{signal::butter()} prior to finding L0
#' @param bworth_freq Critical frequency (scalar) for low-pass filtering of
#' \code{Position} via \code{signal::butter()} prior to finding L0
#' @param ... Additional arguments passed to/from other functions that make use
#' of \code{select_cycles()}
#'
#' @details \code{select_cycles()} subsets data from a workloop trial by
#' position cycle. The \code{cycle_def} argument is used to specify which part
#' of the cycle is understood as the beginning and end. There are currently
#' three options: \cr
#' 'lo' for L0-to-L0; \cr
#' 'p2p' for peak-to-peak; and \cr
#' 't2t' for trough-to-trough \cr
#'
#' Peaks are identified using \code{pracma::findpeaks()}. L0 points on the
#' rising edge are found by finding the midpoints between troughs and the
#' following peak. However the first and last extrema and L0 points may be
#' misidentified by this method. Please plot your \code{Position} cycles to
#' ensure the edge cases are identified correctly.
#'
#' The \code{keep_cycles} argument is used to determine which cycles (as
#' defined by \code{cycle_def} should be retained in the final dataset. Zero
#' is the index assigned to all data points that are determined to be outside
#' a complete cycle.
#'
#' The \code{muscle_stim} object (\code{x}) must be a \code{workloop},
#' preferably read in by one of our data import functions. Please see
#' documentation for \code{as_muscle_stim()} if you need to manually construct
#' a \code{muscle_stim} object from another source.
#'
#' @return A \code{workloop} object with rows subsetted by the chosen position
#' cycles. A \code{Cycle} column is appended to denote which cycle each time
#' point is associated with. Finally, all attributes from the input
#' \code{workloop} object are retained and one new attribute is added to
#' record which cycles from the original data were retained.
#'
#' @author Vikram B. Baliga and Shreeram Senthivasan
#'
#' @family data transformations
#' @family workloop functions
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR
#' wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'),
#' phase_from_peak = TRUE)
#'
#' # select cycles 3 through 5 via the peak-to-peak definition
#' wl_selected <- select_cycles(wl_dat, cycle_def = "p2p", keep_cycles = 3:5)
#'
#'
#' # are the cycles of (approximately) the same length?
#' summary(as.factor(wl_selected$Cycle))
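#'
#' # the cycle definition used and the cycles retained are recorded as
#' # attributes of the returned object
#' attr(wl_selected, "retained_cycles")
#' attr(wl_selected, "cycle_def")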
#'
#' @seealso
#' \code{\link{analyze_workloop}},
#' \code{\link{read_analyze_wl}},
#' \code{\link{read_analyze_wl_dir}}
#'
#' @export
select_cycles <- function(x,
cycle_def,
keep_cycles = 4:6,
bworth_order = 2,
bworth_freq = 0.05,
...) {
if (!any(class(x) == "workloop")) {
stop("Input data should be of class `workloop`")
}
if (!is.numeric(keep_cycles)) {
stop("keep_cycles should be numeric")
}
if (missing(cycle_def)) {
cycle_def <- "lo"
warning("Cycle definition not supplied! Defaulting to L0-to-L0")
}
if (is.na(attr(x, "cycle_frequency"))) {
stop("Length-out cycle frequency is needed to identify cycles!
Please set the `cycle_frequency` attribute accordingly.")
}
# get cycle frequency and sample frequency
cyc_freq <- attr(x, "cycle_frequency")
samp_freq <- attr(x, "sample_frequency")
# Use butterworth-filtered position data to identify peaks
bworth <- signal::butter(bworth_order, bworth_freq)
smPos <- signal::filtfilt(bworth, x$Position)
  # Calculate the minimum number of rising samples ("nups") before a peak:
  # roughly a quarter of a cycle's worth of samples
qf <- floor(0.25 * (1 / cyc_freq) * samp_freq) - 1
peaks <- stats::setNames(
data.frame(pracma::findpeaks(smPos, nups = qf)[, 2:4]),
c("peak", "start", "end")
)
switch(cycle_def,
    # L0-to-L0 assumes the position cycle starts and ends at an L0
    # Most L0 points are found by averaging the indices of a peak and the
    # preceding trough
"lo" = {
splits <- round((peaks$start + peaks$peak) / 2)
# The first L0 is the lowest point before first peak
splits[1] <- peaks$start[1]
# The last L0 is the last peak
splits[length(splits)] <- peaks$peak[nrow(peaks)]
splits <- c(0, splits, nrow(x))
},
"p2p" = splits <- c(0, peaks$peak, nrow(x)),
"t2t" = splits <- c(0, peaks$start, utils::tail(peaks$end, 1), nrow(x)),
stop("Invalid cycle definition! Please select one of:\n
'lo': L0-to-L0
'p2p': peak-to-peak
't2t': trough-to-trough")
)
splits <- (splits - c(NA, utils::head(splits, -1)))[-1]
cycle <- unlist(lapply(seq_along(splits), function(i) rep(i - 1, splits[i])))
x$Cycle <- replace(cycle, cycle == max(cycle), 0)
# Update cycle definition and total cycles
attr(x, "cycle_def") <- cycle_def
attr(x, "total_cycles") <- max(x$Cycle)
# Subset by keep_cycles and rename cycles by letters
if (any(keep_cycles < 0 | keep_cycles > max(x$Cycle))) {
warning("The keep_cycles argument includes cycles that don't exist
(negative or greater than total_cycles).
\nThese are being ignored.")
}
x <- x[x$Cycle %in% keep_cycles, ]
x$Cycle <- letters[as.factor(x$Cycle)]
if (!all(is.na(attr(x, "units")))) attr(x, "units") <- c(attr(x, "units"),
"letters")
attr(x, "retained_cycles") <- keep_cycles
return(x)
}
############################## position inversion ##############################
#' Invert the position data
#'
#' Multiply instantaneous position by -1.
#'
#' @param x A \code{muscle_stim} object
#'
#' @details The \code{muscle_stim} object can be of any type, including
#' \code{workloop}, \code{twitch}, or \code{tetanus}.
#'
#' If you have manually constructed the object via \code{as_muscle_stim()},
#' the \code{muscle_stim} object should have a column entitled \code{Position}.
#' Other columns and attributes are welcome and will be passed along unchanged.
#'
#' @return A \code{workloop} object with inverted position. The
#' \code{position_inverted} attribute is set to \code{TRUE} and all others are
#' retained.
#'
#' @author Vikram B. Baliga
#'
#' @family data transformations
#' @family workloop functions
#' @family twitch functions
#' @family tetanus functions
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR
#' wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'),
#' phase_from_peak = TRUE)
#'
#' # invert the sign of Position
#' wl_fixed <- invert_position(wl_dat)
#'
#' # quick check:
#' max(wl_fixed$Position) / min(wl_dat$Position) # -1
#'
#' @export
invert_position <- function(x) {
if (!any(class(x) == "muscle_stim")) {
stop("Input data should be of class `muscle_stim`")
}
x$Position <- x$Position * -1
attr(x, "position_inverted") <- TRUE
return(x)
}
############################## gear ratio correction ###########################
#' Adjust for the gear ratio of a motor arm
#'
#' Fix a discrepancy between the gear ratio of the motor arm used and the gear
#' ratio recorded by software.
#'
#' @param x A \code{muscle_stim} object
#' @param GR Gear ratio, set to 1 by default
#'
#' @details The \code{muscle_stim} object can be of any type, including
#' \code{workloop}, \code{twitch}, or \code{tetanus}.
#'
#' If you have manually constructed the object via \code{as_muscle_stim()},
#' the \code{muscle_stim} object should have columns as follows: \cr
#' \code{Position}: length change of the muscle; \cr
#' \code{Force}: force \cr
#'
#' @return An object of the same class(es) as the input (\code{x}). The function
#' will multiply \code{Position} by (1/GR) and multiply \code{Force} by GR,
#' returning an object with new values in \code{$Position} and \code{$Force}.
#' Other columns and attributes are welcome and will simply be passed on
#' unchanged into the resulting object.
#'
#' @author Vikram B. Baliga
#'
#' @family data transformations
#' @family workloop functions
#' @family twitch functions
#' @family tetanus functions
#'
#' @examples
#'
#' library(workloopR)
#'
#' # import the workloop.ddf file included in workloopR
#' wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
#' package = 'workloopR'),
#' phase_from_peak = TRUE)
#'
#' # apply a gear ratio correction of 2
#' # this will multiply Force by 2 and divide Position by 2
#' wl_fixed <- fix_GR(wl_dat, GR = 2)
#'
#' # quick check:
#' max(wl_fixed$Force) / max(wl_dat$Force) # 5592.578 / 2796.289 = 2
#' max(wl_fixed$Position) / max(wl_dat$Position) # 1.832262 / 3.664524 = 0.5
#'
#' @seealso
#' \code{\link{analyze_workloop}},
#' \code{\link{read_analyze_wl}},
#' \code{\link{read_analyze_wl_dir}}
#'
#' @export
fix_GR <- function(x,
GR = 1) {
# Check that x is correct type of object
if (!any(class(x) == "muscle_stim")) {
stop("Input data should be of class `muscle_stim`")
}
# check that gear ratio is numeric
if (!is.numeric(GR)) {
stop("Gear ratio (GR) must be numeric")
}
x$Position <- x$Position * (1 / GR)
x$Force <- x$Force * GR
attr(x, "gear_ratio") <- attr(x, "gear_ratio") * GR
if ("workloop" %in% class(x)) {
if (!is.na(attr(x, "amplitude"))) {
attr(x, "amplitude") <- attr(x, "amplitude") * (1 / GR)
}
}
return(x)
}
/scratch/gouwar.j/cran-all/cranData/workloopR/R/data_transformation_functions.R
# custom functions
# all written by Vikram B. Baliga ([email protected]) and Shreeram
# Senthivasan
# last updated: 2019-10-22
########################## workloop object functions ###########################
# Object constructor
# See read_ddf for details
# Top level class for all objects created by read_ddf
#' @noRd
muscle_stim <-
function(data,
units,
sample_frequency,
header,
units_table,
protocol_table,
stim_table,
file_id,
mtime,
...) {
attr(data, "units") <- units
attr(data, "sample_frequency") <- sample_frequency
attr(data, "header") <- header
attr(data, "units_table") <- units_table
attr(data, "protocol_table") <- protocol_table
attr(data, "stim_table") <- stim_table
attr(data, "stimulus_pulses") <- stim_table$pulses[1]
attr(data, "stimulus_offset") <- stim_table$offset[1]
attr(data, "stimulus_width") <- stim_table$width[1]
attr(data, "gear_ratio") <- 1
attr(data, "file_id") <- file_id
attr(data, "mtime") <- mtime
class(data) <- c(class(data), "muscle_stim", "data.frame")
return(data)
}
# Classes for specific trial type
#' @noRd
workloop <-
function(data,
stim_table,
cycle_table,
sample_frequency,
phase_from_peak,
...) {
attr(data, "stimulus_frequency") <- stim_table$frequency[1]
attr(data, "cycle_frequency") <- cycle_table$frequency[1]
attr(data, "total_cycles") <- cycle_table$total_cycles[1]
attr(data, "cycle_def") <- "lo"
attr(data, "amplitude") <- cycle_table$amplitude[1]
# Calculate Phase
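    # (phase is the lag, in fractions of a cycle, between stimulus onset and
    # peak length; when phase_from_peak is FALSE, adding 0.25 below
    # re-references it to L0 on the rising limb, a quarter cycle earlier)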
phase <-
(
which.max(data$Stim) - which.max(data$Position)
) / sample_frequency * stim_table$cycle_frequency[1]
if (!phase_from_peak) {
phase <- phase + 0.25
}
# convert 0-1 scale to -50 to +50
attr(data, "phase") <- (((phase + 0.5) %% 1) - 0.5) * 100
attr(data, "position_inverted") <- FALSE
class(data) <- c("workloop")
return(muscle_stim(
data = data,
stim_table = stim_table,
sample_frequency = sample_frequency,
...
))
}
#' @noRd
tetanus <-
function(data, stim_table, ...) {
attr(data, "stimulus_frequency") <- stim_table$frequency[1]
attr(data, "stimulus_length") <- stim_table$length[1]
class(data) <- c("tetanus", "isometric")
return(muscle_stim(
data = data,
stim_table = stim_table,
...
))
}
#' @noRd
twitch <-
function(data, ...) {
class(data) <- c("twitch", "isometric")
return(muscle_stim(data = data, ...))
}
/scratch/gouwar.j/cran-all/cranData/workloopR/R/object_construction_functions.R
# custom functions
# all written by Vikram B. Baliga ([email protected]) and Shreeram
# Senthivasan
# last updated: 2019-10-22
# Generate type-specific output header for workloop objects
#' @noRd
print_muscle_stim_header <- function(x, include_time = TRUE) {
type <- paste(toupper(substring(class(x)[1], 1, 1)),
substring(class(x)[1], 2),
sep = ""
)
if (include_time) {
cat(paste0(
"# ", type, " Data: ",
ncol(x) - 1, " channels recorded over ",
nrow(x) / attr(x, "sample_frequency"), "s\n"
))
} else {
cat(paste0("# ", type, " Data:\n\n"))
}
}
# Print method
#' @export
print.muscle_stim <- function(x, n = 6, ...) {
print_muscle_stim_header(x)
cat(paste0("File ID: ", attr(x, "file_id"), "\n\n"))
class(x) <- "data.frame"
print(utils::head(x, n = n))
if (n < nrow(x)) {
cat(paste0("# \u2026 with ", nrow(x) - n, " more rows\n"))
}
}
#' @export
print.analyzed_workloop <- function(x, n = 6, ...) {
cat(paste0(
"File ID: ",
attr(x, "file_id")
))
cat(paste0(
"\nCycles: ",
length(attr(x, "retained_cycles")),
" cycles kept out of ",
attr(x, "total_cycles")
))
cat(paste0(
"\nMean Work: ",
round(mean(attr(x, "summary")$Work), 5),
" J"
))
cat(paste0(
"\nMean Power: ",
round(mean(attr(x, "summary")$Net_Power), 5),
" W\n\n"
))
}
# Summary method
#' @export
summary.muscle_stim <- function(object, ...) {
print_muscle_stim_header(object, ...)
cat(paste0("\nFile ID: ", attr(object, "file_id")))
cat(paste0("\nMod Time (mtime): ", attr(object, "mtime")))
cat(paste0("\nSample Frequency: ", attr(object, "sample_frequency"),
"Hz\n\n"))
cat(paste0("data.frame Columns: \n"))
for (i in 2:ncol(object)) {
cat(paste0(" ", colnames(object)[i], " (", attr(object, "units")[i],
")\n"))
}
cat(paste0("\nStimulus Offset: ", attr(object, "stimulus_offset"), "s\n"))
cat(paste0("Stimulus Frequency: ", attr(object, "stimulus_frequency"),
"Hz\n"))
cat(paste0("Stimulus Width: ", attr(object, "stimulus_width"), "ms\n"))
cat(paste0("Stimulus Pulses: ", attr(object, "stimulus_pulses"), "\n"))
cat(paste0("Gear Ratio: ", attr(object, "gear_ratio"), "\n"))
}
#' @export
summary.workloop <- function(object, ...) {
NextMethod()
cat(paste0("\nCycle Frequency: ", attr(object, "cycle_frequency"), "Hz\n"))
cat(paste0(
"Total Cycles (",
switch(attr(object, "cycle_def"),
"lo" = "L0-to-L0",
"p2p" = "peak-to-peak",
"t2t" = "trough-to-trough",
"undefined"
),
"): ",
attr(object, "total_cycles"), "\n"
))
if (!is.null(attr(object, "retained_cycles"))) {
cat(paste0(
"Cycles Retained: ",
length(attr(object, "retained_cycles")),
"\n"
))
}
cat(paste0(
"Amplitude: ",
attr(object, "amplitude"),
attr(object, "units")[grep("Position", colnames(object))], "\n\n"
))
if (attr(object, "position_inverted")) {
cat("\nPlease note that Position is inverted!\n\n")
}
}
#' @export
summary.tetanus <- function(object, ...) {
NextMethod()
cat(paste0("Stimulus Length: ", attr(object, "stimulus_length"), "s\n\n"))
}
#' @export
summary.analyzed_workloop <- function(object, ...) {
summary(object[[1]], include_time = FALSE)
cat("\n")
print(attr(object, "summary"))
}
/scratch/gouwar.j/cran-all/cranData/workloopR/R/print_and_summary_methods.R
#' @details
#' Functions for the import, transformation, and analysis of muscle physiology
#' experiments. Currently supported experiment types: work loop, simple twitch,
#' and tetanus.
#'
#' Data that are stored in .ddf format (e.g. generated by Aurora Scientific's
#' Dynamic Muscle Control and Analysis Software) are easily imported via
#' \code{read_ddf()}, \code{read_analyze_wl()}, or \code{read_analyze_wl_dir()}.
#' Doing so generates objects of class \code{muscle_stim}, which are formatted
#' to work nicely with workloopR's core functions. Data that are read from other
#' file formats can be constructed into \code{muscle_stim} objects via
#' \code{as_muscle_stim()}.
#'
#' Prior to analyses, data can be transformed or corrected. Transformational
#' functions include gear ratio correction (\code{fix_GR()}), position inversion
#' (\code{invert_position()}), and subsetting of particular cycles within a work
#' loop experiment (\code{select_cycles()}).
#'
#' Core data analytical functions include \code{analyze_workloop()} for work
#' loop files and \code{isometric_timing()} for twitches.
#' \code{analyze_workloop()} computes instantaneous velocity, net work,
#' instantaneous power, and net power for work loop experiments on a per-cycle
#' basis. \code{isometric_timing()} provides summarization of twitch kinetics.
#'
#' Some functions are readily available for batch processing of files. The
#' \code{read_analyze_wl_dir()} function allows for the batch import, cycle
#' selection, gear ratio correction, and ultimately work & power computation for
#' all work loop experiment files within a specified directory. The
#' \code{get_wl_metadata()} and \code{summarize_wl_trials()} functions organize
#' scanned files by recency (according to their time of last modification:
#' 'mtime') and then report work and power output in the order that trials were
#' run. This ultimately allows for the \code{time_correct()} function to correct
#' for degradation of the muscle (according to power & work) over time,
#' assuming that the first and final trials are identical in experimental
#' parameters.
#'
#' Please feel free to contact either Vikram or Shree with suggestions or code
#' development requests (see contact info below). We are especially interested
#' in expanding our data import functions to accommodate file types other than
#' .ddf in future versions of workloopR.
#'
#' @keywords internal
"_PACKAGE"
|
/scratch/gouwar.j/cran-all/cranData/workloopR/R/workloopR.R
|
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----package_loading, message=FALSE, warning=FALSE----------------------------
library(workloopR)
library(magrittr)
library(ggplot2)
library(dplyr)
## ----data_import--------------------------------------------------------------
## The file workloop.ddf is included and therefore can be accessed via
## system.file("subdirectory","file_name","package") . We'll then use
## read_ddf() to import it, creating an object of class "muscle_stim".
## fix_GR() multiplies Force by 2 and divides Position by 2
workloop_dat <-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(GR = 2)
summary(workloop_dat)
## ----intial_plot--------------------------------------------------------------
scale_position_to_force <- 3000
workloop_dat %>%
# Set the x axis for the whole plot
ggplot(aes(x = Time)) +
# Add a line for force
geom_line(aes(y = Force, color = "Force"),
lwd = 1) +
# Add a line for Position, scaled to approximately the same range as Force
geom_line(aes(y = Position * scale_position_to_force, color = "Position")) +
# For stim, we only want to plot where stimulation happens, so we filter the data
geom_point(aes(y = 0, color = "Stim"), size = 1,
data = filter(workloop_dat, Stim == 1)) +
# Next we add the second y-axis with the corrected units
scale_y_continuous(sec.axis = sec_axis(~ . / scale_position_to_force, name = "Position (mm)")) +
# Finally set colours, labels, and themes
scale_color_manual(values = c("#FC4E2A", "#4292C6", "#373737")) +
labs(y = "Force (mN)", x = "Time (secs)", color = "Parameter:") +
ggtitle("Time course of \n work loop experiment") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
## ----select_cycles------------------------------------------------------------
## Select cycles
workloop_selected <-
workloop_dat %>%
select_cycles(cycle_def="lo", keep_cycles = 4:6)
summary(workloop_selected)
attr(workloop_selected, "retained_cycles")
## ----work_loop_fig------------------------------------------------------------
workloop_selected %>%
ggplot(aes(x = Position, y = Force)) +
geom_point(size=0.3) +
labs(y = "Force (mN)", x = "Position (mm)") +
ggtitle("Work loop") +
theme_bw()
## ----analyze_workloop---------------------------------------------------------
## Run the analyze_workloop() function
workloop_analyzed <-
workloop_selected %>%
analyze_workloop(GR = 1)
## Produces a list of objects.
## The print method gives a simple output:
workloop_analyzed
## How is the list organized?
names(workloop_analyzed)
## ----metrics_for_cycle--------------------------------------------------------
## What is work for the second cycle?
attr(workloop_analyzed$cycle_b, "work")
## What is net power for the third cycle?
attr(workloop_analyzed$cycle_c, "net_power")
## ----cycle_a_organization-----------------------------------------------------
str(workloop_analyzed$cycle_a)
## ----instant_power_plot-------------------------------------------------------
workloop_analyzed$cycle_b %>%
ggplot(aes(x = Percent_of_Cycle, y = Inst_Power)) +
geom_line(lwd = 1) +
labs(y = "Instantaneous Power (W)", x = "Percent cycle") +
ggtitle("Instantaneous power \n during cycle b") +
theme_bw()
## ----simplify_TRUE------------------------------------------------------------
workloop_analyzed_simple <-
workloop_selected %>%
analyze_workloop(GR = 1, simplify = TRUE)
## Produces a simple data.frame:
workloop_analyzed_simple
str(workloop_analyzed_simple)
## ----select_cycles_defintions-------------------------------------------------
## Select cycles 4:6 using lo
workloop_dat %>%
select_cycles(cycle_def="lo", keep_cycles = 4:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## Select cycles 4:6 using p2p
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 4:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## here we see that via 'p2p' the final cycle is ill-defined because the return
## to L0 is considered a cycle. Using a p2p definition, what we actually want is
## to use cycles 3:5 to get the final 3 full cycles:
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 3:5) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## this difficulty in defining cycles may be more apparent by first plotting the
## cycles 1:6, e.g.
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 1:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Analyzing-workloops.R
|
---
title: "Analyzing work loop experiments in workloopR"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Analyzing work loop experiments in workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The function `analyze_workloop()` in `workloopR` allows users to evaluate the mechanical work and power output of a muscle they have investigated through work loop experiments.
To demonstrate `analyze_workloop()`, we will first load `workloopR` and use example data provided with the package. We'll also load a couple packages within the `tidyverse` to help with data wrangling and plotting.
## Load packages and data
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
library(dplyr)
```
## Visualize
We'll now import the `workloop.ddf` file included with `workloopR`. Because this experiment involved using a gear ratio of 2, we'll use `fix_GR()` to also implement this correction.
Ultimately, an object of classes `workloop`, `muscle_stim`, and `data.frame` is produced. `muscle_stim` objects are used throughout `workloopR` to help with data formatting and error checking across functions. Additionally, setting the class to `workloop` allows our functions to recognize that the data have properties that other experiment types (twitch, tetanus) do not.
```{r data_import}
## The file workloop.ddf is included and therefore can be accessed via
## system.file("subdirectory","file_name","package") . We'll then use
## read_ddf() to import it, creating an object of class "muscle_stim".
## fix_GR() multiplies Force by 2 and divides Position by 2
workloop_dat <-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(GR = 2)
summary(workloop_dat)
```
Running `summary()` on a `muscle_stim` object shows a handy summary of file properties, data, and experimental parameters.
Let's plot Time against Force, Position, and Stimulus (Stim) to visualize the time course of the work loop experiment.
To get them all plotted in the same figure, we'll transform the data as they are being plotted. Please note that this is for aesthetic purposes only - the underlying data will not be changed after the plotting is complete.
```{r intial_plot}
scale_position_to_force <- 3000
workloop_dat %>%
# Set the x axis for the whole plot
ggplot(aes(x = Time)) +
# Add a line for force
geom_line(aes(y = Force, color = "Force"),
lwd = 1) +
# Add a line for Position, scaled to approximately the same range as Force
geom_line(aes(y = Position * scale_position_to_force, color = "Position")) +
# For stim, we only want to plot where stimulation happens, so we filter the data
geom_point(aes(y = 0, color = "Stim"), size = 1,
data = filter(workloop_dat, Stim == 1)) +
# Next we add the second y-axis with the corrected units
scale_y_continuous(sec.axis = sec_axis(~ . / scale_position_to_force, name = "Position (mm)")) +
# Finally set colours, labels, and themes
scale_color_manual(values = c("#FC4E2A", "#4292C6", "#373737")) +
labs(y = "Force (mN)", x = "Time (secs)", color = "Parameter:") +
ggtitle("Time course of \n work loop experiment") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
There's a lot to digest here. The blue trace shows the change in length of the muscle via cyclical, sinusoidal changes to Position. The dark gray Stim dots show stimulation on an on/off basis. Stimulus onset is close to when the muscle is at L0, and the stimulator delivered four pulses of 0.2 ms width at 300 Hz. The resulting force development is shown in red. These cycles of length change and stimulation occurred a total of 6 times (measured L0-to-L0).
## Select cycles
We are now ready to run the `select_cycles()` function. This function subsets the data and labels each cycle in prep for our `analyze_workloop()` function.
In many cases, researchers are interested in using the final 3 cycles for analyses. Accordingly, we'll set the `keep_cycles` parameter to `4:6`.
One thing to pay heed to is the cycle definition, encoded as `cycle_def` within the arguments of `select_cycles()`. There are three options for how cycles can be defined, named according to the starting (and ending) points of the cycle. We'll use the L0-to-L0 option, which is encoded as `lo`.
The function internally performs Butterworth filtering of the Position data via `signal::butter()`. This is because Position data are often noisy, which makes assessing true peak values difficult. The default values of `bworth_order = 2` and `bworth_freq = 0.05` work well in most cases, but we recommend that you plot your data and assess this yourself.
We will keep things straightforward for now so that we can proceed to the analytical stage. Please see the final section of this vignette for more details on using `select_cycles()`.
```{r select_cycles}
## Select cycles
workloop_selected <-
workloop_dat %>%
select_cycles(cycle_def="lo", keep_cycles = 4:6)
summary(workloop_selected)
attr(workloop_selected, "retained_cycles")
```
The `summary()` function now reflects that 3 cycles of the original 6 have been retained, and getting the `"retained_cycles"` attribute shows that these cycles are 4, 5, and 6 from the original data.
To avoid confusion in numbering schemes between the original data and the new object, once `select_cycles()` has been used we label cycles by letter. So, cycle 4 is now "a", 5 is "b" and 6 is "c".
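A quick way to confirm the re-lettering is to inspect the `Cycle` column that `select_cycles()` adds (a minimal sketch; with cycles 4:6 retained, the labels should come out as "a", "b", and "c"):
```{r, eval = FALSE}
## cycle labels after subsetting
unique(workloop_selected$Cycle)
```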
## Plot the work loop cycles
```{r work_loop_fig}
workloop_selected %>%
ggplot(aes(x = Position, y = Force)) +
geom_point(size=0.3) +
labs(y = "Force (mN)", x = "Position (mm)") +
ggtitle("Work loop") +
theme_bw()
```
## Basics of `analyze_workloop()`
Now we're ready to use `analyze_workloop()`.
Again, running `select_cycles()` beforehand was necessary, so we will switch to using `workloop_selected` as our data object.
Within `analyze_workloop()`, the `GR =` option allows the gear ratio to be corrected if it hasn't been already. Because we already ran `fix_GR()` to correct the gear ratio to 2, we do not need to use it here again. So, for this argument, we will use `GR = 1`, which keeps the data as they are. Please take care not to overcorrect for gear ratio by setting it multiple times: doing so induces multiplicative changes. E.g., setting `GR = 3` on an object and then setting `GR = 3` again produces a gear ratio correction of 9.
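To illustrate (a quick hypothetical sketch, not something you would do in a real analysis), applying `fix_GR()` twice with `GR = 3` yields the same result as a single `GR = 9` correction:
```{r, eval = FALSE}
## applying GR = 3 twice is equivalent to one GR = 9 correction
wl_gr3_twice <- fix_GR(fix_GR(workloop_dat, GR = 3), GR = 3)
wl_gr9 <- fix_GR(workloop_dat, GR = 9)
all.equal(max(wl_gr3_twice$Force), max(wl_gr9$Force))
```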
### Using the default `simplify = FALSE` version
The argument `simplify = ` affects the output of the `analyze_workloop()` function. We'll first take a look at the organization of the "full" version, i.e. keeping the default `simplify = FALSE`.
```{r analyze_workloop}
## Run the analyze_workloop() function
workloop_analyzed <-
workloop_selected %>%
analyze_workloop(GR = 1)
## Produces a list of objects.
## The print method gives a simple output:
workloop_analyzed
## How is the list organized?
names(workloop_analyzed)
```
This produces an `analyzed_workloop` object, which is essentially a `list` organized by cycle. Within each of these, time-course data are stored as a `data.frame` and important metadata are stored as attributes.
Users will typically want work and net power from each cycle. Within the `analyzed_workloop` object, these two values are stored as attributes: `"work"` (in J) and `"net_power"` (in W). To get them for a specific cycle:
```{r metrics_for_cycle}
## What is work for the second cycle?
attr(workloop_analyzed$cycle_b, "work")
## What is net power for the third cycle?
attr(workloop_analyzed$cycle_c, "net_power")
```
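If you would like these metrics for all retained cycles at once, a short sketch (assuming, as above, that the list elements are the per-cycle data frames `cycle_a` through `cycle_c`):
```{r, eval = FALSE}
## work (J) and net power (W) for every retained cycle
sapply(workloop_analyzed, attr, "work")
sapply(workloop_analyzed, attr, "net_power")
```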
To see how e.g. the first cycle is organized:
```{r cycle_a_organization}
str(workloop_analyzed$cycle_a)
```
Within each cycle's `data.frame`, the usual `Time`, `Position`, `Force`, and `Stim` are stored. `Cycle`, added via `select_cycles()`, denotes cycle identity and `Percent_of_Cycle` displays time as a percentage of that particular cycle.
`analyze_workloop()` also computes instantaneous velocity (`Inst_Velocity`), which can sometimes be noisy, so a Butterworth filter is also applied to this velocity (`Filt_Velocity`). See the function's help file for more details on how to tweak filtering. The time course of power (instantaneous power) is also provided as `Inst_Power`.
Each of these variables can be plotted against Time to see the time course of that variable's change over the cycle. For example, we will plot instantaneous power in cycle b:
```{r instant_power_plot}
workloop_analyzed$cycle_b %>%
ggplot(aes(x = Percent_of_Cycle, y = Inst_Power)) +
geom_line(lwd = 1) +
labs(y = "Instantaneous Power (W)", x = "Percent cycle") +
ggtitle("Instantaneous power \n during cycle b") +
theme_bw()
```
### Setting `simplify = TRUE` in the `analyze_workloop()` function
If you simply want work and net power for each cycle without retaining any of the time-course data, set `simplify = TRUE` within `analyze_workloop()`.
```{r simplify_TRUE}
workloop_analyzed_simple <-
workloop_selected %>%
analyze_workloop(GR = 1, simplify = TRUE)
## Produces a simple data.frame:
workloop_analyzed_simple
str(workloop_analyzed_simple)
```
Here, work (in J) and net power (in W) are simply returned in a `data.frame` that is organized by cycle. No other attributes are stored.
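Because everything is returned in a single `data.frame`, cross-cycle summaries become one-liners. A minimal sketch, assuming the column names match the `Work` and `Net_Power` used in the package's summary output:
```{r, eval = FALSE}
mean(workloop_analyzed_simple$Work)
mean(workloop_analyzed_simple$Net_Power)
```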
## More on cycle definitions in `select_cycles()`
As noted above, there are three options for cycle definitions within `select_cycles()`, encoded as `cycle_def`. The three options for how cycles can be defined are named based on the starting (and ending) points of the cycle: L0-to-L0 (`lo`), peak-to-peak (`p2p`), and trough-to-trough (`t2t`).
We highly recommend that you plot your Position data after using `select_cycles()`. The `pracma::findpeaks()` function works for most data (especially sine waves), but it is conceivable that small, local 'peaks' may be misinterpreted as a cycle's true minimum or maximum.
We also note that edge cases (i.e. the first or the final cycle) may be subject to issues in which cycles are not well defined by an automated algorithm.
Below, we will plot a couple case examples to show what we generally expect. We recommend plotting your data in a similar fashion to verify that `select_cycles()` is behaving in the way you expect.
```{r select_cycles_defintions}
## Select cycles 4:6 using lo
workloop_dat %>%
select_cycles(cycle_def="lo", keep_cycles = 4:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## Select cycles 4:6 using p2p
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 4:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## here we see that via 'p2p' the final cycle is ill-defined because the return
## to L0 is considered a cycle. Using a p2p definition, what we actually want is
## to use cycles 3:5 to get the final 3 full cycles:
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 3:5) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## this difficulty in defining cycles may be more apparent by first plotting the
## cycles 1:6, e.g.
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 1:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
```
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Analyzing-workloops.Rmd
|
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----package_loading, message=FALSE, warning=FALSE----------------------------
library(workloopR)
library(magrittr)
library(ggplot2)
## ----data_import--------------------------------------------------------------
## The file twitch.ddf is included and therefore can be accessed via
## system.file("subdirectory","file_name","package") . We'll then use
## read_ddf() to import it, creating an object of class "muscle_stim".
twitch_dat <-
system.file(
"extdata",
"twitch.ddf",
package = 'workloopR') %>%
read_ddf()
## ----intial_plot--------------------------------------------------------------
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line() +
ylab("Force (mN)") +
xlab("Time (sec)") +
theme_minimal()
## ----data_cleaning------------------------------------------------------------
## Re-plot
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line(lwd = 1) +
xlim(0.075, 0.2) +
ylim(200, 450) +
xlab("Time (sec)") +
ylab("Force (mN)") +
theme_minimal()
## ----twitch_analysis----------------------------------------------------------
## Run the isometric_timing() function
twitch_analyzed <-
twitch_dat %>%
isometric_timing()
twitch_analyzed
## ----twitch_rising_custom-----------------------------------------------------
## Change rising supply a custom set of force development set points
twitch_rising_custom <-
twitch_dat %>%
isometric_timing(rising = c(5, 10, 25, 50, 75, 95))
## The returned `data.frame` contains the timing and force magnitudes
## of these set points in the "..._rising_..." columns
twitch_rising_custom
## ----tetanus------------------------------------------------------------------
tetanus_analyzed <-
system.file(
"extdata",
"tetanus.ddf",
package = 'workloopR') %>%
read_ddf() %>%
isometric_timing(rising = c(25, 50, 75))
tetanus_analyzed
## ----twitch_intervals---------------------------------------------------------
## Time to peak force from stimulation
twitch_analyzed$time_peak - twitch_analyzed$time_stim
## ----annotated_plot-----------------------------------------------------------
## Create a color pallete
## Generated using `viridis::viridis(6)`
## We use hard-coded values here just to avoid extra dependencies
colz <- c("#440154FF","#414487FF","#2A788EFF",
"#22A884FF","#7AD151FF","#FDE725FF")
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line(lwd = 1) +
xlim(0.075, 0.2) +
ylim(200, 450) +
xlab("Time (sec)") +
ylab("Force (mN)") +
geom_point(x = twitch_analyzed$time_stim,
y = twitch_analyzed$force_stim,
color = colz[1], size = 3) +
geom_point(x = twitch_analyzed$time_peak,
y = twitch_analyzed$force_peak,
color = colz[4], size = 3) +
geom_point(x = twitch_analyzed$time_rising_10,
y = twitch_analyzed$force_rising_10,
color = colz[2], size = 3) +
geom_point(x = twitch_analyzed$time_rising_90,
y = twitch_analyzed$force_rising_90,
color = colz[3], size = 3) +
geom_point(x = twitch_analyzed$time_relaxing_90,
y = twitch_analyzed$force_relaxing_90,
color = colz[5], size = 3) +
geom_point(x = twitch_analyzed$time_relaxing_50,
y = twitch_analyzed$force_relaxing_50,
color = colz[6], size = 3) +
theme_minimal()
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Calculating-twitch-kinetics.R
|
---
title: "Working with isometric experiments in workloopR"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Working with isometric experiments in workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The `workloopR` package also provides a function that can calculate the timing and magnitude of force during isometric experiments (twitch, tetanus) via the `isometric_timing()` function.
To demonstrate, we will first load `workloopR` and use example data provided with the package. We'll also load a couple packages within the `tidyverse` as well as `viridis` to help with data wrangling and plotting.
## Load packages and data
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
```
## Visualize
We'll now import the `twitch.ddf` file included with `workloopR`.
```{r data_import}
## The file twitch.ddf is included and therefore can be accessed via
## system.file("subdirectory","file_name","package") . We'll then use
## read_ddf() to import it, creating an object of class "muscle_stim".
twitch_dat <-
system.file(
"extdata",
"twitch.ddf",
package = 'workloopR') %>%
read_ddf()
```
Let's plot Force vs. Time to visualize the time course of force development and relaxation.
```{r intial_plot}
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line() +
ylab("Force (mN)") +
xlab("Time (sec)") +
theme_minimal()
```
This plot reveals that the final row of the data has Force = 0 and is likely an artifact. We can also see that the most salient parts of the twitch occur between ~ 0.075 and ~ 0.2 seconds.
We'll just re-plot the salient parts of the twitch by setting new limits on the axes via `ggplot2::xlim()` and `ggplot2::ylim()`. Please note that this will not change any analyses - we are simply doing it for ease of visualizing patterns.
```{r data_cleaning}
## Re-plot
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line(lwd = 1) +
xlim(0.075, 0.2) +
ylim(200, 450) +
xlab("Time (sec)") +
ylab("Force (mN)") +
theme_minimal()
```
Looks better!
## Basics of `isometric_timing()`
Now we're ready to use `isometric_timing()`.
```{r twitch_analysis}
## Run the isometric_timing() function
twitch_analyzed <-
twitch_dat %>%
isometric_timing()
twitch_analyzed
```
The function returns a new `data.frame` that provides information about the timing and magnitude of force at various intervals within the twitch. All returned values are absolute; in other words, time is measured from the beginning of the file and forces are returned in their actual magnitudes.
The first five columns of this `data.frame` are fixed. They will return (in this order): 1) the ID of the file, 2) the time at which stimulation occurs, 3) magnitude of force when stimulation occurs, 4) time at which peak force occurs, and 5) magnitude of peak force.
The function also provides data that help describe the rising and the relaxation phases of the twitch at certain "set points". By default, in the rising phase the set points are at 10% and at 90% of peak force development. Timing and force magnitudes at these points are returned as columns in the `data.frame`. And for the relaxation phase, the time and magnitude of force when force has relaxed to 90% and 50% of peak force are given.
The user has some flexibility in specifying how data are grabbed from the rising and relaxation phases. There are two arguments: `rising = c()` and `relaxing = c()`. Each of these arguments can be filled with a vector of any length. Within the vector, each "set point" must be a value between 0 and 100, signifying the % of peak force development that is to be described.
For example, if we'd like to describe the rising phase at six points (e.g. 5%, 10%, 25%, 50%, 75%, and 95% of peak force development):
```{r twitch_rising_custom}
## Change rising supply a custom set of force development set points
twitch_rising_custom <-
twitch_dat %>%
isometric_timing(rising = c(5, 10, 25, 50, 75, 95))
## The returned `data.frame` contains the timing and force magnitudes
## of these set points in the "..._rising_..." columns
twitch_rising_custom
```
### Tetanus trials
The `isometric_timing()` function can also work on `tetanus` objects that have been imported via `read_ddf()`. Should a `tetanus` object be used, the relaxation set points are automatically set to `relaxing = c()`, so no relaxation columns are returned. Instead, the timing & magnitude of force at stimulation, at peak force, and at the specified points of the rising phase are returned; the idea of 'relaxation' is simply ignored.
To demonstrate, we'll use an example tetanus trial included in `workloopR`:
```{r tetanus}
tetanus_analyzed <-
system.file(
"extdata",
"tetanus.ddf",
package = 'workloopR') %>%
read_ddf() %>%
isometric_timing(rising = c(25, 50, 75))
tetanus_analyzed
```
## Computing intervals
The returned `data.frame` provides all timing and force magnitudes in absolute terms, i.e. time since the start of the file and actual force magnitudes. Often, we'd like to report characteristics of the twitch as intervals.
To calculate, e.g. the interval between stimulation and peak force (often reported as "time to peak force"):
```{r twitch_intervals}
## Time to peak force from stimulation
twitch_analyzed$time_peak - twitch_analyzed$time_stim
```
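Other intervals follow the same pattern. For example, using the default set-point columns described above:
```{r, eval = FALSE}
## 10%-to-90% rise time
twitch_analyzed$time_rising_90 - twitch_analyzed$time_rising_10
## time from peak force to 50% relaxation (half-relaxation time)
twitch_analyzed$time_relaxing_50 - twitch_analyzed$time_peak
```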
## Annotate the twitch plot
It is also good to plot some of these metrics and see if they pass the eye-test.
We'll use our analyzed twitch and the `viridis` package to supply colors to dots at key points.
```{r annotated_plot}
## Create a color pallete
## Generated using `viridis::viridis(6)`
## We use hard-coded values here just to avoid extra dependencies
colz <- c("#440154FF","#414487FF","#2A788EFF",
"#22A884FF","#7AD151FF","#FDE725FF")
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line(lwd = 1) +
xlim(0.075, 0.2) +
ylim(200, 450) +
xlab("Time (sec)") +
ylab("Force (mN)") +
geom_point(x = twitch_analyzed$time_stim,
y = twitch_analyzed$force_stim,
color = colz[1], size = 3) +
geom_point(x = twitch_analyzed$time_peak,
y = twitch_analyzed$force_peak,
color = colz[4], size = 3) +
geom_point(x = twitch_analyzed$time_rising_10,
y = twitch_analyzed$force_rising_10,
color = colz[2], size = 3) +
geom_point(x = twitch_analyzed$time_rising_90,
y = twitch_analyzed$force_rising_90,
color = colz[3], size = 3) +
geom_point(x = twitch_analyzed$time_relaxing_90,
y = twitch_analyzed$force_relaxing_90,
color = colz[5], size = 3) +
geom_point(x = twitch_analyzed$time_relaxing_50,
y = twitch_analyzed$force_relaxing_50,
color = colz[6], size = 3) +
theme_minimal()
```
The plot has dots added for each of the six time & force points that the function returns by default.
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Calculating-twitch-kinetics.Rmd
|
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----a_p_single_file----------------------------------------------------------
library(workloopR)
## import the workloop.ddf file included in workloopR
wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
package = 'workloopR'),
phase_from_peak = TRUE)
## select cycles 3 through 5 using a peak-to-peak definition
wl_selected <- select_cycles(wl_dat, cycle_def = "p2p", keep_cycles = 3:5)
## run the analysis function and get the full object
wl_analyzed <- analyze_workloop(wl_selected, GR = 2)
## for brevity, the print() method for this object produces a simple output
wl_analyzed
## but see the structure for the full output, e.g.
#str(wl_analyzed)
## or run the analysis but get the simplified version
wl_analyzed_simple <- analyze_workloop(wl_selected, simplify = TRUE, GR = 2)
wl_analyzed_simple
## ----a_p_batch_files----------------------------------------------------------
## batch read and analyze files included with workloopR
analyzed_wls <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
package = 'workloopR'),
cycle_def = "p2p",
keep_cycles = 2:4,
phase_from_peak = TRUE
)
## now summarize
summarized_wls <- summarize_wl_trials(analyzed_wls)
summarized_wls
## ----data_import--------------------------------------------------------------
library(workloopR)
## import the workloop.ddf file included in workloopR
wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
package = 'workloopR'),
phase_from_peak = TRUE)
## muscle_stim objects have their own print() and summary() S3 methods
## for example:
summary(wl_dat) # some handy info about the imported file
## see the first few rows of data stored within
head(wl_dat)
## ----attributes---------------------------------------------------------------
## names(attributes(x)) gives a list of all the attributes' names
names(attributes(wl_dat))
## take a look at the stimulation protocol
attr(wl_dat, "protocol_table")
## at what frequency were cyclic changes to Position performed?
attr(wl_dat, "cycle_frequency")
## at what frequency were data recorded?
attr(wl_dat, "sample_frequency")
## ----transformations----------------------------------------------------------
## this multiplies Force by 2
## and multiplies Position by (1/2)
wl_fixed <- fix_GR(wl_dat, GR = 2)
# quick check:
max(wl_fixed$Force)/max(wl_dat$Force) #5592.578 / 2796.289 = 2
max(wl_fixed$Position)/max(wl_dat$Position) #1.832262 / 3.664524 = 0.5
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Introduction-to-workloopR.R
|
---
title: "Introduction to workloopR"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
## Welcome to workloopR
In this vignette, we'll provide an overview of core functions in `workloopR`. Other vignettes within the package give more details with respect to specific use-cases. Examples with code can also be found within each function's Help doc.
`workloopR` (pronounced "work looper") provides functions for the import, transformation, and analysis of muscle physiology experiments. As you may have guessed, the initial motivation was to provide functions to analyze work loop experiments in R, but we have expanded this goal to cover additional types of experiments that are often involved in work loop procedures. There are three currently supported experiment types: work loop, simple twitch, and tetanus.
## Analytical pipelines
To cut to the chase, `workloopR` offers the ability to import, transform, and then analyze a data file. For example, with a work loop file:
```{r a_p_single_file}
library(workloopR)
## import the workloop.ddf file included in workloopR
wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
package = 'workloopR'),
phase_from_peak = TRUE)
## select cycles 3 through 5 using a peak-to-peak definition
wl_selected <- select_cycles(wl_dat, cycle_def = "p2p", keep_cycles = 3:5)
## run the analysis function and get the full object
wl_analyzed <- analyze_workloop(wl_selected, GR = 2)
## for brevity, the print() method for this object produces a simple output
wl_analyzed
## but see the structure for the full output, e.g.
#str(wl_analyzed)
## or run the analysis but get the simplified version
wl_analyzed_simple <- analyze_workloop(wl_selected, simplify = TRUE, GR = 2)
wl_analyzed_simple
```
Batch processing of files within a directory (e.g. successive trials of an experiment) is also readily achieved:
```{r a_p_batch_files}
## batch read and analyze files included with workloopR
analyzed_wls <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
package = 'workloopR'),
cycle_def = "p2p",
keep_cycles = 2:4,
phase_from_peak = TRUE
)
## now summarize
summarized_wls <- summarize_wl_trials(analyzed_wls)
summarized_wls
```
Sections below will give more specific overviews.
## Data import
Data that are stored in .ddf format (e.g. generated by Aurora Scientific's Dynamic Muscle Control and Analysis Software) are easily imported via the function `read_ddf()`. Two additional all-in-one functions (`read_analyze_wl()` and `read_analyze_wl_dir()`) also import data and subsequently transform and analyze them. More on those functions later!
Importing via these functions generates objects of class `muscle_stim`, which are formatted to work nicely with `workloopR`'s core functions and help with error checking procedures throughout the package. `muscle_stim` objects are organized to store time-series data for Time, Position, Force, and Stimulation in a `data.frame` and also store core metadata and experimental parameters as Attributes.
We'll provide a quick example using data that are included within the package.
```{r data_import}
library(workloopR)
## import the workloop.ddf file included in workloopR
wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
package = 'workloopR'),
phase_from_peak = TRUE)
## muscle_stim objects have their own print() and summary() S3 methods
## for example:
summary(wl_dat) # some handy info about the imported file
## see the first few rows of data stored within
head(wl_dat)
```
### Attributes
Again, important object metadata and experimental parameters are stored as attributes. We make extensive use of attributes throughout the package and most functions will update at least one attribute after completion. So please see this feature of your `muscle_stim` objects for important info!
You can use `attributes` on an object itself (e.g. `attributes(wl_dat)`), but we'll avoid doing so because the printout can be pretty lengthy.
Instead, let's just look at a couple interesting ones.
```{r attributes}
## names(attributes(x)) gives a list of all the attributes' names
names(attributes(wl_dat))
## take a look at the stimulation protocol
attr(wl_dat, "protocol_table")
## at what frequency were cyclic changes to Position performed?
attr(wl_dat, "cycle_frequency")
## at what frequency were data recorded?
attr(wl_dat, "sample_frequency")
```
## Data from files that are not of .ddf format
Data that are read from other file formats can be constructed into `muscle_stim` objects via `as_muscle_stim()`. Should you need to do this, please refer to our vignette "Importing data from non .ddf sources" for an overview.
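As a minimal sketch (the file name below is hypothetical; only the `type` and `file_id` arguments, which also appear in the other vignettes, are assumed):
```{r, eval = FALSE}
## read a plain csv containing Time, Position, Force, and Stim columns,
## then coerce it into a muscle_stim object
my_dat <- read.csv("my_twitch_trial.csv") # hypothetical file
my_twitch <- as_muscle_stim(my_dat, type = "twitch", file_id = "my_twitch_trial")
```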
## Transformations and corrections to data
Prior to analyses, data can be transformed or corrected. Transformational functions include gear ratio correction (`fix_GR()`) and position inversion (`invert_position()`). The core idea behind these two functions is to correct issues related to data acquisition.
For example, to apply a gear ratio correction of 2:
```{r transformations}
## this multiplies Force by 2
## and multiplies Position by (1/2)
wl_fixed <- fix_GR(wl_dat, GR = 2)
# quick check:
max(wl_fixed$Force)/max(wl_dat$Force) #5592.578 / 2796.289 = 2
max(wl_fixed$Position)/max(wl_dat$Position) #1.832262 / 3.664524 = 0.5
```
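The other transformation mentioned above, `invert_position()`, is applied in the same way. A brief sketch, assuming (as the name suggests) that it simply flips the sign of the Position data:
```{r, eval = FALSE}
wl_inverted <- invert_position(wl_dat)
## quick check: the Position range should be mirrored
range(wl_inverted$Position)
range(wl_dat$Position)
```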
### A particularly important transformation - `select_cycles()`
Another 'transformational' function is `select_cycles()`, which subsets cycles within a work loop experiment. This is a necessary step prior to analyses of work loop data: data are labeled by cycle for use with `analyze_workloop()`.
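As a sketch (mirroring the usage in the work loop analysis vignette), keeping the final three L0-to-L0 cycles looks like this:
```{r, eval = FALSE}
wl_selected <- select_cycles(wl_fixed, cycle_def = "lo", keep_cycles = 4:6)
summary(wl_selected)
```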
## Data analytical functions
Core analytical functions include `analyze_workloop()` for work loop files and `isometric_timing()` for twitches. `analyze_workloop()` computes instantaneous velocity, net work, instantaneous power, and net power for work loop experiments on a per-cycle basis. `isometric_timing()` provides summarization of twitch kinetics.
To see more details about these functions, please refer to "Analyzing work loop experiments in workloopR" for work loop analyses and "Working with twitch files in workloopR" for twitches.
Some functions are readily available for batch processing of files. The `read_analyze_wl_dir()` function allows for the batch import, cycle selection, gear ratio correction, and ultimately work & power computation for all work loop experiment files within a specified directory. The `get_wl_metadata()` and `summarize_wl_trials()` functions organize scanned files by recency (according to their time of last modification: 'mtime') and then report work and power output in the order that trials were run.
This ultimately allows for the `time_correct()` function to correct for degradation of the muscle (according to power & work) over time, assuming that the first and final trials are identical in experimental parameters. If these parameters are not identical, we advise against using this function.
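A sketch of that batch pipeline, assuming `time_correct()` accepts the `data.frame` returned by `summarize_wl_trials()`:
```{r, eval = FALSE}
wl_trials <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
                                             package = "workloopR"),
                                 phase_from_peak = TRUE)
wl_summary <- summarize_wl_trials(wl_trials)
wl_corrected <- time_correct(wl_summary)
wl_corrected
```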
## Thanks for reading!
Please feel free to contact either Vikram or Shree with suggestions or code development requests. We are especially interested in expanding our data import functions to accommodate file types other than .ddf in future versions of `workloopR`.
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Introduction-to-workloopR.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----package_loading, message=FALSE, warning=FALSE----------------------------
library(workloopR)
library(magrittr)
library(ggplot2)
library(purrr)
library(tidyr)
library(dplyr)
## ----data_import--------------------------------------------------------------
workloop_dat<-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(2)
## ----raw_trace----------------------------------------------------------------
# To overlay position and force, we need them to be on comparable scales
# We will then use two y-axis to make the units clear
scale_position_to_force <- 3000
workloop_dat %>%
# Set the x axis for the whole plot
ggplot(aes(x = Time)) +
# Add a line for force
geom_line(aes(y = Force, color = "Force"),
lwd = 1) +
# Add a line for Position, scaled to approximately the same range as Force
geom_line(aes(y = Position * scale_position_to_force, color = "Position")) +
# For stim, we only want to plot where stimulation happens, so we filter the data
geom_point(aes(y = 0, color = "Stim"), size = 1,
data = filter(workloop_dat, Stim == 1)) +
# Next we add the second y-axis with the corrected units
scale_y_continuous(sec.axis = sec_axis(~ . / scale_position_to_force, name = "Position (mm)")) +
# Finally set colours, labels, and themes
scale_color_manual(values = c("#FC4E2A", "#4292C6", "#373737")) +
labs(y = "Force (mN)", x = "Time (secs)", color = "Parameter:") +
ggtitle("Time course of \n work loop experiment") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
## ----annotate_cycles, warning=FALSE-------------------------------------------
# Let's calculate x and y positions to add labels for each cycle
workloop_dat<-
workloop_dat %>%
select_cycles('lo', 0:6)
label_dat<-
workloop_dat %>%
group_by(Cycle) %>%
summarize(
x = mean(Time)
) %>%
# And add another row for the incomplete cycles at the beginning
bind_rows(data.frame(
Cycle = 'a',
x = 0))
workloop_dat %>%
ggplot(aes(x = Time, y = Position, colour = Cycle)) +
geom_point(size=1) +
geom_text(aes(x, y=2.1, colour = Cycle, label = Cycle), data = label_dat) +
labs(y = "Position (mm)", x = "Time (secs)") +
ggtitle("Division of position\nby `select_cycles()`") +
theme_bw() +
theme(legend.position = "none")
## ----select_cycles------------------------------------------------------------
workloop_dat<-
workloop_dat %>%
select_cycles('p2p', 2:5)
## ----analyze_workloop---------------------------------------------------------
# Let's start with a single cycle using colour to indicate time
workloop_dat %>%
filter(Cycle == 'a') %>%
ggplot(aes(x = Position, y = Force)) +
geom_path(aes(colour = Time)) +
labs(y = "Force (mN)", x = "Position (mm)", colour = "Time (sec)") +
ggtitle("Single work loop") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
# Now let's see how the work loop changes across cycles
# We can use arrows to indicate direction through time
workloop_dat %>%
ggplot(aes(x = Position, y = Force)) +
geom_path(aes(colour = Cycle), arrow=arrow()) +
labs(y = "Force (mN)", x = "Position (mm)", colour = "Cycle index") +
ggtitle("Work loops by cycle index") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
## ----multifile----------------------------------------------------------------
multi_workloop_dat<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_ddf_dir(phase_from_peak = TRUE) %>%
map(fix_GR, 2) %>%
map(select_cycles,'p2p', 4) %>%
map(analyze_workloop)
# Summarize provides a quick way to pull out most experimental parameters, etc
multi_workloop_dat %>%
summarize_wl_trials %>%
ggplot(aes(Stimulus_Pulses, Mean_Power)) +
geom_point() +
labs(y = "Mean Power (W)", x = "Stim Duration (pulses)") +
ggtitle("Mean power over trial\nby stimulus duration") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
# Accessing the time course data requires more manipulation
multi_workloop_dat %>%
map(~ mutate(.x$cycle_a, stim_pulses = attr(.x, "stimulus_pulses"))) %>%
bind_rows %>%
ggplot(aes(Percent_of_Cycle, Inst_Power)) +
geom_path(aes(colour = as.factor(stim_pulses)))+
labs(y = "Power (W)", x = "Percent of Cycle", colour = "Stim Duration") +
ggtitle("Time course of instantaneous\npower by stimulus duration") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
## ----isometric_annotation-----------------------------------------------------
twitch_dat<-
system.file(
"extdata",
"twitch.ddf",
package = 'workloopR') %>%
read_ddf() %>%
fix_GR(2)
# We now need to reshape the single row into three columns: a label for the point,
# an x value for the label (time), and a y value (force).
# See the `tidyr` package and associated vignettes on reshaping tips
label_dat<-
twitch_dat %>%
isometric_timing(c(10,90),50) %>%
gather(label, value) %>%
filter(label != 'file_id') %>%
separate(label, c("type", "identifier"), "_", extra="merge") %>%
spread(type,value)
label_dat$time<-as.numeric(label_dat$time)
label_dat$force<-as.numeric(label_dat$force)
ggplot() +
geom_line(aes(Time, Force), data = twitch_dat) +
geom_point(aes(time, force), data = label_dat) +
geom_text(aes(time, force, label = identifier), hjust=-0.15, data = label_dat) +
labs(y = "Force (mN)", x = "Time (sec)") +
ggtitle("Force development in a twitch trial") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
## ----iso_multi----------------------------------------------------------------
multi_twitch_dat<-
system.file(
"extdata/twitch_csv",
package = 'workloopR') %>%
list.files(full.names = T) %>%
map(read.csv) %>%
map2(c("2mA","3mA","4mA","5mA"), ~as_muscle_stim(.x, type = 'twitch', file_id = .y))
# Next we want another data.frame of label data
multi_label_dat<-
multi_twitch_dat %>%
map_dfr(isometric_timing) %>%
select(file_id, ends_with("peak")) %>%
mutate(label = paste0(round(force_peak),"mV"))
# Once again we want the data in a single data.frame with a column for which trial it came from
multi_twitch_dat %>%
map_dfr(~mutate(.x, file_id = attr(.x, "file_id"))) %>%
ggplot(aes(x = Time, y = Force, colour = file_id)) +
geom_line() +
geom_text(aes(time_peak, force_peak, label = label), hjust=-0.7, data = multi_label_dat) +
labs(y = "Force (mN)", x = "Time (sec)", colour = "Stimulation Current") +
ggtitle("Force development across twitch trials") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Plotting-workloopR.R
|
---
title: "Plotting data in workloopR"
author: "Shreeram Senthivasan"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Plotting data in workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Let's take a look at plotting data stored in objects created by `workloopR`!
## Loading packages and data
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
library(purrr)
library(tidyr)
library(dplyr)
```
## Plotting `workloop` objects
### Working with single files
Let's start by visualizing the raw traces in our data files, specifically position, force, and stimulation over time.
```{r data_import}
workloop_dat<-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(2)
```
```{r raw_trace}
# To overlay position and force, we need them to be on comparable scales
# We will then use two y-axis to make the units clear
scale_position_to_force <- 3000
workloop_dat %>%
# Set the x axis for the whole plot
ggplot(aes(x = Time)) +
# Add a line for force
geom_line(aes(y = Force, color = "Force"),
lwd = 1) +
# Add a line for Position, scaled to approximately the same range as Force
geom_line(aes(y = Position * scale_position_to_force, color = "Position")) +
# For stim, we only want to plot where stimulation happens, so we filter the data
geom_point(aes(y = 0, color = "Stim"), size = 1,
data = filter(workloop_dat, Stim == 1)) +
# Next we add the second y-axis with the corrected units
scale_y_continuous(sec.axis = sec_axis(~ . / scale_position_to_force, name = "Position (mm)")) +
# Finally set colours, labels, and themes
scale_color_manual(values = c("#FC4E2A", "#4292C6", "#373737")) +
labs(y = "Force (mN)", x = "Time (secs)", color = "Parameter:") +
ggtitle("Time course of \n work loop experiment") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
Next, we would select cycles from the workloop in preparation for analysis. Before we do this, let's keep all the cycles and visualize how `select_cycles()` splits the data. Note that you can include 0 in the `keep_cycles` argument to include data that is categorized as being outside of a complete cycle. This assigns a single cycle label (a) to the data before and after the complete cycles.
```{r annotate_cycles, warning=FALSE}
# Let's calculate x and y positions to add labels for each cycle
workloop_dat<-
workloop_dat %>%
select_cycles('lo', 0:6)
label_dat<-
workloop_dat %>%
group_by(Cycle) %>%
summarize(
x = mean(Time)
) %>%
# And add another row for the incomplete cycles at the beginning
bind_rows(data.frame(
Cycle = 'a',
x = 0))
workloop_dat %>%
ggplot(aes(x = Time, y = Position, colour = Cycle)) +
geom_point(size=1) +
geom_text(aes(x, y=2.1, colour = Cycle, label = Cycle), data = label_dat) +
labs(y = "Position (mm)", x = "Time (secs)") +
ggtitle("Division of position\nby `select_cycles()`") +
theme_bw() +
theme(legend.position = "none")
```
Visualizing the cycles is highly recommended, in case noise before or after the experimental procedure is interpreted as a cycle.
Let's go ahead and use cycles 2 to 5 (labeled c-f in the previous plot). Note, however, that the cycle labels will be reassigned to a-d when we subset the data.
```{r select_cycles}
workloop_dat<-
workloop_dat %>%
select_cycles('p2p', 2:5)
```
Now let's plot some work loops!
```{r analyze_workloop}
# Let's start with a single cycle using colour to indicate time
workloop_dat %>%
filter(Cycle == 'a') %>%
ggplot(aes(x = Position, y = Force)) +
geom_path(aes(colour = Time)) +
labs(y = "Force (mN)", x = "Position (mm)", colour = "Time (sec)") +
ggtitle("Single work loop") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
# Now let's see how the work loop changes across cycles
# We can use arrows to indicate direction through time
workloop_dat %>%
ggplot(aes(x = Position, y = Force)) +
geom_path(aes(colour = Cycle), arrow=arrow()) +
labs(y = "Force (mN)", x = "Position (mm)", colour = "Cycle index") +
ggtitle("Work loops by cycle index") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
### Working with multiple files
Working with multiple files is a little trickier, as the data are stored in separate `data.frame`s organized into a list. The easiest way to deal with this is to add a column specifying the file id and concatenate the data together. Refer to the "Batch processing" vignette for more information on working with multiple files.
```{r multifile}
multi_workloop_dat<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_ddf_dir(phase_from_peak = TRUE) %>%
map(fix_GR, 2) %>%
map(select_cycles,'p2p', 4) %>%
map(analyze_workloop)
# Summarize provides a quick way to pull out most experimental parameters, etc
multi_workloop_dat %>%
summarize_wl_trials %>%
ggplot(aes(Stimulus_Pulses, Mean_Power)) +
geom_point() +
labs(y = "Mean Power (W)", x = "Stim Duration (pulses)") +
ggtitle("Mean power over trial\nby stimulus duration") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
# Accessing the time course data requires more manipulation
multi_workloop_dat %>%
map(~ mutate(.x$cycle_a, stim_pulses = attr(.x, "stimulus_pulses"))) %>%
bind_rows %>%
ggplot(aes(Percent_of_Cycle, Inst_Power)) +
geom_path(aes(colour = as.factor(stim_pulses)))+
labs(y = "Power (W)", x = "Percent of Cycle", colour = "Stim Duration") +
ggtitle("Time course of instantaneous\npower by stimulus duration") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
## Plotting isometric objects
### Working with single files
One useful visualization with isometric data is annotating peak force and other timing points. With a single file and multiple set points, some manipulation is useful to make annotating a little cleaner.
```{r isometric_annotation}
twitch_dat<-
system.file(
"extdata",
"twitch.ddf",
package = 'workloopR') %>%
read_ddf() %>%
fix_GR(2)
# We now need to reshape the single row into three columns: a label for the point,
# an x value for the label (time), and a y value (force).
# See the `tidyr` package and associated vignettes on reshaping tips
label_dat<-
twitch_dat %>%
isometric_timing(c(10,90),50) %>%
gather(label, value) %>%
filter(label != 'file_id') %>%
separate(label, c("type", "identifier"), "_", extra="merge") %>%
spread(type,value)
label_dat$time<-as.numeric(label_dat$time)
label_dat$force<-as.numeric(label_dat$force)
ggplot() +
geom_line(aes(Time, Force), data = twitch_dat) +
geom_point(aes(time, force), data = label_dat) +
geom_text(aes(time, force, label = identifier), hjust=-0.15, data = label_dat) +
labs(y = "Force (mN)", x = "Time (sec)") +
ggtitle("Force development in a twitch trial") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
### Working with multiple files
We can also overlay data from multiple isometric trials to see how force evolves across trials. Please see the "Batch processing" vignette for more details on how to work with multiple files.
```{r iso_multi}
multi_twitch_dat<-
system.file(
"extdata/twitch_csv",
package = 'workloopR') %>%
list.files(full.names = T) %>%
map(read.csv) %>%
map2(c("2mA","3mA","4mA","5mA"), ~as_muscle_stim(.x, type = 'twitch', file_id = .y))
# Next we want another data.frame of label data
multi_label_dat<-
multi_twitch_dat %>%
map_dfr(isometric_timing) %>%
select(file_id, ends_with("peak")) %>%
mutate(label = paste0(round(force_peak),"mV"))
# Once again we want the data in a single data.frame with a column for which trial it came from
multi_twitch_dat %>%
map_dfr(~mutate(.x, file_id = attr(.x, "file_id"))) %>%
ggplot(aes(x = Time, y = Force, colour = file_id)) +
geom_line() +
geom_text(aes(time_peak, force_peak, label = label), hjust=-0.7, data = multi_label_dat) +
labs(y = "Force (mN)", x = "Time (sec)", colour = "Stimulation Current") +
ggtitle("Force development across twitch trials") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
Please note that these twitch trials have differing values of initial force, so actual force developments are not identical to peak forces.
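If the actual force development (peak force minus force at stimulation) is of interest, it can be pulled straight from the `isometric_timing()` output. A sketch, using the `force_stim` and `force_peak` columns described in the isometric vignette:
```{r, eval = FALSE}
multi_twitch_dat %>%
  map_dfr(isometric_timing) %>%
  mutate(force_development = force_peak - force_stim) %>%
  select(file_id, force_stim, force_peak, force_development)
```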
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/Plotting-workloopR.Rmd
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----package_loading, message=FALSE, warning=FALSE----------------------------
library(workloopR)
library(magrittr)
library(purrr)
## -----------------------------------------------------------------------------
workloop_trials_list<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_ddf_dir(phase_from_peak = TRUE)
workloop_trials_list[1:2]
## -----------------------------------------------------------------------------
analyzed_wl_list<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_analyze_wl_dir(sort_by = 'file_id',
phase_from_peak = TRUE,
cycle_def = 'lo',
keep_cycles = 3)
analyzed_wl_list[1:2]
## -----------------------------------------------------------------------------
analyzed_wl_list %>%
summarize_wl_trials
## -----------------------------------------------------------------------------
non_ddf_list<-
# Generate a vector of file names
system.file(
"extdata/twitch_csv",
package = 'workloopR') %>%
list.files(full.names = T) %>%
# Read into a list of data.frames
map(read.csv) %>%
# Coerce into a workloop object
map(as_muscle_stim, type = "twitch")
## -----------------------------------------------------------------------------
non_ddf_list<-
non_ddf_list %>%
map(~{
attr(.x,"stimulus_width")<-0.2
attr(.x,"stimulus_offset")<-0.1
return(.x)
}) %>%
map(fix_GR,2)
## -----------------------------------------------------------------------------
file_ids<-paste0("0",1:4,"-",2:5,"mA-twitch.csv")
non_ddf_list<-
non_ddf_list %>%
map2(file_ids, ~{
attr(.x,"file_id")<-.y
return(.x)
})
non_ddf_list
## -----------------------------------------------------------------------------
non_ddf_list %>%
map_dfr(isometric_timing)
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/batch-processing.R
|
---
title: "Batch processing"
author: "Shreeram Senthivasan"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Batch processing}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Many of the functions in the `workloopR` package are built to facilitate batch processing of workloop and related data files. This vignette will start with an overview of how the functions were intended to be used for batch processing and then provide specific examples.
## Conceptual overview
We generally expect a single file to store data from a single experimental trial, whereas directories hold data from all the trials of a single experiment. Accordingly, the `muscle_stim` objects created and used by most of the `workloopR` functions are intended to hold data from a single trial of a workloop or related experiment. Lists are then used to package together trials from a single experiment. This also lends itself to using recursion to transform and analyze all data from a single experiment.
In broad strokes, there are three ways that batch processing has been worked into `workloopR` functions. First, some functions like the `*_dir()` family of import functions and `summarize_wl_trials()` specifically generate or require lists of `muscle_stim` objects. Second, the first argument of all other functions is the object being manipulated, which helps clean up recursion using the `purrr::map()` family of functions. Finally, some functions return summarized data as single rows of a data.frame that can easily be bound together to generate a summary table.
## Load packages and data
This vignette will rely heavily on the `purrr::map()` family of functions for recursion, though it should be mentioned that the `base::apply()` family of functions would work as well (a quick comparison follows the setup chunk below).
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(purrr)
```
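As a quick illustration of that equivalence, the recursion used throughout this vignette could just as easily be written with `base::lapply()`. A minimal sketch on a toy list, not specific to `workloopR` objects:
```{r map_vs_lapply}
## Both apply a function to each element of a list and return a list,
## so either style works for the recursion shown in this vignette
identical(
  map(list(1, 2, 3), sqrt),
  lapply(list(1, 2, 3), sqrt)
)
```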
## Necessarily-multi-trial functions
### `*_dir()` functions
Both `read_ddf()` and `read_analyze_wl()` have alternatives suffixed by `_dir()` to read in multiple files from a directory. Both take a path to the directory and an optional regular expression to filter files by, and they return a list of `muscle_stim` objects or `analyzed_workloop` objects, respectively (a filtering example follows the basic call below).
```{r}
workloop_trials_list<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_ddf_dir(phase_from_peak = TRUE)
workloop_trials_list[1:2]
```
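If the directory mixes file types, the filtering mentioned above restricts which files are read. A sketch in which the `pattern` argument name is an assumption; check `?read_ddf_dir` for the exact interface:
```{r}
## Only read files whose names end in ".ddf"
## (argument name assumed -- see ?read_ddf_dir)
ddf_only <-
  system.file(
    "extdata/wl_duration_trials",
    package = 'workloopR') %>%
  read_ddf_dir(pattern = "\\.ddf$",
               phase_from_peak = TRUE)
```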
The `sort_by` argument can be used to rearrange this list by any attribute of the read-in objects. By default, the objects are sorted by their modification time. Other arguments of `read_ddf()` and `read_analyze_wl()` can also be passed to their `*_dir()` alternatives as named arguments.
```{r}
analyzed_wl_list<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_analyze_wl_dir(sort_by = 'file_id',
phase_from_peak = TRUE,
cycle_def = 'lo',
keep_cycles = 3)
analyzed_wl_list[1:2]
```
### Summarizing workloop trials
In a series of workloop trials, it can be useful to see how mean power and work change as you vary different experimental parameters. To facilitate this, `summarize_wl_trials()` specifically takes a list of `analyzed_workloop` objects and returns a `data.frame` of this information. We will explore ways of generating lists of analyzed workloops without using `read_analyze_wl_dir()` in the following section.
```{r}
analyzed_wl_list %>%
summarize_wl_trials
```
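Because the result is an ordinary `data.frame`, it drops straight into plotting or modelling. A minimal sketch in which the `Mean_Power` column name is assumed from the summary printed above; substitute whichever column of your own output you want to inspect:
```{r}
## Plot mean power across trials
## (column name assumed from the printed summary -- adjust if yours differs)
wl_summary <- summarize_wl_trials(analyzed_wl_list)
plot(wl_summary$Mean_Power,
     xlab = "Trial", ylab = "Mean power (W)",
     type = "b", pch = 16)
```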
## Manual recursion examples
### Batch import for non-ddf data
One of the more realistic use cases for manual recursion is importing data from multiple related trials that are not stored in ddf format. As with importing individual non-ddf data sources, we start by reading the data into a data.frame, only now we want a list of data.frames. In this example, we will read in csv files and stitch them into a list using `purrr::map()`.
```{r}
non_ddf_list<-
# Generate a vector of file names
system.file(
"extdata/twitch_csv",
package = 'workloopR') %>%
  list.files(full.names = TRUE) %>%
# Read into a list of data.frames
map(read.csv) %>%
# Coerce into a workloop object
map(as_muscle_stim, type = "twitch")
```
### Data transformation and analysis
Applying a constant transformation to a list of `muscle_stim` objects is fairly straightforward using `purrr::map()`.
```{r}
non_ddf_list<-
non_ddf_list %>%
map(~{
attr(.x,"stimulus_width")<-0.2
attr(.x,"stimulus_offset")<-0.1
return(.x)
}) %>%
map(fix_GR,2)
```
Applying a non-constant transformation like setting a unique file ID can be done using `purrr::map2()`.
```{r}
file_ids<-paste0("0",1:4,"-",2:5,"mA-twitch.csv")
non_ddf_list<-
non_ddf_list %>%
map2(file_ids, ~{
attr(.x,"file_id")<-.y
return(.x)
})
non_ddf_list
```
Analysis can similarly be run recursively. `isometric_timing()` in particular returns a single row of a data.frame with timings and forces for key points in an isometric dataset. Here we can use `purrr::map_dfr()` to bind the rows together for neatness.
```{r}
non_ddf_list %>%
map_dfr(isometric_timing)
```
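Finally, to close the loop on generating analyzed workloops without `read_analyze_wl_dir()`: the ddf trials imported earlier with `read_ddf_dir()` can be cycle-selected and analyzed recursively, then summarized. A sketch in which the cycle-selection settings simply mirror the earlier `read_analyze_wl_dir()` call and are assumptions about what suits your data, not requirements:
```{r}
## Recursively select cycles and analyze the trials imported earlier,
## then summarize -- equivalent in spirit to read_analyze_wl_dir()
## (cycle_def and keep_cycles mirror the earlier call; adjust as needed)
workloop_trials_list %>%
  map(select_cycles, cycle_def = "lo", keep_cycles = 3) %>%
  map(analyze_workloop) %>%
  summarize_wl_trials()
```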
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/batch-processing.Rmd
|
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----package_loading, message=FALSE, warning=FALSE----------------------------
library(workloopR)
library(magrittr)
library(ggplot2)
## ----get_data-----------------------------------------------------------------
## Load in the work loop example data from workloopR
workloop_dat <-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(GR = 2)
## First we'll extract Time
Time <- workloop_dat$Time
## Now Position
Position <- workloop_dat$Position
## Force
Force <- workloop_dat$Force
## Stimulation
Stim <- workloop_dat$Stim
## Put it all together as a data.frame
my_data <- data.frame(Time = Time,
Position = Position,
Force = Force,
Stim = Stim)
head(my_data)
## ----as_mus_basic-------------------------------------------------------------
## Put it together
my_muscle_stim <- as_muscle_stim(x = my_data,
type = "workloop",
sample_frequency = 10000)
## Data are stored in columns and basically behave as data.frames
head(my_muscle_stim)
ggplot(my_muscle_stim, aes(x = Time, y = Position)) +
geom_line() +
labs(y = "Position (mm)", x = "Time (secs)") +
ggtitle("Time course of length change") +
theme_bw()
## ----attributes---------------------------------------------------------------
str(attributes(my_muscle_stim))
## ----add_file_id--------------------------------------------------------------
## This time, add the file's name via "file_id"
my_muscle_stim <- as_muscle_stim(x = my_data,
type = "workloop",
sample_frequency = 10000,
file_id = "workloop123")
## For simplicity, we'll just target the file_id attribute directly instead of
## printing all attributes again
attr(my_muscle_stim, "file_id")
## -----------------------------------------------------------------------------
names(attributes(workloop_dat))
## -----------------------------------------------------------------------------
str(attributes(workloop_dat))
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/non-ddf-sources.R
|
---
title: "Importing data from non .ddf sources"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Importing data from non .ddf sources}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
`workloopR`'s data import functions, such as `read_ddf()`, are generally geared towards importing data from .ddf files (e.g. those generated by Aurora Scientific's Dynamic Muscle Control and Analysis Software).
Should your data be stored in another file format, you can use the `as_muscle_stim()` function to generate your own `muscle_stim` objects. These `muscle_stim` objects are used by nearly all other `workloopR` functions and are formatted in a very specific way. This helps ensure that other functions can interpret data & metadata correctly and also perform internal checks.
## Load packages
Before running through anything, we'll ensure we have the packages we need.
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
```
## Data
Because it is somewhat difficult to simulate muscle physiology data, we'll use one of our workloop files, deconstruct it, and then re-assemble the data via `as_muscle_stim()`.
```{r get_data}
## Load in the work loop example data from workloopR
workloop_dat <-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(GR = 2)
## First we'll extract Time
Time <- workloop_dat$Time
## Now Position
Position <- workloop_dat$Position
## Force
Force <- workloop_dat$Force
## Stimulation
Stim <- workloop_dat$Stim
## Put it all together as a data.frame
my_data <- data.frame(Time = Time,
Position = Position,
Force = Force,
Stim = Stim)
head(my_data)
```
## Assemble via `as_muscle_stim()`
It is absolutely crucial that the columns be named "Time", "Position", "Force", and "Stim" (all case-sensitive). Otherwise, `as_muscle_stim()` will not interpret data correctly.
At minimum, this `data.frame`, the type of experiment, and the frequency at which data were recorded (`sample_frequency`, as a numeric) are necessary for `as_muscle_stim()`.
```{r as_mus_basic}
## Put it together
my_muscle_stim <- as_muscle_stim(x = my_data,
type = "workloop",
sample_frequency = 10000)
## Data are stored in columns and basically behave as data.frames
head(my_muscle_stim)
ggplot(my_muscle_stim, aes(x = Time, y = Position)) +
geom_line() +
labs(y = "Position (mm)", x = "Time (secs)") +
ggtitle("Time course of length change") +
theme_bw()
```
### Attributes
By default, a couple of attributes are auto-filled based on the available information, but the result is pretty bare-bones.
```{r attributes}
str(attributes(my_muscle_stim))
```
We highly encourage you to add in as many of these details as possible by passing them in via the `...` argument. For example:
```{r add_file_id}
## This time, add the file's name via "file_id"
my_muscle_stim <- as_muscle_stim(x = my_data,
type = "workloop",
sample_frequency = 10000,
file_id = "workloop123")
## For simplicity, we'll just target the file_id attribute directly instead of
## printing all attributes again
attr(my_muscle_stim, "file_id")
```
### Possible attributes
Here is a list of all possible attributes that can be filled.
```{r}
names(attributes(workloop_dat))
```
To see how each should be formatted (e.g., which ones take numeric values vs. character vectors), inspect the structure of the attributes:
```{r}
str(attributes(workloop_dat))
```
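Putting this together, several of these attributes can be supplied in a single `as_muscle_stim()` call via `...`. A sketch with purely illustrative values; the attribute names are assumed to match those printed above, so substitute whichever apply to your experiment:
```{r}
## Attach several metadata attributes at once
## (names assumed from the list printed above; values are illustrative only)
my_muscle_stim <- as_muscle_stim(x = my_data,
                                 type = "workloop",
                                 sample_frequency = 10000,
                                 file_id = "workloop123",
                                 cycle_frequency = 28,
                                 amplitude = 3.15,
                                 phase = -0.25)
str(attributes(my_muscle_stim))
```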
## Thanks for reading!
Please feel free to contact either Vikram or Shree with suggestions or code development requests. We are especially interested in expanding our data import functions to accommodate file types other than .ddf in future versions of `workloopR`.
|
/scratch/gouwar.j/cran-all/cranData/workloopR/inst/doc/non-ddf-sources.Rmd
|