---
title: "Creating vitae templates"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Creating vitae templates}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Résumé/CV templates are widely available in many themes and layouts. The `vitae` package provides a few of the more popular templates, which should be suitable for most résumés. The included templates are far from comprehensive: your favourite template may not be included, or perhaps you have created your own. This vignette explains how you can use your own LaTeX CV template with the package.
## Creating a vitae template
Extending `vitae` to support new templates involves a similar process to creating new `rmarkdown` document templates. An extended explanation for creating `rmarkdown` templates can be found in the [*Document Templates* chapter](https://bookdown.org/yihui/rmarkdown/document-templates.html) in ["R Markdown: The Definitive Guide"](https://bookdown.org/yihui/rmarkdown/).
Creating a template for vitae can be broken into three parts:
- Converting a CV into a Pandoc template
- Adding LaTeX macros for displaying CV entries
- Using the template with `rmarkdown`
### Converting a CV into a Pandoc template
Most elements that are included in the YAML header of an `rmarkdown` document are passed to your template via Pandoc variables. Pandoc variables can be included in your template file by surrounding the variable with `$`. These can be used to fill in basic CV details such as your name, occupation, and social links.
For example, suppose your document contains this YAML header:
```
name: "Mitchell O'Hara-Wild"
position: "vitae maintainer"
output: vitae::awesomecv
```
The `$name$` variable in the template would be substituted with `Mitchell O'Hara-Wild`, and similarly, `$position$` would become `vitae maintainer`. Many templates won't follow this exact structure (some may split the name into first and last names), but most of the time there is a reasonable place for these variables. It is recommended that a consistent set of variables is used, to make switching between templates easy.
The variables currently used in the `vitae` templates are:
- name
- position
- address
- date
- profilepic
- www
- email
- twitter
- github
- linkedin
- aboutme
- headcolor
In the [moderncv template](https://github.com/xdanaux/moderncv), the position of 'vitae maintainer' is specified using `\position{vitae maintainer}`. Using Pandoc variables, this would be replaced with `\position{$position$}`, which allows the position to be defined in the `rmarkdown` document's YAML header.
However, if a `position` has not been provided in the YAML header, this would leave us with `\position{}` (which might be acceptable for some templates, but is undesirable for most). To resolve this, we can ask Pandoc to include the command conditionally with `$if(position)$\position{$position$}$endif$`.
The main content of an `rmarkdown` document is also included using Pandoc variables. The rendered result of the document's main section is stored in `$body$`. So in a typical LaTeX CV template, where there are usually pre-filled details about experience and employment, all of that content can be replaced with `$body$`. There are a few other common variables to place within the template, typically in the same locations as in the other template files. These variables include:
- body
- header-includes
- fontfamily
- fontfamilyoptions
- fontsize
- lang
- papersize
- classoption
- linestretch
- include-before
- include-after
- highlighting-macros
Placement of these variables can be found by looking at other template files provided in the package. The conversion of the moderncv template into a Pandoc template for `vitae` can be found on [GitHub](https://github.com/mitchelloharawild/vitae/blob/master/inst/rmarkdown/templates/moderncv/resources/moderncv.tex).
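Putting these pieces together, the top of a converted template might look something like this (a minimal sketch: the LaTeX commands shown are illustrative of moderncv-style templates, not copied from any one file):

```
% ...document class and package setup from the original template...
$if(name)$\name{$name$}$endif$
$if(position)$\position{$position$}$endif$
$if(address)$\address{$address$}$endif$

\begin{document}
% The rendered body of the R Markdown document is inserted here
$body$
\end{document}
```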
### Adding template specific code for displaying CV entries
The interface for producing entries in a CV varies greatly between templates. To support these various formats, template specific R functions are used to convert the `vitae` format of *what, when, with, where, and why* to output suitable for each template. These functions are specified using the `set_entry_formats()` function, which accepts output from `new_entry_formats()`.
The moderncv template provides many different layouts, of which I have selected the two that best suit `brief_entries` and `detailed_entries`.
#### brief_entries
The moderncv template's `\cvitem` command generates an appropriate layout for brief entries. It expects input in this format:
```
\cvitem{Title}{Description}
```
An appropriate function for creating these items could be:
```r
moderncv_cv_entries <- new_entry_formats(
brief = function(what, when, with){
paste0(
"\t\\cvitem{", when, "}{", what, ". ", with, "}",
collapse = "\n"
)
},
detailed = ... # See below
)
```
Note that `what`, `when` and `with` may contain more than one entry. This
function combines each `\cvitem{}` into a single string by separating each entry
with a new line.
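To see what this produces, the formatting logic can be run on its own (a standalone sketch with entries borrowed from the package's skeleton documents; within the package, this function is called for you by `brief_entries()`):

```r
brief_format <- function(what, when, with) {
  paste0("\t\\cvitem{", when, "}{", what, ". ", with, "}", collapse = "\n")
}
cat(brief_format(
  what = c("Master of Physics", "Master of Mathematics"),
  when = c("1893", "1894"),
  with = c("Sorbonne Université", "Sorbonne Université")
))
#> 	\cvitem{1893}{Master of Physics. Sorbonne Université}
#> 	\cvitem{1894}{Master of Mathematics. Sorbonne Université}
```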
#### detailed_entries
For detailed CV entries, the moderncv `\cventry` command is appropriate. It expects inputs in this format:
```
\cventry{Year}{Degree}{Institution}{City}{Grade}{Description}
```
A function that can produce these entries is as follows:
```r
moderncv_cv_entries <- new_entry_formats(
brief = ...,
detailed = function(what, when, with, where, why){
# Combine why inputs into a bullet list
why <- lapply(why, function(x) {
if(length(x) == 0) return("\\empty")
paste(c(
"\\begin{itemize}%",
paste0("\\item ", x, "%"),
"\\end{itemize}"
), collapse = "\n")
})
# Combine inputs into \cventry{} output
paste0(
paste0("\t\\cventry{", when, "}{", what, "}{", with, "}{", where, "}{}{", why, "}"),
collapse = "\n"
)
}
)
```
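For two example rows (again borrowing from the skeleton documents), the detailed formatter would emit lines like the following. Note the empty fifth argument (the unused Grade field) and the `\empty` placeholder produced when no `why` entries are supplied:

```
	\cventry{1893}{Master of Physics}{Sorbonne Université}{Paris, France}{}{\empty}
	\cventry{1894}{Master of Mathematics}{Sorbonne Université}{Paris, France}{}{\empty}
```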
### Using the template with rmarkdown
Once the Pandoc variables and LaTeX CV entry macros are set in the template, it is ready for use with the `vitae` package. The package provides the `cv_document` output format, which is suitable for use with custom templates. To use the custom template, your `rmarkdown` document's YAML header would look like this:
```
output:
vitae::cv_document:
template: my_custom_template.tex
```
You will also need to copy all of the LaTeX class (`.cls`) and style (`.sty`) files provided with the template into the same folder as your `rmarkdown` document. Once that is done, your new template should be ready to use with the `vitae` package.
## Contributing to the vitae package
If you've gone to the effort of successfully creating a new template with the `vitae` package, you may be interested in making it available for others to use. You can contribute to this package by submitting a pull request that adds your template to the package.
Adding your template to the package can be done with:
```{r add-template, eval = FALSE}
usethis::use_rmarkdown_template(
template_name = "Curriculum Vitae (My custom format)",
template_dir = "my_template",
template_description = "The custom vitae template made by me!",
template_create_dir = TRUE)
```
Then by navigating to the package's `inst/rmarkdown/templates/my_template` folder, you can add your Pandoc template to the `resources` folder, and your `.cls` and `.sty` files to the `skeleton` folder.
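At this point, the template's directory inside the package might be laid out as follows (a sketch; the `my_template.tex`, `.cls`, and `.sty` names follow the examples above):

```
inst/rmarkdown/templates/my_template/
├── template.yaml
├── resources/
│   └── my_template.tex
└── skeleton/
    ├── skeleton.Rmd
    ├── my_template.cls
    └── my_template.sty
```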
Once that is done, we can create a new `rmarkdown` output format that uses your template. These are added to the "R/formats.R" file, and will usually follow the same structure as the other templates. The `template` argument to `cv_document` is the path to your Pandoc template within the installed package (accessed using `system.file`), and it is recommended that the supporting `.cls` and `.sty` files are copied alongside the document using `copy_supporting_files`.
```{r}
#' @rdname cv_formats
#' @export
my_template <- function(...) {
template <- system.file("rmarkdown", "templates", "my_template",
"resources", "my_template.tex", package="vitae")
set_entry_formats(moderncv_cv_entries)
copy_supporting_files("my_template")
cv_document(..., template = template)
}
```
The automatically generated `skeleton.Rmd` document in the `skeleton` folder should be modified to be a basic example of using your template. Examples of this file can be found in other templates, and this template file can act as a useful test for your template!
All done! You should now be able to install your new version of the package with `devtools::install()`, and test out your new output format with:
```
output:
vitae::my_template
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/doc/extending.Rmd
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
/scratch/gouwar.j/cran-all/cranData/vitae/inst/doc/vitae.R
---
title: "Introduction to vitae"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to vitae}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The vitae package makes creating and maintaining a Résumé or CV with R Markdown simple. It provides a collection of LaTeX templates, with helpful functions to add content to the documents. These functions allow you to dynamically include CV entries from any data source, which is particularly useful when this data is obtained/prepared by other R packages. Some examples of what this allows you to do include:
- Automatically get your work experience from the web
- List the R packages you have contributed to
- Filter CV entries by keywords relevant to the current job
- Include your academic publications
## Creating a CV
If using RStudio, a new CV document can easily be produced from one of the templates provided in the package. This uses the RStudio R Markdown template selector, accessible via `File` > `New File` > `R Markdown...`, and lastly `From Template`. This will show a list of R Markdown templates provided by all installed packages, and you should be able to find some templates from the vitae package to use.
If not using RStudio, you can create a new `*.Rmd` document and use an output format that is provided by the package. A list of output formats provided by the package can be found on the package website: https://pkg.mitchelloharawild.com/vitae/#templates. An example of a YAML header that uses one of these output formats is shown below.
Like other R Markdown documents, the file is split into two sections: the YAML header, and the main body.
### The YAML header
The YAML header contains general entries that are common across many templates, such as your name, address and social media profiles. This is also where the output format (the CV template used) is specified, along with any options that the template supports. An example of what this header may look like is shown below:
`r htmltools::pre(htmltools::code(paste0(readLines("sample.txt")[1:11],collapse = "\n")))`
You can also see that the output is set to `vitae::awesomecv`, which indicates that this CV uses the [Awesome CV](https://github.com/posquit0/Awesome-CV) template. Some of the templates allow you to choose a theme or change other options. These options can be found in the help file for each output format (say `?moderncv`). For example, the `moderncv` template allows for the selection of one of five themes: "casual", "classic", "oldstyle", "banking" or "fancy".
To change the default options of an output format, you can modify the YAML as follows:
````yaml
# Choose a theme and pass other arguments
output:
vitae::moderncv:
theme: banking
````
Currently, the YAML header allows these fields to be specified:
- `name`: Your name
- `surname`: Your family or last name
- `position`: Your current workplace title or field
- `address`: Your address
- `date`: The current date
- `profilepic`: A local file path to an image
- `www`: Your website address
- `email`: Your email address
- `twitter`: Your twitter handle
- `github`: Your GitHub handle
- `linkedin`: Your LinkedIn username
- `aboutme`: A short description that is included in a template specific location
- `headcolor`: A featured colour for the template
- `docname`: Control the document name (typically curriculum vitae, or résumé)
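For example, a header filling in many of these fields (adapted from the skeleton templates shipped with the package) might look like:

````yaml
name: Marie
surname: Curie
position: "Professor"
address: "School of Physics & Chemistry, École Normale Supérieure"
profilepic: mariecurie.jpg
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
aboutme: "Marie is a Polish and naturalized-French physicist and chemist who conducts pioneering research on radioactivity."
output: vitae::twentyseconds
````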
### The document body
Like other R Markdown documents, the body allows you to mix R code with markdown to create the main content for your document. Below is an example of the start for a typical CV:
`r htmltools::pre(htmltools::code(paste0(readLines("sample.txt")[14:23],collapse = "\n")))`
The setup chunk is useful to load in any packages that you may use, and also prevent R code and warnings/notes from appearing in your CV. The above code also includes a professional summary using markdown syntax, which will appear in the final CV.
Unlike other R Markdown formats, the vitae package and its templates support functions to generate CV entries from data: `detailed_entries()` and `brief_entries()`. Both functions provide inputs for `what`, `when`, and `with`, and `detailed_entries()` additionally supports `where` and `why`. They use an interface similar to [dplyr](https://CRAN.R-project.org/package=dplyr), in that the data can be piped (`%>%`) into these functions, and the arguments can involve some calculations.
#### Detailed entries
Let's add some education history to the main body. I'm creating a dataset in R to do this, although you could read the data in from Excel, or, if you have it documented somewhere online like [ORCID](https://orcid.org), you can dynamically access it via their API.
`r htmltools::pre(htmltools::code(paste0(readLines("sample.txt")[25:40],collapse = "\n")))`
In the example above, the [glue](https://CRAN.R-project.org/package=glue) package has been used to combine the start and end years for our `when` input. Excluding any arguments is also okay (as is done for `why` here); they will just be left blank in the CV.
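As a self-contained illustration (mirroring the education example used in the package's skeleton templates):

```r
library(vitae)
library(tibble)
tribble(
  ~ Degree, ~ Year, ~ Institution, ~ Where,
  "Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
  "Master of Physics", "1893", "Sorbonne Université", "Paris, France",
  "Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
  detailed_entries(Degree, Year, Institution, Where)
```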
#### Brief entries
Brief entries can be included with the same interface as `detailed_entries()`, and are appropriate for entries that do not need as much detail (such as skills). Another application of this is to include a list of R packages that you have published to CRAN, obtained using the [pkgsearch](https://CRAN.R-project.org/package=pkgsearch) package.
`r htmltools::pre(htmltools::code(paste0(readLines("sample.txt")[42:53],collapse = "\n")))`
This example also uses several other packages to prepare the data:
- [dplyr](https://CRAN.R-project.org/package=dplyr) to `filter()` my contributed packages, and `arrange()` the data by downloads
- [purrr](https://CRAN.R-project.org/package=purrr) to map over the `package_data` column to find packages I've contributed to
- [lubridate](https://CRAN.R-project.org/package=lubridate) to display only the year from the `date` column
#### Bibliography entries
The package also supports bibliography entries from a `*.bib` file using the `bibliography_entries()` function. This outputs the contents of a bibliography using a citation style, and is suitable for CVs containing publications.
`r htmltools::pre(htmltools::code(paste0(readLines("sample.txt")[55:57],collapse = "\n")))`
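A minimal, self-contained example (as used in the package's skeleton templates, which generate the `.bib` file on the fly with `knitr::write_bib()`):

```r
library(vitae)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib")
```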
/scratch/gouwar.j/cran-all/cranData/vitae/inst/doc/vitae.Rmd
---
name: Marie
surname: Curie
position: "Professor"
pronouns: she/her
address: "School of Physics & Chemistry, École Normale Supérieure"
phone: +1 22 3333 4444
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
output:
vitae::awesomecv:
page_total: true
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE)
library(vitae)
```
# Some stuff about me
* I poisoned myself doing research.
* I was the first woman to win a Nobel prize.
* I was the first person and only woman to win a Nobel prize in two different sciences.
# Education
```{r}
library(tibble)
tribble(
~ Degree, ~ Year, ~ Institution, ~ Where,
"Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
"Master of Physics", "1893", "Sorbonne Université", "Paris, France",
"Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
detailed_entries(Degree, Year, Institution, Where)
```
# Nobel Prizes
```{r}
tribble(
~Year, ~Type, ~Desc,
1903, "Physics", "Awarded for her work on radioactivity with Pierre Curie and Henri Becquerel",
1911, "Chemistry", "Awarded for the discovery of radium and polonium"
) %>%
brief_entries(
glue::glue("Nobel Prize in {Type}"),
Year,
Desc
)
```
# Publications
```{r}
library(dplyr)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib") %>%
arrange(desc(author$family), issued)
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/rmarkdown/templates/awesomecv/skeleton/skeleton.Rmd
---
name: Marie
surname: Curie
position: "Professor"
address: "School of Physics & Chemistry, École Normale Supérieure"
pronouns: she/her
phone: +1 22 3333 4444
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
output: vitae::hyndman
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE)
library(vitae)
```
# Some stuff about me
* I poisoned myself doing research.
* I was the first woman to win a Nobel prize.
* I was the first person and only woman to win a Nobel prize in two different sciences.
# Education
```{r}
library(tibble)
tribble(
~ Degree, ~ Year, ~ Institution, ~ Where,
"Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
"Master of Physics", "1893", "Sorbonne Université", "Paris, France",
"Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
detailed_entries(Degree, Year, Institution, Where)
```
# Nobel Prizes
```{r}
tribble(
~Year, ~Type, ~Desc,
1903, "Physics", "Awarded for her work on radioactivity with Pierre Curie and Henri Becquerel",
1911, "Chemistry", "Awarded for the discovery of radium and polonium"
) %>%
brief_entries(
glue::glue("Nobel Prize in {Type}"),
Year,
Desc
)
```
# Publications
```{r}
library(dplyr)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib") %>%
arrange(desc(author$family), issued)
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/rmarkdown/templates/hyndman/skeleton/skeleton.Rmd
---
name: Marie
surname: Curie
position: "Professor"
address: "School of Physics & Chemistry, École Normale Supérieure"
pronouns: she/her
phone: +1 22 3333 4444
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
output:
vitae::latexcv:
theme: classic
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE)
library(vitae)
```
# Some stuff about me
* I poisoned myself doing research.
* I was the first woman to win a Nobel prize.
* I was the first person and only woman to win a Nobel prize in two different sciences.
# Education
```{r}
library(tibble)
tribble(
~ Degree, ~ Year, ~ Institution, ~ Where,
"Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
"Master of Physics", "1893", "Sorbonne Université", "Paris, France",
"Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
detailed_entries(Degree, Year, Institution, Where)
```
# Nobel Prizes
```{r}
tribble(
~Year, ~Type, ~Desc,
1903, "Physics", "Awarded for her work on radioactivity with Pierre Curie and Henri Becquerel",
1911, "Chemistry", "Awarded for the discovery of radium and polonium"
) %>%
brief_entries(
glue::glue("Nobel Prize in {Type}"),
Year,
Desc
)
```
# Publications
```{r}
library(dplyr)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib") %>%
arrange(desc(author$family), issued)
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/rmarkdown/templates/latexcv/skeleton/skeleton.Rmd
---
title: CV
name: Marie
surname: Curie
position: "Professor"
address: "School of Physics & Chemistry, École Normale Supérieure"
phone: +1 22 3333 4444
pronouns: she/her
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
aboutme: "Marie is a Polish and naturalized-French physicist and chemist who conducts pioneering research on radioactivity."
output:
vitae::markdowncv:
theme: kjhealy
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE)
library(vitae)
```
## Some stuff about me
* I poisoned myself doing research.
* I was the first woman to win a Nobel prize.
* I was the first person and only woman to win a Nobel prize in two different sciences.
## Education
```{r}
library(tibble)
tribble(
~ Degree, ~ Year, ~ Institution, ~ Where,
"Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
"Master of Physics", "1893", "Sorbonne Université", "Paris, France",
"Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
detailed_entries(Degree, Year, Institution, Where)
```
## Nobel Prizes
```{r}
tribble(
~Year, ~Type, ~Desc,
1903, "Physics", "Awarded for her work on radioactivity with Pierre Curie and Henri Becquerel",
1911, "Chemistry", "Awarded for the discovery of radium and polonium"
) %>%
brief_entries(
glue::glue("Nobel Prize in {Type}"),
Year,
Desc
)
```
## Publications
```{r}
library(dplyr)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib") %>%
arrange(desc(author$family), issued)
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/rmarkdown/templates/markdowncv/skeleton/skeleton.Rmd
---
name: Marie
surname: Curie
position: "Professor"
address: "School of Physics & Chemistry, École Normale Supérieure"
phone: +1 22 3333 4444
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
output: vitae::moderncv
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE)
library(vitae)
```
# Some stuff about me
* I poisoned myself doing research.
* I was the first woman to win a Nobel prize.
* I was the first person and only woman to win a Nobel prize in two different sciences.
# Education
```{r}
library(tibble)
tribble(
~ Degree, ~ Year, ~ Institution, ~ Where,
"Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
"Master of Physics", "1893", "Sorbonne Université", "Paris, France",
"Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
detailed_entries(Degree, Year, Institution, Where)
```
# Nobel Prizes
```{r}
tribble(
~Year, ~Type, ~Desc,
1903, "Physics", "Awarded for her work on radioactivity with Pierre Curie and Henri Becquerel",
1911, "Chemistry", "Awarded for the discovery of radium and polonium"
) %>%
brief_entries(
glue::glue("Nobel Prize in {Type}"),
Year,
Desc
)
```
# Publications
```{r}
library(dplyr)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib") %>%
arrange(desc(author$family), issued)
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/rmarkdown/templates/moderncv/skeleton/skeleton.Rmd
---
name: Marie
surname: Curie
position: "Professor"
address: "School of Physics & Chemistry, École Normale Supérieure"
phone: +1 22 3333 4444
pronouns: she/her
profilepic: mariecurie.jpg
www: mariecurie.com
email: "[email protected]"
twitter: mariecurie
github: mariecurie
linkedin: mariecurie
date: "`r format(Sys.time(), '%B %Y')`"
aboutme: "Marie is a Polish and naturalized-French physicist and chemist who conducts pioneering research on radioactivity."
output: vitae::twentyseconds
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE)
library(vitae)
```
# Some stuff about me
* I poisoned myself doing research.
* I was the first woman to win a Nobel prize.
* I was the first person and only woman to win a Nobel prize in two different sciences.
# Education
```{r}
library(tibble)
tribble(
~ Degree, ~ Year, ~ Institution, ~ Where,
"Informal studies", "1889-91", "Flying University", "Warsaw, Poland",
"Master of Physics", "1893", "Sorbonne Université", "Paris, France",
"Master of Mathematics", "1894", "Sorbonne Université", "Paris, France"
) %>%
detailed_entries(Degree, Year, Institution, Where)
```
# Nobel Prizes
```{r}
tribble(
~Year, ~Type, ~Desc,
1903, "Physics", "Awarded for her work on radioactivity with Pierre Curie and Henri Becquerel",
1911, "Chemistry", "Awarded for the discovery of radium and polonium"
) %>%
brief_entries(
glue::glue("Nobel Prize in {Type}"),
Year,
Desc
)
```
# Publications
```{r}
library(dplyr)
knitr::write_bib(c("vitae", "tibble"), "packages.bib")
bibliography_entries("packages.bib") %>%
arrange(desc(author$family), issued)
```
/scratch/gouwar.j/cran-all/cranData/vitae/inst/rmarkdown/templates/twentyseconds/skeleton/skeleton.Rmd
---
title: "Data sources for vitae"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Using vitae with other packages}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Using data to dynamically build your Résumé or CV makes many powerful integrations possible. By using data to populate entries in the document, it becomes easy to manipulate and select relevant experiences for a particular application. There are many sources of data which can be used to populate a CV with vitae; some common sources are summarised in this vignette.
The main purpose of sourcing your CV entries from common data sources is to extend the "don't repeat yourself" programming philosophy to maintaining a CV. If you maintain publications on [ORCID](https://orcid.org/), you shouldn't need to repeat those entries in your CV. If a list of talks you've given can be found on your website, avoid repeating the list in multiple locations, so that they always contain the same content.
This vignette is far from comprehensive, and there are no doubt many other interesting ways to populate your CV with data. If you're using a data source that you think others should know about, consider making a [pull request](https://github.com/mitchelloharawild/vitae/pulls) that adds your method to this vignette.
## Spreadsheets and data sets
The simplest source of entries for vitae is a maintained dataset of past experiences and achievements. Just like any dataset, these entries can be loaded into the document as a `data.frame` or `tibble` using functions from base R or the [`readr` package](https://readr.tidyverse.org/).
```r
readr::read_csv("employment.csv") %>%
detailed_entries(???)
```
It is also possible to load data from Excel using the [`readxl` package](https://readxl.tidyverse.org/) or from Google Sheets using the [`googlesheets` package](https://github.com/jennybc/googlesheets).
```r
readxl::read_excel("awards.xlsx") %>%
brief_entries(???)
```
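For instance, an employment history spreadsheet could be piped straight into `detailed_entries()`. This is a sketch: the file name and the `role`, `company`, `start`, `end`, and `city` columns are invented for illustration.

```r
# Assumed columns: role, company, start, end, city
readr::read_csv("employment.csv") %>%
  detailed_entries(
    what = role,
    when = glue::glue("{start}--{end}"),
    with = company,
    where = city
  )
```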
## Google scholar
Google Scholar does not require authentication to extract publications. Using the [`scholar` package](https://github.com/jkeirstead/scholar), it is easy to extract a user's publications from their Google Scholar ID. To obtain publications for an individual, first find their ID, which is visible in their profile URL. For example, Rob Hyndman's ID is `"vamErfkAAAAJ"` (https://scholar.google.com/citations?user=vamErfkAAAAJ&hl=en).
```r
scholar::get_publications("vamErfkAAAAJ") %>%
detailed_entries(
what = title,
when = year,
with = author,
where = journal,
why = cites
)
```
## Bibliography files
The vitae package directly supports loading `*.bib` files using the `bibliography_entries()` function, which formats the entries in a bibliography style.
```r
bibliography_entries("publications.bib")
```
It is also possible to display the contents of your bibliography using the template-specific entry formats.
```r
bibliography_entries("publications.bib") %>%
detailed_entries(???)
```
## R packages
A list of R packages that you have helped develop can be obtained using the [`pkgsearch` package](https://github.com/r-hub/pkgsearch).
```r
library(dplyr)
library(purrr)
pkgsearch::ps("O'Hara-Wild", size = 100) %>%
filter(map_lgl(package_data, ~ grepl("Mitchell O'Hara-Wild", .x$Author, fixed = TRUE))) %>%
as_tibble() %>%
brief_entries(
what = title,
when = lubridate::year(date),
with = description
)
```
/scratch/gouwar.j/cran-all/cranData/vitae/vignettes/data.Rmd
#' Function for data preparation
#'
#' Function to deal with NAs, right-censored data, and either cumulative
#' survival or incremental mortality data.
#'
#' @param time A vector of observation dates
#' @param sdata A vector of survival data of the same length as \code{time}
#' @param datatype either \code{"CUM"} for cumulative or \code{"INC"} for incremental
#' @param rc.data Boolean. Is data right-censored?
#' @param returnMatrix Boolean. \code{FALSE} (the default) returns a data frame;
#' \code{TRUE} returns a matrix, in which the \code{rc.data} column is coded
#' 0 for \code{FALSE}, 1 for \code{TRUE}, or 2 for \code{"TF"}.
#' @export
#' @return Returns a data frame (or matrix) with columns \code{time},
#' \code{sfract}, \code{x1}, \code{x2}, \code{Ni} (incremental mortality
#' fraction), and \code{rc.data}.
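#' @examples
#' # Illustrative only: five days of cumulative survival fractions
#' # (invented values) with no right censoring.
#' \dontrun{
#' dataPrep(time = 0:4, sdata = c(1, 0.9, 0.7, 0.4, 0),
#' datatype = "CUM", rc.data = FALSE)
#' }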
dataPrep <- function(time, sdata, datatype, rc.data, returnMatrix = FALSE) {
# Check for and remove NAs from the data
if (any(is.na(time))) {
naT <- is.na(time)
time <- time[!naT]
sdata <- sdata[!naT]
warning("NAs found in data and removed.")
}
if (any(is.na(sdata))) {
naT <- is.na(sdata)
time <- time[!naT]
sdata <- sdata[!naT]
warning("NAs found in data and removed.")
}
if (length(time) < 5) stop("Not enough data: at least 5 observations are required.")
if (any(sdata < 0 | sdata > 1)) {
stop("Survival fraction data outside the range [0, 1].")
}
# end data checking
maxx2 <-max(time) #for right-censored data and for plotting
# === check data type (CUMulative or INCremental. If CUM, create INC ===
if (datatype == "CUM") {
#====== survival assumed 1 at time 0 ===
if (time[1] > 0) {
time <- c(0, time)
sdata <- c(1, sdata)
}
else {
if (sdata[1] < 1) {
sdata <- sdata/sdata[1]
warning(message = "Initial survival < 1. Data scaled so that initial survival = 1.")
}
}
#------------------------
sfract <-sdata
len <- length(time)
# ...set up data for MLE fitting of incremental survivorship...
# --------- right censored data?
if (rc.data != TRUE) {
if (rc.data == FALSE) {
#check if final sdata indicates full mort
if (sfract[len] != 0) {
warning("WARNING: Survival data may be right censored...")
rc.data<-"TF"
}
else {
#standard setup
x1 <-c(time[1:(len-1)], 0)
x2 <-c(time[2:len], 0)
sfract1 <-c(sfract[1:(len-1)], 0)
sfract2 <-c(sfract[2:len], 0)
}
}
if (rc.data == "TF") {
#setup: add zero survival and short time step ("TF" option)
x1 <-time
x2 <-c(time[2:len], 2*time[len]-time[len-1])
sfract1 <-sfract
sfract2 <-c(sfract[2:len], 0)
}
}
else { #if rc.data == T
x1 <-time
x2 <-c(time[2:len], 10*maxx2) #testing...
sfract1 <-sfract
sfract2 <-c(sfract[2:len], 0)
}
# ----------------- end of dealing with right censored data options
Ni <- sfract1 - sfract2 # incremental mortality fraction (deaths per interval)
# ...end conversion of cumulative survivorship to incremental mortality
} else {
if (datatype == "INC") {
lent <- length(time)
#should be no t=0 data. Eliminate if necessary.
if(time[1] == 0) {
time <- time[2:lent]
sdata <- sdata[2:lent]
lent <- length(time)
}
#check for right censored data
if (rc.data != TRUE) {
if (rc.data == FALSE) {
if(sum(sdata) < 1) {
rc.data <- "TF"
warning("WARNING: Survival data may be right censored...")
} else {
#standard setup
Ni <-sdata
x1 <-c(0, time)[1:lent]
x2 <-time
}
}
if (rc.data == "TF") {
#setup: add zero survival and short time step ("TF" option)
Ni <-c(sdata, 1-sum(sdata))
x1 <-c(0, time)
x2 <-c(time, 2*time[lent]-time[lent-1]) #final time interval assumed same as prev.
}
} else { #if rc.data == T
Ni <-c(sdata, 1-sum(sdata))
x1 <-c(0, time)
x2 <-c(time, 10*maxx2) #final time interval large
}
# Build cumulative data
#sfract <-1.0 # initial survival fraction assumed 1
sfract <- NULL
for (i in 1:length(Ni)) {
sfract <- c(sfract, 1-sum(Ni[1:i]))
}
time <- c(0, time)
len <- length(time) #check
# Diagnostic check: these vector lengths should agree
print(c(length(sfract), length(time), length(Ni), length(x1), length(x2), lent))
}
else {
stop("ERROR: bad datatype specification")
}
}
if (returnMatrix) {
returnMat <- cbind(time, sfract, Ni, x1, x2, ifelse(rc.data[1] == "TF", 2,
ifelse(rc.data[1] == TRUE, 1, 0)))
dimnames(returnMat) <- list(NULL, c("time", "sfract", "Ni", "x1", "x2", "rc.data"))
return(returnMat)
}
return(data.frame(time, sfract, x1, x2, Ni, rc.data))
}
/scratch/gouwar.j/cran-all/cranData/vitality/R/dataPrep.R
#' @name swedish_females
#' @title Swedish Female Mortality Data
#' @description Period mortality data for Swedish females experiencing mortality
#' in the year 2000. Columns follow standard life-table naming conventions.
#' @docType data
#' @usage swedish_females
#' @format A \code{data.frame} object
#' @source Human Mortality Database
NULL
#' @name rainbow_trout_for_k
#' @title Sample Rainbow Trout Data
#' @description Sample survival data for rainbow trout. Columns include "days" and "survival" (cumulative survival proportion by day).
#' @docType data
#' @usage rainbow_trout_for_k
#' @format matrix
#' @source http://cbr.washington.edu/analysis/vitality
NULL
/scratch/gouwar.j/cran-all/cranData/vitality/R/data_documentation.R
####################################################################################################
## Density functions
#' Density function for 3-param r, s, u
#'
#' None
#' @param xx age
#' @param r r value
#' @param s s value
#' @param u u value
#' @return density
ft.4p <- function(xx, r, s, u) {
temp1 <- s^2 * xx + u^2
temp2 <- u^2 * r + s^2
if (xx==0) value = 0
else value <- temp2 / sqrt(2 * pi * temp1^3) * exp(-(1-r * xx)^2/(2 * temp1))
return(value)
}
#' Vectorized density function
#'
#' None
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param u u value
#' @return vector of densities
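#' @examples
#' # Illustrative only; the r, s, and u values below are invented.
#' \dontrun{
#' vft.4p(xx = c(0, 10, 50, 100), r = 0.012, s = 0.01, u = 0.1)
#' }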
vft.4p <- Vectorize(FUN = ft.4p, vectorize.args = "xx")
####################################################################################################
## Density functions for 6 par
#' Density function for intrinsic
#'
#' None
#' @param xx age
#' @param r r value
#' @param s s value
#' @return density
ft.6p <- function(xx, r, s) {
temp1 <- s^2 * xx
temp2 <- s^2 # unused; retained for symmetry with the 4-parameter version
if (xx==0) value = 0
else value <- ((xx^-(3/2)) / (s*sqrt(2 * pi))) * exp(-(1-r * xx)^2/(2 * temp1))
return(value)
}
#' Vectorized density function
#'
#' None
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @return vector of densities
vft.6p <- Vectorize(FUN = ft.6p, vectorize.args = "xx")
/scratch/gouwar.j/cran-all/cranData/vitality/R/density.R
####################################################################################################
## Mortality Rate Functions
#' Total mortality rate
#'
#' None
#'
#' @param t age
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @return Total mortality rate (?)
mu.vd.4p <- function(t, r, s, lambda, beta){
mu.vd1.4p(t, r, s) + mu.vd2.4p(t, r, lambda, beta)
}
#' Intrinsic mortality rate
#'
#' None
#'
#' @param x age
#' @param r r value
#' @param s s value
#' @return Intrinsic mortality rate (?)
mu.vd1.4p <- function(x, r, s) {
vft.4p(x, r, s, 0) / SurvFn.h.4p(x, r, s, 0)
}
#' Extrinsic mortality rate
#'
#' None
#'
#' @param x age
#' @param r r value
#' @param lambda lambda value
#' @param beta beta value
#' @return Extrinsic mortality rate (?)
mu.vd2.4p <- function(x, r, lambda, beta){
lambda * exp(-(1 - r * x) / beta) #+ gamma * exp(-1 / alpha * x)
}
## Mortality Rate Functions for 6 parameter model
# intrinsic mortality is similar to the previous four-parameter model, but a new function is included here for continuity in the naming structure
#' Total mortality rate
#'
#' None
#'
#' @param t age
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @param gamma gamma value
#' @param alpha alpha value
#' @return Total mortality rate (?)
mu.vd.6p <- function(t, r, s, lambda, beta, gamma, alpha){
mu.vd1.6p(t, r, s) + mu.vd2.6p(t, r, lambda, beta, gamma, alpha)
}
#' Intrinsic mortality rate
#'
#' None
#'
#' @param x age
#' @param r r value
#' @param s s value
#' @return Intrinsic mortality rate (?)
mu.vd1.6p <- function(x, r, s) {
vft.6p(x, r, s) / SurvFn.h.6p(x, r, s)
}
#' Extrinsic mortality rate
#'
#' None
#'
#' @param x age
#' @param r r value
#' @param lambda lambda value
#' @param beta beta value
#' @param gamma gamma value
#' @param alpha alpha value # do we need 1/alpha for this as in the survival function?
#' @return Extrinsic mortality rate (?)
mu.vd2.6p <- function(x, r, lambda, beta, gamma, alpha){
lambda * exp(-(1 - r * x) / beta) + gamma*exp(-alpha*x)
}
#' Extrinsic mortality rate -- adult
#'
#' None
#'
#' @param x age
#' @param r r value
#' @param lambda lambda value
#' @param beta beta value
#' @return Extrinsic mortality rate (?)
mu.vd3.6p <- function(x, r, lambda, beta){
lambda * exp(-(1 - r * x) / beta)
}
#' Extrinsic mortality rate -- child
#'
#' None
#'
#' @param x age
#' @param gamma gamma value
#' @param alpha alpha value
#' @return Extrinsic mortality rate (?)
mu.vd4.6p <- function(x, gamma, alpha){
gamma*exp(-alpha*x)
}
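# Illustrative check (invented parameter values, not from the package):
# the total 6-parameter rate is the sum of the intrinsic and extrinsic parts,
# i.e. mu.vd.6p(t, ...) == mu.vd1.6p(t, r, s) + mu.vd2.6p(t, r, lambda, beta, gamma, alpha).
# mu.vd.6p(t = 50, r = 0.012, s = 0.01, lambda = 0.1, beta = 0.1, gamma = 0.05, alpha = 0.2)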
/scratch/gouwar.j/cran-all/cranData/vitality/R/mortality_rate.R
#' Fitting routines for the Vitality family of mortality models.
#'
#' This package provides support for fitting models in the vitality family of mortality models.
#' Currently, the 4-parameter and 6-parameter 2-process models are included, but planned updates will
#' include all published versions of the models, as well as Bayesian parameter estimation routines.
#'
#' Support developing this package was provided to J. Anderson by Bonneville Power Administration
#' and the University of Washington Center for Statistics and the Social Sciences
#' and to G. Passolt by the University of Washington Center for Studies in Demography and Ecology.
#'
#' @examples \dontrun{
#' data(swedish_females)
#' head(swedish_females)
#' initial_age <- 0 # (Could be adjusted up)
#' time <- initial_age:max(swedish_females$age)
#' survival_fraction <- swedish_females$lx / swedish_females$lx[swedish_females$age == initial_age]
#' sample_size <- swedish_females$Lx[swedish_females$age == initial_age] # sample size
#' results <- vitality.4p(time = time,
#' sdata = survival_fraction,
#' init.params = c(0.012, 0.01, 0.1, 0.1),
#' se = sample_size,
#' Mplot = FALSE)
#' }
#'
#' @references
#' \itemize{
#' \item Li, T. and J.J. Anderson (in press).
#' "Shaping human mortality patterns through intrinsic and extrinsiv vitality processes."
#' Demographic Research.
#' \item Salinger, D.H., J.J. Anderson, and O.S. Hamel. 2003.
#' "A parameter estimation routine for the vitality-based survival model."
#' Ecological Modelling 166 (3): 287-294.
#' \item Li, T. and J.J. Anderson. 2009.
#' "The vitality model: A way to understand population survival and demographic heterogeneity."
#' Theoretical Population Biology 76: 118-131.
#' \item Anderson, J.J., Molly C. Gildea, Drew W. Williams, and Ting Li. 2008.
#' "Linking Growth, Survival, and Heterogeneity through Vitality.
#' The American Naturalist 171 (1): E20-E43.
#' }
#'
#' @import IMIS
#' @docType package
#' @name vitality
NULL
/scratch/gouwar.j/cran-all/cranData/vitality/R/package_documentation.R
## Four parameter simple model r s lambda beta model, no childhood hook
#' Fitting routine for the 2-process, 4-parameter vitality model (no childhood hook).
#'
#' Based on code by D.H. Salinger, J.J. Anderson and O. Hamel (2003).
#' "A parameter fitting routine for the vitality based survival model."
#' Ecological Modeling 166(3): 287--294.
#'
#' @param time Vector. Time component of data. Defaults to \code{0:(length(sdata)-1)}.
#' @param sdata Required. Survival or mortality data. The default expects cumulative
#' survival fraction. If providing incremental mortality fraction
#' instead, use option: datatype = "INC".
#' The default also expects the data to represent full mortality.
#' Otherwise, use option: rc.data = T to indicate right censored data.
#' @param rc.data Optional, boolean. Specifies Right Censored data. If the data does not
#' represent full mortality, it is probably right censored. The default
#' is rc.data = F. A third option is rc.data = "TF". Use this case to add
#' a near-term zero survival point to data which displays nearly full
#' mortality ( <.01 survival at end). If rc.data = F but the data does
#' not show full mortality, rc.data = "TF" will be
#' invoked automatically.
#' @param se Optional, boolean. Calculates the standard errors for the MLE parameters.
#' Default is FALSE. Set equal to the initial study population to
#' compute standard errors.
#' @param datatype Optional. Defaults to \code{"CUM"} for cumulative survival fraction data.
#' Use \code{"INC"} - for incremental mortality fraction data.
#' @param ttol Optional. Stopping criteria tolerance. Default is 1e-6.
#' Specify as ttol = .0001. If one of the likelihood plots (esp. for "k") does not look optimal,
#' try decreasing ttol. If the program crashes, try increasing ttol.
#' @param init.params Optional. Specify the initial parameter values as
#' \code{init.params = c(r, s, lambda, beta)} in that order
#' (e.g. \code{init.params = c(.1, .02, .3, 0.12)}).
#' @param lower Optional. Lower bounds for the parameters, passed to \code{nlminb}.
#' Defaults to \code{c(0, 0, 0, 0)}.
#' @param upper Optional. Upper bounds for the parameters, passed to \code{nlminb}.
#' Defaults to \code{c(100, 50, 100, 50)}.
#' @param pplot Optional, boolean. Plots of cumulative survival for both data and fitted curves?
#' Default \code{TRUE}. \code{FALSE} produces no plots.
#' A third option: \code{pplot = n} (n >= 1) extends the time axis of
#' the fitting plots (beyond the max time in data). For example:
#' \code{pplot = 1.2} extends the time axis by 20%. Note: the incremental
#' mortality plot is a continuous representation of the appropriately
#' binned histogram of incremental mortalities.
#' @param Iplot Optional, boolean. Incremental mortality for both data and fitted curves?
#' Default: \code{FALSE}.
#' @param Mplot Optional, boolean. Plot fitted mortality curve? Default is \code{FALSE}.
#' @param tlab Optional, character. specifies units for x-axis of plots. Default is "years".
#' @param silent Optional, boolean. Stops all print and plot options (you still get most warnings
#'           and all error messages). Default is \code{FALSE}. A third option, \code{"verbose"}, also
#'           enables the trace setting in the ms (minimum sum) S-Plus routine.
#' @export
#' @return vector of final MLE r, s, lambda, beta parameter estimates.
#' standard errors of MLE parameter estimates (if se = <population> is specified).
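#' @examples \dontrun{
#' # A minimal usage sketch (not the package's official example): it reuses the
#' # swedish_females life table and the illustrative starting values shown in
#' # the package documentation above; the column names age, lx and Lx come from
#' # that example.
#' data(swedish_females)
#' swe <- swedish_females
#' time <- 0:max(swe$age)
#' survival_fraction <- swe$lx / swe$lx[swe$age == 0]
#' vitality.4p(time = time,
#'             sdata = survival_fraction,
#'             init.params = c(0.012, 0.01, 0.1, 0.1),
#'             se = swe$Lx[swe$age == 0],
#'             Mplot = FALSE)
#' }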
vitality.4p <- function(time = 0:(length(sdata)-1),
sdata,
init.params = FALSE,
lower = c(0, 0, 0, 0),
upper = c(100,50,100,50),
rc.data = FALSE,
se = FALSE,
datatype = c("CUM", "INC"),
ttol = 1e-6,
pplot = TRUE,
Iplot = FALSE,
Mplot = FALSE,
tlab = "years",
silent = FALSE) {
# --Check/prepare Data---
datatype <- match.arg(datatype)
if (length(time) != length(sdata)) {
stop("time and sdata must have the same length")
}
in.time <- time
dTmp <- dataPrep(time, sdata, datatype, rc.data)
time <- dTmp$time
sfract <- dTmp$sfract
x1 <- dTmp$x1
x2 <- dTmp$x2
Ni <- dTmp$Ni
rc.data <- dTmp$rc.data
if(in.time[1]>0){
time <- time[-1]
sfract <- sfract[-1]
x1 <- c(x1[-c(1,length(x1))], x1[1])
x2 <- c(x2[-c(1,length(x2))], 0)
Ni <- Ni[-1]
rc.data <- rc.data[-1]
}
# --Produce initial parameter values---
if(length(init.params) == 1) {
ii <- indexFinder(sfract, 0.5)
if (ii == -1) {
warning("ERROR: no survival fraction data below the .5 level.\n
Cannot use the initial r s k u estimator. You must supply initial r s k u estimates")
return(-1)
}
else rsk <- c(1/time[ii], 0.01, 0.1, 0.1)
} else { # use user specified init params
rsk <- init.params
}
  if (rsk[1] == -1) {
    stop("invalid initial parameter values")
  }
if (silent == FALSE) {
print(cbind(c("Initial r", "initial s", "initial lambda", "initial beta"), rsk))
}
# --create dataframe for sa---
dtfm <- data.frame(x1 = x1, x2 = x2, Ni = Ni)
# --run MLE fitting routine---
  # --conduct Newton-Raphson algorithm directly --
fit.nlm <- nlminb(start = rsk, objective = logLikelihood.4p, lower = lower,
upper = upper, xx1 = x1, xx2 = x2, NNi = Ni)
# --save final param estimates---
r.final <- fit.nlm$par[1]
s.final <- abs(fit.nlm$par[2])
lambda.final <- fit.nlm$par[3]
beta.final <- fit.nlm$par[4]
  mlv <- fit.nlm$objective
if (silent == FALSE) {print(cbind(c("estimated r", "estimated s", "estimated lambda",
"estimated beta", "minimum -loglikelihood value"),
c(r.final, s.final, lambda.final, beta.final, mlv)))}
# == end MLE fitting == =
# --compute standard errors---
if (se != FALSE) {
s.e. <- stdErr.4p(r.final, s.final, lambda.final, beta.final, x1, x2, Ni, se)
if (silent == FALSE){print(cbind(c("sd for r", "sd for s", "sd for lambda", "sd for beta"), s.e.))}
}
# --plotting and goodness of fit---
if (pplot != FALSE) {
plotting.4p(r.final, s.final, lambda.final, beta.final, mlv, time, sfract, x1, x2, Ni, pplot, Iplot, Mplot, tlab, rc.data)
}
# ............................................................................................
# --return final param values---
  digits <- 5  # significant digits of output
  if (se != FALSE) {
    params <- c(r.final, s.final, lambda.final, beta.final)
    pvalue <- c(1-pnorm(r.final/s.e.[1]), 1-pnorm(s.final/s.e.[2]),
                1-pnorm(lambda.final/s.e.[3]), 1-pnorm(beta.final/s.e.[4]))
    std <- c(s.e.[1], s.e.[2], s.e.[3], s.e.[4])
    out <- signif(cbind(params, std, pvalue), digits)
    return(out)
  } else {
    return(signif(c(r.final, s.final, lambda.final, beta.final), digits))
  }
}
#' Plotting function for 2-process vitality model. 4-param
#'
#' None.
#'
#' @param r.final r estimate
#' @param s.final s estimate
#' @param lambda.final lambda estimate
#' @param beta.final beta estimate
#' @param mlv TODO mlv
#' @param time time vector
#' @param sfract survival fraction
#' @param x1 Time 1
#' @param x2 Time 2
#' @param Ni Initial population
#' @param pplot Boolean. Plot cumulative survival fraction?
#' @param Iplot Boolean. Plot incremental mortality?
#' @param Mplot Boolean. Plot mortality rate?
#' @param tlab Character, label for time axis
#' @param rc.data Boolean, right-censored data?
plotting.4p <- function(r.final,
s.final,
lambda.final,
beta.final,
mlv,
time,
sfract,
x1,
x2,
Ni,
pplot,
Iplot,
Mplot,
tlab,
rc.data) {
# --plot cumulative survival---
if (pplot != FALSE) {
#win.graph()
ext <- max(pplot, 1)
par(mfrow = c(1, 1))
len <- length(time)
tmax <- ext * time[len]
plot(time, sfract/sfract[1], xlab = tlab, ylab = "survival fraction",
ylim = c(0, 1), xlim = c(min(time), tmax), col = 1)
xxx <- seq(min(time), tmax, length = 200)
xxx1 <- c(0, xxx[-1])
lines(xxx, SurvFn.4p(xxx1, r.final, s.final, lambda.final, beta.final), col = 2, lwd=2)
lines(xxx, SurvFn.in.4p(xx=xxx1, r=r.final, s=s.final), col=3, lwd=2, lty=3)
lines(xxx, SurvFn.ex.4p(xx=xxx1, r=r.final, s=s.final, lambda = lambda.final, beta = beta.final), col=4, lwd=2, lty=2)
title("Cumulative Survival Data and Vitality Model Fitting")
legend(x="bottomleft", legend=c("Total", "Intrinsic", "Extrinsic"), lty=c(1, 3, 2), col=c(2,3,4), bty="n", lwd=c(2,2,2))
}
if ( Mplot != FALSE) {
lx <- round(sfract*100000)
lx <- c(lx,0)
ndx <- -diff(lx)
lxpn <- lx[-1]
n <- c(diff(time), 1000)
nax <- .5*n
nLx <- n * lxpn + ndx * nax
mu.x <- ndx/nLx
mu.x[length(mu.x)] <- NA
# qx <- Ni/sfract
# mu.x <- 2 * qx/(2 - qx)
#win.graph()
ext <- max(pplot, 1)
par(mfrow = c(1, 1))
len <- length(time)
tmax <- ext * time[len]
xxx <- seq(min(time), tmax, length = 200)
xxx1 <- xxx
mu.i <- mu.vd1.4p(xxx1, r.final, s.final)
mu.e <- mu.vd2.4p(xxx1, r.final, lambda.final, beta.final)
mu.t <- mu.vd.4p(xxx1, r.final, s.final, lambda.final, beta.final)
plot(time, mu.x, xlim = c(time[1], tmax), xlab = tlab, ylab = "estimated mortality rate", log = "y",
main = "Log Mortality Data and Vitality Model Fitting", ylim=c(min(mu.x,mu.t,na.rm=T),max(mu.x,mu.t,na.rm=T)))
#plot(xxx, mu.t, xlim = c(0, tmax), xlab = tlab, ylab = "estimated mortality rate", log = "y",
# type = "l")
lines(xxx, mu.t, col = 2, lwd=2)
lines(xxx, mu.i, col=3, lwd=2, lty=3)
lines(xxx, mu.e, col=4, lwd=2, lty=2)
legend(x="bottomright", legend=c("data (approximate)", expression(mu[total]), expression(mu[i]), expression(mu[e])), lty=c(NA, 1, 3, 2), pch=c(1,NA,NA,NA), col=c(1,2,3,4), bty="n", lwd=c(1,2,2,2))
}
# --Incremental mortality plot
if (Iplot != FALSE) {
#win.graph()
par(mfrow = c(1, 1))
ln <- length(Ni)-1
x1 <- x1[1:ln]
x2 <- x2[1:ln]
Ni <- Ni[1:ln]
ln <- length(Ni)
#scale <- (x2-x1)[Ni == max(Ni)]
scale <- max( (x2-x1)[Ni == max(Ni)] )
ext <- max(pplot, 1)
npt <- 200*ext
xxx <- seq(x1[1], x2[ln]*ext, length = npt)
xx1 <- xxx[1:(npt-1)]
xx2 <- xxx[2:npt]
sProbI <- survProbInc.4p(r.final, s.final, lambda.final, beta.final, xx1, xx2)
ytop <- 1.1 * max(max(sProbI/(xx2-xx1)), Ni/(x2-x1)) * scale
plot((x1+x2)/2, Ni*scale/(x2-x1), ylim = c(0, ytop), xlim = c(x1[1], ext*x2[ln]),
xlab = tlab, ylab = "incremental mortality")
title("Probability Density Function")
lines((xx1+xx2)/2, sProbI*scale/(xx2-xx1), col=2)
}
#return()
}
#' The intrinsic cumulative survival distribution function for 2-process 4-parameter
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @return vector of survival fraction values
SurvFn.in.4p <- function(xx, r, s) {
yy <- s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx = 0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 > 250) {
q <- tmp3/250
if (tmp3 > 1500) {
q <- tmp3/500
}
valueFF <- (1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))#*exp(-lambda*exp(-1/beta)/(1/beta*r)*(exp(1/beta*r*xx)-1))
#valueFF <- (1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))*exp(-a/b*(exp(b*xx)-1))
}
else {
valueFF <- (1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))#*exp(-lambda*exp(-1/beta)/(1/beta*r)*(exp(1/beta*r*xx)-1)) #1-G
#valueFF <- (1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))*exp(-a/b*(exp(b*xx)-1)) #1-G
}
if ( all(is.infinite(valueFF)) ) {
warning(message = "Inelegant exit caused by overflow in evaluation of survival function.
Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
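## A small sketch (hypothetical r and s values): the intrinsic survival curve
## can be inspected on its own, e.g.
# ages <- 0:100
# plot(ages, SurvFn.in.4p(ages, r = 0.012, s = 0.01), type = "l",
#      xlab = "age", ylab = "intrinsic survival fraction")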
#' The extrinsic cumulative survival distribution function for 2-process 4-parameter
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @return vector of survival fraction values
SurvFn.ex.4p <- function(xx, r, s, lambda, beta) {
yy <- s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx = 0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 > 250) {
q <- tmp3/250
if (tmp3 > 1500) {
q <- tmp3/500
}
#valueFF <- exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1) +gamma/alpha*(exp(-alpha*xx)-1))
valueFF <- exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1))
} else {
#valueFF <-exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1)+ gamma/alpha*(exp(-alpha*xx)-1))
valueFF <-exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1))
}
if ( all(is.infinite(valueFF)) ) {
warning(message = "Inelegant exit caused by overflow in evaluation of survival function.
Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
#' Calculates incremental survival probability for 2-process 4-parameter r, s, lambda, beta
#'
#' None
#'
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @param xx1 xx1 vector
#' @param xx2 xx2 vector
#' @return Incremental survival probabilities.
survProbInc.4p <- function(r, s, lambda, beta, xx1, xx2){
value.iSP <- -(SurvFn.4p(xx2, r, s, lambda, beta) - SurvFn.4p(xx1, r, s, lambda, beta))
value.iSP[value.iSP < 1e-18] <- 1e-18 # safeguards against taking Log(0)
value.iSP
}
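## A small sketch (hypothetical parameter values): incremental mortality
## probabilities over unit age intervals, as used by the likelihood and the
## Iplot panel.
# xx1 <- 0:99; xx2 <- 1:100
# iSP <- survProbInc.4p(0.012, 0.01, 0.1, 0.1, xx1, xx2)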
#' Gives log likelihood of 2-process 4-parameter model
#'
#' None
#'
#' @param par vector of parameter(r, s, lambda, beta)
#' @param xx1 xx1 vector
#' @param xx2 xx2 vector
#' @param NNi survival fractions
#' @return log likelihood
logLikelihood.4p <- function(par, xx1, xx2, NNi) {
  # --calculate incremental survival probability--- (safeguarded > 1e-18 to prevent log(0))
iSP <- survProbInc.4p(par[1], par[2], par[3], par[4], xx1, xx2)
loglklhd <- -NNi*log(iSP)
return(sum(loglklhd))
}
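## A small sketch (hypothetical values): this is the objective that nlminb()
## minimizes inside vitality.4p.
# logLikelihood.4p(par = c(0.012, 0.01, 0.1, 0.1),
#                  xx1 = 0:9, xx2 = 1:10, NNi = rep(0.1, 10))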
#' Standard errors for 4-param r, s, lambda, beta
#'
#' Note: if k <= 0, cannot find std err for k.
#'
#' @param r r value
#' @param s s value
#' @param k k value (the lambda estimate)
#' @param u u value (the beta estimate)
#' @param x1 age 1 (corresponding to 1:(t-1) and 2:t)
#' @param x2 age 2
#' @param Ni survival fraction
#' @param pop initial population (total population of the study)
#' @return standard errors for r, s, k (lambda), u (beta).
stdErr.4p <- function(r, s, k, u, x1, x2, Ni, pop) {
LL <- function(a, b, c, d, r, s, k, u, x1, x2, Ni) {logLikelihood.4p(c(r+a, s+b, k+c, u+d), x1, x2, Ni)}
#initialize hessian for storage
hess <- matrix(0, nrow = 4, ncol = 4)
#set finite difference intervals
h <- .001
hr <- abs(h*r)
hs <- h*s*.1
hk <- h*k*.1
hu <- h*u*.1
  # Compute second derivatives (using 5 point)
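  # Each fp_i below is a five-point O(h^4) estimate of the first derivative of
  # LL at one stencil offset (-2h, -h, +h, +2h); the final
  # (fp0 - 8*fp1 + 8*fp3 - fp4)/(12*h) step applies the five-point central
  # first-derivative formula to those estimates, yielding the pure second
  # partial derivative of LL in the given parameter.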
# LLrr
f0 <- LL(-2*hr, 0, 0, 0, r, s, k, u, x1, x2, Ni)
f1 <- LL(-hr, 0, 0, 0, r, s, k, u, x1, x2, Ni)
f2 <- LL(0, 0, 0, 0, r, s, k, u, x1, x2, Ni)
f3 <- LL(hr, 0, 0, 0, r, s, k, u, x1, x2, Ni)
f4 <- LL(2*hr, 0, 0, 0, r, s, k, u, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hr)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hr)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hr)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hr)
LLrr <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hr)
# LLss
f0 <- LL(0, -2*hs, 0, 0, r, s, k, u, x1, x2, Ni)
f1 <- LL(0, -hs, 0, 0, r, s, k, u, x1, x2, Ni)
# f2 as above
f3 <- LL(0, hs, 0, 0, r, s, k, u, x1, x2, Ni)
f4 <- LL(0, 2*hs, 0, 0, r, s, k, u, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hs)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hs)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hs)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hs)
LLss <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hs)
# LLkk
f0 <- LL(0, 0, -2*hk, 0, r, s, k, u, x1, x2, Ni)
f1 <- LL(0, 0, -hk, 0, r, s, k, u, x1, x2, Ni)
# f2 as above
f3 <- LL(0, 0, hk, 0, r, s, k, u, x1, x2, Ni)
f4 <- LL(0, 0, 2*hk, 0, r, s, k, u, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hk)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hk)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hk)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hk)
LLkk <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hk)
# LLuu
f0 <- LL(0, 0, 0, -2*hu, r, s, k, u, x1, x2, Ni)
f1 <- LL(0, 0, 0, -hu, r, s, k, u, x1, x2, Ni)
# f2 as above
f3 <- LL(0, 0, 0, hu, r, s, k, u, x1, x2, Ni)
f4 <- LL(0, 0, 0, 2*hu, r, s, k, u, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hu)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hu)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hu)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hu)
LLuu <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hu)
#-------end second derivs---
# do mixed partials (4 points)
# LLrs
m1 <- LL(hr, hs, 0, 0, r, s, k, u, x1, x2, Ni)
m2 <- LL(-hr, hs, 0, 0, r, s, k, u, x1, x2, Ni)
m3 <- LL(-hr, -hs, 0, 0, r, s, k, u, x1, x2, Ni)
m4 <- LL(hr, -hs, 0, 0, r, s, k, u, x1, x2, Ni)
LLrs <- (m1 -m2 +m3 -m4)/(4*hr*hs)
# LLru
m1 <- LL(hr, 0, 0, hu, r, s, k, u, x1, x2, Ni)
m2 <- LL(-hr, 0, 0, hu, r, s, k, u, x1, x2, Ni)
m3 <- LL(-hr, 0, 0, -hu, r, s, k, u, x1, x2, Ni)
m4 <- LL(hr, 0, 0, -hu, r, s, k, u, x1, x2, Ni)
LLru <- (m1 -m2 +m3 -m4)/(4*hr*hu)
# LLsu
m1 <- LL(0, hs, 0, hu, r, s, k, u, x1, x2, Ni)
m2 <- LL(0, -hs, 0, hu, r, s, k, u, x1, x2, Ni)
m3 <- LL(0, -hs, 0, -hu, r, s, k, u, x1, x2, Ni)
m4 <- LL(0, hs, 0, -hu, r, s, k, u, x1, x2, Ni)
LLsu <- (m1 -m2 +m3 -m4)/(4*hu*hs)
# LLrk
m1 <- LL(hr, 0, hk, 0, r, s, k, u, x1, x2, Ni)
m2 <- LL(-hr, 0, hk, 0, r, s, k, u, x1, x2, Ni)
m3 <- LL(-hr, 0, -hk, 0, r, s, k, u, x1, x2, Ni)
m4 <- LL(hr, 0, -hk, 0, r, s, k, u, x1, x2, Ni)
LLrk <- (m1 -m2 +m3 -m4)/(4*hr*hk)
# LLsk
m1 <- LL(0, hs, hk, 0, r, s, k, u, x1, x2, Ni)
m2 <- LL(0, -hs, hk, 0, r, s, k, u, x1, x2, Ni)
m3 <- LL(0, -hs, -hk, 0, r, s, k, u, x1, x2, Ni)
m4 <- LL(0, hs, -hk, 0, r, s, k, u, x1, x2, Ni)
LLsk <- (m1 -m2 +m3 -m4)/(4*hs*hk)
# LLku
m1 <- LL(0, 0, hk, hu, r, s, k, u, x1, x2, Ni)
m2 <- LL(0, 0, hk, -hu, r, s, k, u, x1, x2, Ni)
m3 <- LL(0, 0, -hk, -hu, r, s, k, u, x1, x2, Ni)
m4 <- LL(0, 0, -hk, hu, r, s, k, u, x1, x2, Ni)
LLku <- (m1 -m2 +m3 -m4)/(4*hu*hk)
diag(hess) <- c(LLrr, LLss, LLkk, LLuu)*pop
hess[2, 1] = hess[1, 2] <- LLrs*pop
hess[3, 1] = hess[1, 3] <- LLrk*pop
hess[3, 2] = hess[2, 3] <- LLsk*pop
hess[4, 1] = hess[1, 4] <- LLru*pop
hess[4, 2] = hess[2, 4] <- LLsu*pop
hess[4, 3] = hess[3, 4] <- LLku*pop
#print(hess)
hessInv <- solve(hess)
#print(hessInv)
#compute correlation matrix:
sz <- 4
corr <- matrix(0, nrow = sz, ncol = sz)
for (i in 1:sz) {
for (j in 1:sz) {
corr[i, j] <- hessInv[i, j]/sqrt(abs(hessInv[i, i]*hessInv[j, j]))
}
}
#print(corr)
if ( abs(corr[2, 1]) > .98 ) {
warning("WARNING: parameters r and s appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[3, 2]) > .98 ) {
warning("WARNING: parameters s and lambda appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[3, 1]) > .98 ) {
warning("WARNING: parameters r and lambda appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[4, 2]) > .98 ) {
warning("WARNING: parameters s and beta appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[4, 1]) > .98 ) {
warning("WARNING: parameters r and beta appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
se <- sqrt(diag(hessInv))
# Approximate s.e. for cases where calculation of s.e. failed:
if( sum( is.na(se) ) > 0 ) {
seNA <- is.na(se)
se12 <- sqrt(diag(solve(hess[c(1, 2) , c(1, 2) ])))
se13 <- sqrt(diag(solve(hess[c(1, 3) , c(1, 3) ])))
se23 <- sqrt(diag(solve(hess[c(2, 3) , c(2, 3) ])))
se14 <- sqrt(diag(solve(hess[c(1, 4) , c(1, 4) ])))
se24 <- sqrt(diag(solve(hess[c(2, 4) , c(2, 4) ])))
se34 <- sqrt(diag(solve(hess[c(3, 4) , c(3, 4) ])))
if(seNA[1]) {
if(!is.na(se12[1]) ){
se[1] = se12[1]
warning("* s.e. for parameter r is approximate.")
}
else if(!is.na(se13[1])){
se[1] = se13[1]
warning("* s.e. for parameter r is approximate.")
}
else if(!is.na(se14[1])){
se[1] = se14[1]
warning("* s.e. for parameter r is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter r.")
}
if(seNA[2]) {
if(!is.na(se12[2]) ){
se[2] = se12[2]
warning("* s.e. for parameter s is approximate.")
}
else if(!is.na(se23[1])){
se[2] = se23[1]
warning("* s.e. for parameter s is approximate.")
}
else if(!is.na(se24[1])){
se[2] = se24[1]
warning("* s.e. for parameter s is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter s.")
}
if(seNA[3]) {
if(!is.na(se13[2]) ){
se[3] = se13[2]
warning("* s.e. for parameter lambda is approximate.")
}
else if(!is.na(se23[2])){
se[3] = se23[2]
warning("* s.e. for parameter lambda is approximate.")
}
else if(!is.na(se34[1])){
se[3] = se34[1]
warning("* s.e. for parameter lambda is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter lambda.")
}
if(seNA[4]) {
if(!is.na(se14[2]) ){
se[4] = se14[2]
warning("* s.e. for parameter beta is approximate.")
}
else if(!is.na(se24[2])){
se[4] = se24[2]
warning("* s.e. for parameter beta is approximate.")
}
    else if(!is.na(se34[2])){
      se[4] = se34[2]
warning("* s.e. for parameter beta is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter beta.")
}
}
#######################
return(se)
}
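#' The total cumulative survival distribution function for 2-process 4-parameter
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @return vector of survival fraction values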
SurvFn.4p <- function(xx,r,s,lambda,beta){
yy <- s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx = 0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 > 250) {
q <- tmp3/250
if (tmp3 > 1500) {
q <- tmp3/500
}
valueFF <-(1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))*exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1)) # This requires 1/alpha
} else {
valueFF <-(1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))*exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1))
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function. Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
#' Intrinsic cumulative survival distribution
#'
#' None
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param u u value
#' @return Cumulative survival distribution
SurvFn.h.4p <- function(xx, r, s, u){
yy <- u^2+s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx = 0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r+2*u^2*r/s^2)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)+2*u^2*r^2/s^4
if (tmp3 > 250) {
q <- tmp3/250
if (tmp3 > 1500) {
q <- tmp3/500
}
valueFF <- (1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))
}
else {
valueFF <- (1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2))) #1-G
}
if ( all(is.infinite(valueFF)) ) {
warning(message = "Inelegant exit caused by overflow in evaluation of survival function.
Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
## ---- end of file: vitality/R/vitality.4p.R ----
## Six parameter simple model r s lambda beta gamma and alpha
#' Fitting routine for the 2-process, 6-parameter vitality model (all ages).
#'
#' Based on code by D.H. Salinger, J.J. Anderson and O. Hamel (2003).
#' "A parameter estimation routine for the vitality-based survival model."
#' Ecological Modelling 166(3): 287--294.
#'
#' @param time Vector. Time component of data: Defaults to \code{0:(length(sdata)-1)}.
#' @param sdata Required. Survival or mortality data. The default expects cumulative
#' survival fraction. If providing incremental mortality fraction
#' instead, use option: datatype = "INC".
#' The default also expects the data to represent full mortality.
#' Otherwise, use option: rc.data = T to indicate right censored data.
#' @param rc.data Optional, boolean. Specifies Right Censored data. If the data does not
#' represent full mortality, it is probably right censored. The default
#' is rc.data = F. A third option is rc.data = "TF". Use this case to add
#' a near-term zero survival point to data which displays nearly full
#' mortality ( <.01 survival at end). If rc.data = F but the data does
#' not show full mortality, rc.data = "TF" will be
#' invoked automatically.
#' @param se Optional. Calculates the standard errors for the MLE parameters.
#'           Default is FALSE. Set equal to the initial study population size to
#'           compute standard errors.
#' @param datatype Optional. Defaults to \code{"CUM"} for cumulative survival fraction data.
#' Use \code{"INC"} - for incremental mortality fraction data.
#' @param ttol Optional. Stopping criteria tolerance. Default is 1e-6.
#'           Specify as ttol = .0001. If one of the likelihood plots (esp. for "k") does not look optimal,
#'           try decreasing ttol. If the program crashes, try increasing ttol.
#' @param init.params Optional. Initial parameter values, specified as
#'           \code{init.params = c(r, s, lambda, beta, gamma, alpha)} in that order
#'           (e.g. init.params = c(.1, .02, .3, 0.12, .1, 1)).
#' @param lower Optional. Vector of lower parameter bounds passed to \code{nlminb}.
#' @param upper Optional. Vector of upper parameter bounds passed to \code{nlminb}.
#' @param pplot Optional, boolean. Plots of cumulative survival for both data and fitted curves?
#'           Default \code{TRUE}; \code{FALSE} produces no plots.
#'           A third option: \code{pplot = n} (n >= 1) extends the time axis of
#'           the fitting plots (beyond the max time in data). For example:
#'           \code{pplot = 1.2} extends the time axis by 20\%. Note: the incremental
#'           mortality plot is a continuous representation of the appropriately-
#'           binned histogram of incremental mortalities.
#' @param Iplot Optional, boolean. Incremental mortality for both data and fitted curves?
#' Default: \code{FALSE}.
#' @param Mplot Optional, boolean. Plot fitted mortality curve? Default is \code{FALSE}.
#' @param tlab Optional, character. specifies units for x-axis of plots. Default is "years".
#' @param silent Optional, boolean. Stops all print and plot options (you still get most warnings
#'           and all error messages). Default is \code{FALSE}. A third option, \code{"verbose"}, also
#'           enables the trace setting in the ms (minimum sum) S-Plus routine.
#' @export
#' @return vector of final MLE r, s, lambda, beta, gamma, alpha estimates.
#' standard errors of MLE parameter estimates (if se = <population> is specified).
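#' @examples \dontrun{
#' # A minimal usage sketch along the same lines as the package documentation
#' # example; the six starting values are illustrative only.
#' data(swedish_females)
#' swe <- swedish_females
#' time <- 0:max(swe$age)
#' survival_fraction <- swe$lx / swe$lx[swe$age == 0]
#' vitality.6p(time = time,
#'             sdata = survival_fraction,
#'             init.params = c(0.012, 0.01, 0.1, 0.1, 0.1, 1),
#'             se = swe$Lx[swe$age == 0],
#'             Mplot = FALSE)
#' }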
vitality.6p <- function(time = 0:(length(sdata)-1),
sdata,
init.params = FALSE,
lower = c(0, 0, 0, 0, 0, 0),
upper = c(100,50,100,50,50,10),
rc.data = FALSE,
se = FALSE,
datatype = c("CUM", "INC"),
ttol = 1e-6,
pplot = TRUE,
Iplot = FALSE,
Mplot = FALSE,
tlab = "years",
silent = FALSE) {
  # --Check/prepare Data---
  datatype <- match.arg(datatype)
  if (length(time) != length(sdata)) {
    stop("time and sdata must have the same length")
  }
  in.time <- time
dTmp <- dataPrep(time, sdata, datatype, rc.data)
time <- dTmp$time
sfract <- dTmp$sfract
x1 <- dTmp$x1
x2 <- dTmp$x2
Ni <- dTmp$Ni
rc.data <- dTmp$rc.data
if(in.time[1]>0){
time <- time[-1]
sfract <- sfract[-1]
x1 <- c(x1[-c(1,length(x1))], x1[1])
x2 <- c(x2[-c(1,length(x2))], 0)
Ni <- Ni[-1]
rc.data <- rc.data[-1]
}
# --Produce initial parameter values---
if(length(init.params) == 1) {
ii <- indexFinder(sfract, 0.5)
if (ii == -1) {
warning("ERROR: no survival fraction data below the .5 level.\n
Cannot use the initial r s l b g a estimator. You must supply initial r s l b g a estimates")
return(-1)
}
else rsk <- c(1/time[ii], 0.01, 0.1, 0.1, .1, 1)
} else { # use user specified init params
rsk <- init.params
}
  if (rsk[1] == -1) {
    stop("invalid initial parameter values")
  }
if (silent == FALSE) {
print(cbind(c("Initial r", "Initial s", "Initial lambda", "Initial beta", "Initial gamma", "Initial alpha"), rsk))
}
# --create dataframe for sa---
dtfm <- data.frame(x1 = x1, x2 = x2, Ni = Ni)
# --run MLE fitting routine---
  # --conduct Newton-Raphson algorithm directly --
fit.nlm <- nlminb(start = rsk, objective = logLikelihood.6p, lower = lower,
upper = upper, xx1 = x1, xx2 = x2, NNi = Ni)
# --save final param estimates---
r.final <- fit.nlm$par[1]
s.final <- abs(fit.nlm$par[2])
lambda.final <- fit.nlm$par[3]
beta.final <- fit.nlm$par[4]
gamma.final <- fit.nlm$par[5]
alpha.final <- fit.nlm$par[6]
  mlv <- fit.nlm$objective
if (silent == FALSE) {print(cbind(c("estimated r", "estimated s", "estimated lambda",
"estimated beta", "estimated gamma", "estimated alpha", "minimum -loglikelihood value"),
c(r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final, mlv)))}
# == end MLE fitting == =
# --compute standard errors---
if (se != FALSE) {
s.e. <- stdErr.6p(r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final, x1, x2, Ni, se)
if (silent == FALSE){print(cbind(c("sd for r", "sd for s", "sd for lambda", "sd for beta", "sd for gamma", "sd for alpha"), s.e.))}
}
# --plotting and goodness of fit---
if (pplot != FALSE) {
plotting.6p(r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final, mlv, time, sfract, x1, x2, Ni, pplot, Iplot, Mplot, tlab, rc.data)
}
# ............................................................................................
# --return final param values---
  digits <- 5  # significant digits of output
  if (se != FALSE) {
    params <- c(r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final)
    pvalue <- c(1-pnorm(r.final/s.e.[1]), 1-pnorm(s.final/s.e.[2]), 1-pnorm(lambda.final/s.e.[3]),
                1-pnorm(beta.final/s.e.[4]), 1-pnorm(gamma.final/s.e.[5]), 1-pnorm(alpha.final/s.e.[6]))
    std <- c(s.e.[1], s.e.[2], s.e.[3], s.e.[4], s.e.[5], s.e.[6])
    out <- signif(cbind(params, std, pvalue), digits)
    return(out)
  } else {
    return(signif(c(r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final), digits))
  }
}
#' The intrinsic cumulative survival distribution function for 2-process 6-parameter
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @return vector of survival fraction values
SurvFn.in.6p <-function(xx,r,s)
# The cumulative survival distribution function.
{
yy<-s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx=0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 >250) {
q <-tmp3/250
if (tmp3 >1500) {
q <-tmp3/500
}
valueFF <-(1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))#*exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1) +gamma/alpha*(exp(-alpha*xx)-1))
} else {
valueFF <-(1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))#*exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1)+ gamma/alpha*(exp(-alpha*xx)-1))
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function. Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
#' The extrinsic cumulative survival distribution function for 2-process 6-parameter
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @param gamma gamma value
#' @param alpha alpha value
#' @return vector of survival fraction values
SurvFn.ex.6p <-function(xx,r,s,lambda,beta,gamma,alpha)
# The cumulative survival distribution function.
{
alpha <- 1/alpha # need this for inverse in child mortality component added 9-16-2014
yy<-s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx=0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 >250) {
q <-tmp3/250
if (tmp3 >1500) {
q <-tmp3/500
}
#valueFF <- exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1) +gamma/alpha*(exp(-alpha*xx)-1))
valueFF <- exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1) +alpha*gamma*(exp(-xx/alpha)-1))
} else {
#valueFF <-exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1)+ gamma/alpha*(exp(-alpha*xx)-1))
valueFF <-exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1)+ alpha*gamma*(exp(-xx/alpha)-1))
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function. Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
#' Plotting function for 2-process vitality model. 6-param
#'
#' None.
#'
#' @param r.final r estimate
#' @param s.final s estimate
#' @param lambda.final lambda estimate
#' @param beta.final beta estimate
#' @param gamma.final gamma estimate
#' @param alpha.final alpha estimate
#' @param mlv TODO mlv
#' @param time time vector
#' @param sfract survival fraction
#' @param x1 Time 1
#' @param x2 Time 2
#' @param Ni Initial population
#' @param pplot Boolean. Plot cumulative survival fraction?
#' @param Iplot Boolean. Plot incremental mortality?
#' @param Mplot Boolean. Plot mortality rate?
#' @param tlab Character, label for time axis
#' @param rc.data Boolean, right-censored data?
plotting.6p <- function(r.final,
s.final,
lambda.final,
beta.final,
gamma.final,
alpha.final,
mlv,
time,
sfract,
x1,
x2,
Ni,
pplot,
Iplot,
Mplot,
tlab,
rc.data) {
# --plot cumulative survival---
if (pplot != FALSE) {
#win.graph()
ext <- max(pplot, 1)
par(mfrow = c(1, 1))
len <- length(time)
tmax <- ext * time[len]
plot(time, sfract, xlab = tlab, ylab = "survival fraction",
ylim = c(0, 1), xlim = c(time[1], tmax), col = 1)
xxx <- seq(0, tmax, length = 200)
lines(xxx, SurvFn.6p(xxx, r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final), col = 2, lwd=2)
lines(xxx, SurvFn.in.6p(xxx, r.final, s.final), col=3, lwd=2, lty=3)
lines(xxx, SurvFn.ex.6p(xxx, r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final), col=4, lwd=2, lty=2)
title("Cumulative Survival Data and Vitality Model Fitting")
legend(x="bottomleft", bty="n", legend=c("Total", "Intrinsic", "Extrinsic"), lty=c(1,3,2), lwd=c(2,2,2), col=c(2,3,4))
}
if ( Mplot != FALSE) {
lx <- round(sfract*100000)
lx <- c(lx,0)
ndx <- -diff(lx)
lxpn <- lx[-1]
n <- c(diff(time), 1000)
nax <- .5*n
nLx <- n * lxpn + ndx * nax
mu.x <- ndx/nLx
mu.x[length(mu.x)] <- NA
# qx <- Ni/sfract
# mu.x <- 2 * qx/(2 - qx)
#win.graph()
ext <- max(pplot, 1)
par(mfrow = c(1, 1))
len <- length(time)
tmax <- ext * time[len]
xxx <- seq(0, tmax, length = 200)
mu.i <- mu.vd1.6p(xxx, r.final, s.final)
mu.e <- mu.vd2.6p(xxx, r.final, lambda.final, beta.final, gamma.final, alpha.final)
mu.ea <- mu.vd3.6p(xxx, r.final, lambda.final, beta.final)
mu.ec <- mu.vd4.6p(xxx, gamma.final, alpha.final)
mu.t <- mu.vd.6p(xxx, r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final)
plot(time, mu.x, xlim = c(time[1], tmax), xlab = tlab, ylab = "estimated mortality rate", log = "y",
main = "Log Mortality Data and Vitality Model Fitting", ylim=c(min(mu.x, mu.t,na.rm=T),max(mu.x, mu.t,na.rm=T)))
#plot(xxx, mu.t, xlim = c(0, tmax), xlab = tlab, ylab = "estimated mortality rate", log = "y",
# type = "l")
lines(xxx, mu.t, lwd=2, col=2)
lines(xxx, mu.i, col = 3, lty = 3, lwd=2)
lines(xxx, mu.ea, col=4, lty=2, lwd=2)
lines(xxx, mu.ec, col=5, lty=4, lwd=2)
legend(x="bottomright", legend=c("data (approximate)", expression(mu[total]),expression(mu[i]),expression(mu["e,a"]),expression(mu["e,c"])), lty=c(NA,1,3,2,4), pch=c(1,NA,NA,NA,NA), col=c(1,2,3,4,5), lwd=c(1,rep(2,4)), bty="n")
}
# --Incremental mortality plot
if (Iplot != FALSE) {
#win.graph()
par(mfrow = c(1, 1))
ln <- length(Ni)-1
x1 <- x1[1:ln]
x2 <- x2[1:ln]
Ni <- Ni[1:ln]
ln <- length(Ni)
#scale <- (x2-x1)[Ni == max(Ni)]
scale <- max( (x2-x1)[Ni == max(Ni)] )
ext <- max(pplot, 1)
npt <- 200*ext
xxx <- seq(x1[1], x2[ln]*ext, length = npt)
xx1 <- xxx[1:(npt-1)]
xx2 <- xxx[2:npt]
sProbI <- survProbInc.6p(r.final, s.final, lambda.final, beta.final, gamma.final, alpha.final, xx1, xx2)
ytop <- 1.1 * max(max(sProbI/(xx2-xx1)), Ni/(x2-x1)) * scale
plot((x1+x2)/2, Ni*scale/(x2-x1), ylim = c(0, ytop), xlim = c(time[1], ext*x2[ln]),
xlab = tlab, ylab = "incremental mortality")
title("Probability Density Function")
lines((xx1+xx2)/2, sProbI*scale/(xx2-xx1), col=2)
}
#return()
}
#' The cumulative survival distribution function for 2-process 6-parameter
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @param gamma gamma value
#' @param alpha alpha value
#' @return vector of survival fraction values
SurvFn.6p <- function(xx,r,s,lambda,beta,gamma,alpha){
alpha <- 1/alpha # need this for inverse in child mortality component added 9-16-2014
yy <- s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx = 0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 > 250) {
q <- tmp3/250
if (tmp3 > 1500) {
q <- tmp3/500
}
valueFF <-(1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))*exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1) +alpha*gamma*(exp(-xx/alpha)-1)) # This requires 1/alpha
} else {
valueFF <-(1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))*exp(-lambda*exp(-1/beta)/(r/beta)*(exp(r*xx/beta)-1)+ alpha*gamma*(exp(-xx/alpha)-1))
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function. Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
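## A small sketch (hypothetical parameters): the total survival curve factors
## into the intrinsic and extrinsic components, so these should agree up to
## the overflow safeguard.
# ages <- 1:100
# tot  <- SurvFn.6p(ages, 0.012, 0.01, 0.1, 0.1, 0.1, 1)
# fact <- SurvFn.in.6p(ages, 0.012, 0.01) *
#         SurvFn.ex.6p(ages, 0.012, 0.01, 0.1, 0.1, 0.1, 1)
# all.equal(tot, fact)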
#' Calculates incremental survival probability for 2-process 6-parameter r, s, lambda, beta, gamma, alpha
#'
#' None
#'
#' @param r r value
#' @param s s value
#' @param lambda lambda value
#' @param beta beta value
#' @param gamma gamma value
#' @param alpha alpha value
#' @param xx1 xx1 vector
#' @param xx2 xx2 vector
#' @return Incremental survival probabilities.
survProbInc.6p <- function(r, s, lambda, beta, gamma, alpha, xx1, xx2){
value.iSP <- -(SurvFn.6p(xx2, r, s, lambda, beta, gamma, alpha) - SurvFn.6p(xx1, r, s, lambda, beta, gamma, alpha))
value.iSP[value.iSP < 1e-18] <- 1e-18 # safeguards against taking Log(0)
value.iSP
}
#' Gives log likelihood of 2-process 6-parameter model
#'
#' None
#'
#' @param par vector of parameter(r, s, lambda, beta, gamma, alpha)
#' @param xx1 xx1 vector
#' @param xx2 xx2 vector
#' @param NNi survival fractions
#' @return log likelihood
logLikelihood.6p <- function(par, xx1, xx2, NNi) {
  # --calculate incremental survival probability--- (safeguarded > 1e-18 to prevent log(0))
iSP <- survProbInc.6p(par[1], par[2], par[3], par[4], par[5], par[6], xx1, xx2)
loglklhd <- -NNi*log(iSP)
return(sum(loglklhd))
}
#' Standard errors for 6-param r, s, lambda, beta, gamma, alpha
#'
#' Note: if k <= 0, cannot find std err for k.
#'
#' @param r r value
#' @param s s value
#' @param k k value (the lambda estimate)
#' @param u u value (the beta estimate)
#' @param g g value (the gamma estimate)
#' @param a a value (the alpha estimate)
#' @param x1 age 1 (corresponding to 1:(t-1) and 2:t)
#' @param x2 age 2
#' @param Ni survival fraction
#' @param pop initial population (total population of the study)
#' @return standard errors for r, s, k (lambda), u (beta), g (gamma), a (alpha).
stdErr.6p <- function(r, s, k, u, g, a, x1, x2, Ni, pop) {
#a <- 1/a #???
LL <- function(va, vb, vc, vd, ve, vf, r, s, k, u, g, a, x1, x2, Ni) {logLikelihood.6p(c(r+va, s+vb, k+vc, u+vd, g+ve, a+vf), x1, x2, Ni)}
#initialize hessian for storage
hess <- matrix(0, nrow = 6, ncol = 6)
#set finite difference intervals
h <- .001
hr <- abs(h*r)
hs <- h*s*.1
hk <- h*k*.1
hu <- h*u*.1
hg <- h*g*.1
ha <- h*a*.1
  # Compute second derivatives (using 5 point)
# LLrr
f0 <- LL(-2*hr, 0, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f1 <- LL(-hr, 0, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f2 <- LL(0, 0, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f3 <- LL(hr, 0, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f4 <- LL(2*hr, 0, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hr)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hr)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hr)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hr)
LLrr <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hr)
# LLss
f0 <- LL(0, -2*hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f1 <- LL(0, -hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
# f2 as above
f3 <- LL(0, hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f4 <- LL(0, 2*hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hs)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hs)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hs)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hs)
LLss <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hs)
# LLkk
f0 <- LL(0, 0, -2*hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f1 <- LL(0, 0, -hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
# f2 as above
f3 <- LL(0, 0, hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f4 <- LL(0, 0, 2*hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hk)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hk)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hk)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hk)
LLkk <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hk)
# LLuu
f0 <- LL(0, 0, 0, -2*hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f1 <- LL(0, 0, 0, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
# f2 as above
f3 <- LL(0, 0, 0, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
f4 <- LL(0, 0, 0, 2*hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hu)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hu)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hu)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hu)
LLuu <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hu)
# LLgg
f0 <- LL(0, 0, 0, 0, -2*hg, 0, r, s, k, u, g, a, x1, x2, Ni)
f1 <- LL(0, 0, 0, 0, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
# f2 as above
f3 <- LL(0, 0, 0, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
f4 <- LL(0, 0, 0, 0, 2*hg, 0, r, s, k, u, g, a, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hg)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hg)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hg)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hg)
LLgg <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*hg)
# LLaa
f0 <- LL(0, 0, 0, 0, 0, -2*ha, r, s, k, u, g, a, x1, x2, Ni)
f1 <- LL(0, 0, 0, 0, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
# f2 as above
f3 <- LL(0, 0, 0, 0, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
f4 <- LL(0, 0, 0, 0, 0, 2*ha, r, s, k, u, g, a, x1, x2, Ni)
fp0 <- (-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*ha)
fp1 <- (-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*ha)
fp3 <- (-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*ha)
fp4 <- (3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*ha)
LLaa <- (fp0 -8*fp1 +8*fp3 -fp4)/(12*ha)
#-------end second derivs---
# do mixed partials (4 points)
# LLrs
m1 <- LL(hr, hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(-hr, hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(-hr, -hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(hr, -hs, 0, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLrs <- (m1 -m2 +m3 -m4)/(4*hr*hs)
# LLrk
m1 <- LL(hr, 0, hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(-hr, 0, hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(-hr, 0, -hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(hr, 0, -hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLrk <- (m1 -m2 +m3 -m4)/(4*hr*hk)
# LLru
m1 <- LL(hr, 0, 0, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(-hr, 0, 0, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(-hr, 0, 0, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(hr, 0, 0, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLru <- (m1 -m2 +m3 -m4)/(4*hr*hu)
# LLrg
m1 <- LL(hr, 0, 0, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(-hr, 0, 0, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(-hr, 0, 0, 0, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(hr, 0, 0, 0, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
LLrg <- (m1 -m2 +m3 -m4)/(4*hr*hg)
# LLra
m1 <- LL(hr, 0, 0, 0, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(-hr, 0, 0, 0, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(-hr, 0, 0, 0, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(hr, 0, 0, 0, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
LLra <- (m1 -m2 +m3 -m4)/(4*hr*ha)
# LLsk
m1 <- LL(0, hs, hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, -hs, hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, -hs, -hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, hs, -hk, 0, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLsk <- (m1 -m2 +m3 -m4)/(4*hs*hk)
# LLsu
m1 <- LL(0, hs, 0, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, -hs, 0, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, -hs, 0, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, hs, 0, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLsu <- (m1 -m2 +m3 -m4)/(4*hu*hs)
# LLsg
m1 <- LL(0, hs, 0, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, -hs, 0, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, -hs, 0, 0, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, hs, 0, 0,-hg, 0, r, s, k, u, g, a, x1, x2, Ni)
LLsg <- (m1 -m2 +m3 -m4)/(4*hg*hs)
# LLsa
m1 <- LL(0, hs, 0, 0, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, -hs, 0, 0, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, -hs, 0, 0, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, hs, 0, 0, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
LLsa <- (m1 -m2 +m3 -m4)/(4*ha*hs)
# LLku
m1 <- LL(0, 0, hk, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, 0, hk, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, 0, -hk, -hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, 0, -hk, hu, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLku <- (m1 -m2 +m3 -m4)/(4*hu*hk)
# LLkg
m1 <- LL(0, 0, hk, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, 0, hk, 0, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, 0, -hk, 0, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, 0, -hk, 0, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
LLkg <- (m1 -m2 +m3 -m4)/(4*hg*hk)
# LLka
m1 <- LL(0, 0, hk, ha, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, 0, hk, -ha, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, 0, -hk, -ha, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, 0, -hk, ha, 0, 0, r, s, k, u, g, a, x1, x2, Ni)
LLka <- (m1 -m2 +m3 -m4)/(4*ha*hk)
# LLug
m1 <- LL(0, 0, 0, hu, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, 0, 0, -hu, hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, 0, 0, -hu, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, 0, 0, hu, -hg, 0, r, s, k, u, g, a, x1, x2, Ni)
LLug <- (m1 -m2 +m3 -m4)/(4*hg*hu)
# LLua
m1 <- LL(0, 0, 0, hu, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, 0, 0, -hu, 0, ha, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, 0, 0, -hu, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, 0, 0, hu, 0, -ha, r, s, k, u, g, a, x1, x2, Ni)
LLua <- (m1 -m2 +m3 -m4)/(4*ha*hu)
# LLga
m1 <- LL(0, 0, 0, 0, hg, ha, r, s, k, u, g, a, x1, x2, Ni)
m2 <- LL(0, 0, 0, 0, -hg, ha, r, s, k, u, g, a, x1, x2, Ni)
m3 <- LL(0, 0, 0, 0, -hg, -ha, r, s, k, u, g, a, x1, x2, Ni)
m4 <- LL(0, 0, 0, 0, hg, -ha, r, s, k, u, g, a, x1, x2, Ni)
LLga <- (m1 -m2 +m3 -m4)/(4*ha*hg)
diag(hess) <- c(LLrr, LLss, LLkk, LLuu, LLgg, LLaa)*pop
hess[2, 1] = hess[1, 2] <- LLrs*pop
hess[3, 1] = hess[1, 3] <- LLrk*pop
hess[4, 1] = hess[1, 4] <- LLru*pop
hess[5, 1] = hess[1, 5] <- LLrg*pop
hess[6, 1] = hess[1, 6] <- LLra*pop
hess[3, 2] = hess[2, 3] <- LLsk*pop
hess[4, 2] = hess[2, 4] <- LLsu*pop
hess[5, 2] = hess[2, 5] <- LLsg*pop
hess[6, 2] = hess[2, 6] <- LLsa*pop
hess[4, 3] = hess[3, 4] <- LLku*pop
hess[5, 3] = hess[3, 5] <- LLkg*pop
hess[6, 3] = hess[3, 6] <- LLka*pop
hess[5, 4] = hess[4, 5] <- LLug*pop
hess[6, 4] = hess[4, 6] <- LLua*pop
hess[6, 5] = hess[5, 6] <- LLga*pop
#print(hess)
hessInv <- solve(hess)
#print(hessInv)
# hessian.i <- fdHess(pars=c(r,s,k,u,g,a), fun=logLikelihood.6p, xx1=x1, xx2=x2, NNi=Ni)$Hessian
# hessInv <- solve(hessian.i)
#compute correlation matrix:
sz <- 6
corr <- matrix(0, nrow = sz, ncol = sz)
for (i in 1:sz) {
for (j in 1:sz) {
corr[i, j] <- hessInv[i, j]/sqrt(abs(hessInv[i, i]*hessInv[j, j]))
}
}
#print(corr)
if ( abs(corr[2, 1]) > .98 ) {
warning("
WARNING: parameters r and s appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[3, 1]) > .98 ) {
warning("
WARNING: parameters r and lambda appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[4, 1]) > .98 ) {
warning("
WARNING: parameters r and beta appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[5, 1]) > .98 ) {
warning("
WARNING: parameters r and gamma appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[6, 1]) > .98 ) {
warning("
WARNING: parameters r and alpha appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[3, 2]) > .98 ) {
warning("
WARNING: parameters s and lambda appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[4, 2]) > .98 ) {
warning("
WARNING: parameters s and beta appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[5, 2]) > .98 ) {
warning("
WARNING: parameters s and gamma appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
if ( sz == 6 && abs(corr[6, 2]) > .98 ) {
warning("
WARNING: parameters s and alpha appear to be closely correlated for this data set.
s.e. may fail for these parameters. ")
}
se <- sqrt(diag(hessInv))
# Approximate s.e. for cases where calculation of s.e. failed:
if( sum( is.na(se) ) > 0 ) {
seNA <- is.na(se)
se12 <- sqrt(diag(solve(hess[c(1, 2) , c(1, 2) ])))
se13 <- sqrt(diag(solve(hess[c(1, 3) , c(1, 3) ])))
se14 <- sqrt(diag(solve(hess[c(1, 4) , c(1, 4) ])))
se15 <- sqrt(diag(solve(hess[c(1, 5) , c(1, 5) ])))
se16 <- sqrt(diag(solve(hess[c(1, 6) , c(1, 6) ])))
se23 <- sqrt(diag(solve(hess[c(2, 3) , c(2, 3) ])))
se24 <- sqrt(diag(solve(hess[c(2, 4) , c(2, 4) ])))
se25 <- sqrt(diag(solve(hess[c(2, 5) , c(2, 5) ])))
se26 <- sqrt(diag(solve(hess[c(2, 6) , c(2, 6) ])))
se34 <- sqrt(diag(solve(hess[c(3, 4) , c(3, 4) ])))
se35 <- sqrt(diag(solve(hess[c(3, 5) , c(3, 5) ])))
se36 <- sqrt(diag(solve(hess[c(3, 6) , c(3, 6) ])))
se45 <- sqrt(diag(solve(hess[c(4, 5) , c(4, 5) ])))
se46 <- sqrt(diag(solve(hess[c(4, 6) , c(4, 6) ])))
se56 <- sqrt(diag(solve(hess[c(5, 6) , c(5, 6) ])))
if(seNA[1]) {
if(!is.na(se12[1]) ){
se[1] = se12[1]
warning(" * s.e. for parameter r is approximate. ")
}
else if(!is.na(se13[1])){
se[1] = se13[1]
warning(" * s.e. for parameter r is approximate. ")
}
else if(!is.na(se14[1])){
se[1] = se14[1]
warning(" * s.e. for parameter r is approximate. ")
}
else if(!is.na(se15[1])){
se[1] = se15[1]
warning(" * s.e. for parameter r is approximate. ")
}
else if(!is.na(se16[1])){
se[1] = se16[1]
warning(" * s.e. for parameter r is approximate. ")
}
else warning(" * unable to calculate or approximate s.e. for parameter r. ")
}
if(seNA[2]) {
if(!is.na(se12[2]) ){
se[2] = se12[2]
warning(" * s.e. for parameter s is approximate. ")
}
else if(!is.na(se23[1])){
se[2] = se23[1]
warning(" * s.e. for parameter s is approximate. ")
}
else if(!is.na(se24[1])){
se[2] = se24[1]
warning(" * s.e. for parameter s is approximate. ")
}
else if(!is.na(se25[1])){
se[2] = se25[1]
warning(" * s.e. for parameter s is approximate. ")
}
else if(!is.na(se26[1])){
se[2] = se26[1]
warning(" * s.e. for parameter s is approximate. ")
}
else warning(" * unable to calculate or approximate s.e. for parameter s. ")
}
if(seNA[3]) {
if(!is.na(se13[2]) ){
se[3] = se13[2]
warning(" * s.e. for parameter lambda is approximate. ")
}
else if(!is.na(se23[2])){
se[3] = se23[2]
warning(" * s.e. for parameter lambda is approximate. ")
}
else if(!is.na(se34[1])){
se[3] = se34[1]
warning(" * s.e. for parameter lambda is approximate. ")
}
else if(!is.na(se35[1])){
se[3] = se35[1]
warning(" * s.e. for parameter lambda is approximate. ")
}
else if(!is.na(se36[1])){
se[3] = se36[1]
warning(" * s.e. for parameter lambda is approximate. ")
}
else warning(" * unable to calculate or approximate s.e. for parameter lambda. ")
}
if(seNA[4]) {
if(!is.na(se14[2]) ){
se[4] = se14[2]
warning(" * s.e. for parameter beta is approximate. ")
}
else if(!is.na(se24[2])){
se[4] = se24[2]
warning(" * s.e. for parameter beta is approximate. ")
}
else if(!is.na(se34[2])){
se[4] = se34[2]
warning(" * s.e. for parameter beta is approximate. ")
}
else if(!is.na(se45[1])){
se[4] = se45[1]
warning(" * s.e. for parameter beta is approximate. ")
}
else if(!is.na(se46[1])){
se[4] = se46[1]
warning(" * s.e. for parameter beta is approximate. ")
}
else warning(" * unable to calculate or approximate s.e. for parameter beta. ")
}
if(seNA[5]) {
if(!is.na(se15[2]) ){
se[5] = se15[2]
warning(" * s.e. for parameter gamma is approximate. ")
}
else if(!is.na(se25[2])){
se[5] = se25[2]
warning(" * s.e. for parameter gamma is approximate. ")
}
else if(!is.na(se35[2])){
se[5] = se35[2]
warning(" * s.e. for parameter gamma is approximate. ")
}
else if(!is.na(se45[2])){
se[5] = se45[2]
warning(" * s.e. for parameter gamma is approximate. ")
}
else if(!is.na(se56[1])){
se[5] = se56[1]
warning(" * s.e. for parameter gamma is approximate. ")
}
else warning(" * unable to calculate or approximate s.e. for parameter gamma. ")
}
if(seNA[6]) {
if(!is.na(se16[2]) ){
se[6] = se16[2]
warning(" * s.e. for parameter alpha is approximate. ")
}
else if(!is.na(se26[2])){
se[6] = se26[2]
warning(" * s.e. for parameter alpha is approximate. ")
}
else if(!is.na(se36[2])){
se[6] = se36[2]
warning(" * s.e. for parameter alpha is approximate. ")
}
else if(!is.na(se46[2])){
se[6] = se46[2]
warning(" * s.e. for parameter alpha is approximate. ")
}
else if(!is.na(se56[2])){
se[6] = se56[2]
warning(" * s.e. for parameter alpha is approximate. ")
}
else warning(" * unable to calculate or approximate s.e. for parameter alpha. ")
}
}
#######################
return(se)
}
#' Intrinsic cumulative survival distribution
#'
#' None
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @return Cumulative survival distribution
SurvFn.h.6p <- function(xx, r, s)
# The cumulative survival distribution function.
{
yy<-s^2*xx
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx=0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if (tmp3 >250) {
q <-tmp3/250
if (tmp3 >1500) {
q <-tmp3/500
}
valueFF <-(1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))
} else {
valueFF <-(1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function. Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
## ---- end of file: vitality/R/vitality.6p.R ----
## Three parameter simple model r s k, no childhood hook
#' Fitting routine for the 2-process, 3-parameter vitality model.
#'
#' Based on code by D.H. Salinger, J.J. Anderson and O. Hamel (2003).
#' "A parameter estimation routine for the vitality-based survival model."
#' Ecological Modelling 166(3): 287--294.
#'
#' @param time Required. Vector. Time component of data: time from the start of the experiment.
#' @param sdata Required. Survival or mortality data. The default expects cumulative
#' survival fraction. If providing incremental mortality fraction
#' instead, use option: datatype = "INC".
#' The default also expects the data to represent full mortality.
#' Otherwise, use option: rc.data = T to indicate right censored data.
#' @param rc.data Optional, boolean. Specifies Right Censored data. If the data does not
#' represent full mortality, it is probably right censored. The default
#' is rc.data = F. A third option is rc.data = "TF". Use this case to add
#' a near-term zero survival point to data which displays nearly full
#' mortality ( <.01 survival at end). If rc.data = F but the data does
#' not show full mortality, rc.data = "TF" will be
#' invoked automatically.
#' @param se Optional. Calculates the standard errors for the MLE parameters.
#'           Default is FALSE. Set equal to the initial study population size to
#'           compute standard errors.
#' @param gfit Provides a Pearson C type test for goodness of fit.
#' Default is \code{gfit=F}. Set to initial study population as with \code{se} for
#' computing goodness of fit.
#' @param datatype Optional. Defaults to \code{"CUM"} for cumulative survival fraction data.
#' Use \code{"INC"} - for incremental mortality fraction data.
#' @param ttol Optional. Stopping criteria tolerance. Default is 1e-6.
#'           Specify as ttol = .0001. If one of the likelihood plots (esp. for "k") does not look optimal,
#'           try decreasing ttol. If the program crashes, try increasing ttol.
#' @param init.params Optional. Initial parameter values, specified as
#'           \code{init.params = c(r, s, k)} in that order
#'           (e.g. init.params = c(.1, .02, .3)).
#' @param lower Optional. Vector of lower bounds for the parameter estimates in the optimization.
#' @param upper Optional. Vector of upper bounds for the parameter estimates in the optimization.
#' @param pplot Optional, boolean. Plots of cumulative survival for both data and fitted curves?
#'           Default \code{TRUE}; \code{FALSE} produces no plots. Note: the incremental
#'           mortality plot is a continuous representation of the appropriately-
#'           binned histogram of incremental mortalities.
#' @param Iplot Optional, boolean. Incremental mortality for both data and fitted curves?
#' Default: \code{FALSE}.
#' @param lplot Provides likelihood function plotting (default=\code{FALSE}).
#' Note: these plots are not "likelihood profiles" in that while one
#' parameter is varied, the others are held fixed, rather than
#' re-optimized. (must also have \code{pplot=T}.)
#' @param cplot Provides a likelihood contour plot for a range of r and s values
#' (can be slow so default is \code{FALSE}). Must also have lplot=T (and pplot=T)
#' to get contour plots.
#' @param tlab Optional, character. specifies units for x-axis of plots. Default is "days".
#' @param silent Optional, boolean. Stops all print and plot options (you still get most warnings
#'           and all error messages). Default is \code{FALSE}. A third option, \code{"verbose"}, also
#'           enables the trace setting in the ms (minimum sum) S-Plus routine.
#' @export
#' @return vector of final MLE r, s, k parameter estimates.
#' standard errors of MLE parameter estimates (if se = <population> is specified).
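#' @examples \dontrun{
#' # A minimal sketch on simulated cumulative survival data; the survival
#' # curve and the starting values below are illustrative only.
#' t <- 0:30
#' sprob <- exp(-0.005 * t^1.5)  # hypothetical survival curve
#' vitality.k(time = t, sdata = sprob, rc.data = TRUE,
#'            init.params = c(.1, .02, .003),
#'            pplot = FALSE, silent = TRUE)
#' }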
vitality.k <- function(time, sdata, rc.data = FALSE, se = FALSE, gfit = FALSE,
                       datatype = c("CUM", "INC"), ttol = 1e-6, init.params = FALSE,
                       lower = c(0, -1, 0), upper = c(100, 50, 50), pplot = TRUE,
                       tlab = "days", lplot = FALSE, cplot = FALSE, Iplot = FALSE,
                       silent = FALSE)
#
#
#
# Vitality based survival model: parameter fitting routine: VERSION: 11/14/2014
#
# REQUIRED PARAMETERS:
# time - time component of data: time from experiment start. Time should
# start after the imposition of a stressor is completed.
# sdata - survival or mortality data. The default expects cumulative
# survival fraction. If providing incremental mortality fraction
# instead, use option: datatype="INC".
# The default also expects the data to represent full mortality.
# Otherwise, use option: rc.data=T to indicate right censored data.
#
# OPTIONAL PARAMETERS:
# rc.data =T - specifies Right Censored data. If the data does not
# represent full mortality, it is probably right censored. The default
# is rc.data=F. A third option is rc.data="TF". Use this case to add
# a near-term zero survival point to data which displays nearly full
# mortality ( <.01 survival at end). If rc.data=F but the data does
# not show full mortality, rc.data="TF" will be
# invoked automatically.
# se =<population> calculates the standard errors for the MLE parameters.
# Default is se=F. The initial study population is necessary for
# computing these standard errors.
# gfit =<population> provides a Pearson C type test for goodness of fit.
# Default is gfit=F. The initial study population is necessary for
# computing goodness of fit.
# datatype ="CUM" -cumulative survival fraction data- is the default.
# Other option: datatype="INC" - for incremental mortality fraction
# data. ttol (stopping criteria tolerence.) Default is .000001 .
# specify as ttol=.0001.
# If one of the liklihood plots (esp. for "k") does not look optimal,
# try decreasing ttol. If the program crashes, try increasing ttol.
# init.params =F has the routine choose initial parameter estimates for
# r,s,k (default: =F). If you wish to specify initial param values
# rather than have the routine choose them, specify
# init.params=c(r,s,k) in that order (eg. init.params=c(.1,.02,.003)).
# pplot =T provides plots of cumulative survival and incremental mortality -
# for both data and fitted curves (default: =T). pplot=F provides no
# plotting. A third option: pplot=n (n>=1) extends the time axis of
# the fitting plots (beyond the max time in data). For example:
# pplot=1.2 extends the time axis by 20%. (Note: the incremental
# mortality plot is a continuous representation of the appropriately-
# binned histogram of incremental mortalities.)
# tlab ="<time units>" specifies units for x-axis of plots. Default is
# tlab="days".
#    lplot =T provides likelihood function plotting (default =F).
# Note: these plots are not "likelihood profiles" in that while one
# parameter is varied, the others are held fixed, rather than
# re-optimized. (must also have pplot=T.)
# cplot =T provides a likelihood contour plot for a range of r and s values
# (can be slow so default is F). Must also have lplot=T (and pplot=T)
# to get contour plots.
# silent =T stops all print and plot options (still get most warning and all
# error messages) Default is F. A third option, silent="verbose" also
# enables the trace setting in the ms (minimum sum) S-Plus routine.
#
# RETURN:
# vector of final MLE r,s,k parameter estimates.
# standard errors of MLE parameter estimates (if se=<population> is
# specified).
#
{
# --Check/prepare Data---
  datatype <- match.arg(datatype, choices = c("CUM", "INC"))
if (length(time) != length(sdata)) {
stop("time and sdata must have the same length")
}
in.time <- time
dTmp <- dataPrep(time, sdata, datatype, rc.data)
time <- dTmp$time
sfract <- dTmp$sfract
x1 <- dTmp$x1
x2 <- dTmp$x2
Ni <- dTmp$Ni
rc.data <- dTmp$rc.data
if(in.time[1]>0){
time <- time[-1]
sfract <- sfract[-1]
x1 <- c(x1[-c(1,length(x1))], x1[1])
x2 <- c(x2[-c(1,length(x2))], 0)
Ni <- Ni[-1]
    # rc.data <- rc.data[-1] # only a single value of rc.data is needed; do NOT eliminate the first value.
}
rc.data <- rc.data[1] ; # WNB only use the first value, this was replicated to fill the dataframe in dataPrep
# --Produce initial parameter values---
tt <- time
sf <- sfract
if(length(init.params) == 1) {
ii <- indexFinder(sfract, 0.5)
if (ii == -1) {
warning("ERROR: no survival fraction data below the .5 level.\n
Cannot use the initial r s k estimator. You must supply initial r s k estimates")
return(-1)
}
#else rsk <- c(1/time[ii], 0.01, 0.1)
else slope <- (sf[ii]-sf[ii-1]) /(tt[ii]-tt[ii-1])
t50 <- tt[ii] + (0.5-sf[ii])/slope
nslope <- slope*t50
# Script for setting r.s.slope - the data frame used by function rsk.init
# to produce initial r,s estimates from an estimated slope of the survival
# curve at the inflection point.
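    # (Sketch of the idea: nslope = slope * t50 is the survival-curve slope
    # normalized by the median survival time t50; r and s are then read off
    # the table by linear interpolation in slope and rescaled by 1/t50 and
    # 1/sqrt(t50) respectively.)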
c1<-c(0.000,0.100,0.200,0.300,0.400,0.500,0.600,0.700,0.800,0.900,0.930,0.950,0.960,0.970,
0.980,0.990,0.991,0.992,0.993,0.994,0.995,0.996,0.997,0.998)
c2<-c(1.48260200,1.40208800,1.31746800,1.22796200,1.13249500,1.02951700,0.91664920,0.78987300,0.64132740,
0.45059940,0.37620130,0.31747940,0.28374720,0.24554230,0.20032630,0.14153800,0.13426380,0.12657460,
0.11839010,0.10959890,0.10004140,0.08947239,0.07747895,0.06325606)
c3<-c(-0.2143371,-0.2315594,-0.2518273,-0.2761605,-0.3061416,-0.3443959,-0.3956923,-0.4699249,-0.5925325,
-0.8638228,-1.0422494,-1.2411051,-1.3920767,-1.6126579,-1.9815621,-2.8115970,-2.9646636,-3.1455461,
-3.3638416,-3.6345703,-3.9827944,-4.4543775,-5.1451825,-6.3036320)
r.s.slope<-data.frame(c1,c2,c3)
dimnames(r.s.slope)[[2]]<-c("r","s","slope")
rm(c1)
rm(c2)
rm(c3)
rp<-r.s.slope[,1]
sp<-r.s.slope[,2]
slope.p<-r.s.slope[,3]
sze<-length(slope.p)
if(slope.p[sze] < nslope && nslope < slope.p[1]) { # check if normalized slope (nslope) is on the chart.
for (i in 2:sze) {
if ( (slope.p[i-1] - nslope)*(slope.p[i] - nslope) < 0.0) {
rri<-( rp[i-1] + (rp[i]-rp[i-1])*(nslope-slope.p[i-1])/(slope.p[i]-slope.p[i-1]) )/t50
ssi<-( sp[i-1] + (sp[i]-sp[i-1])*(nslope-slope.p[i-1])/(slope.p[i]-slope.p[i-1]) )/sqrt(t50)
break
}
}
} else {
if (nslope <= slope.p[sze]) {
rri<-rp[sze]/t50
ssi<-sp[sze]/sqrt(t50)
}
else {
rri<-rp[1]/t50
ssi<-sp[1]/sqrt(t50)
}
}
ssi<-ssi/1.1 # ssi was consistently overestimated above.
# --estimate initial k---
#use rri,ssi and a data point (tt[ii],sf[ii]) to solve for kki
# ..using the actual survival function.
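    # Since S(t) = S0(t; r, s) * exp(-k*t), solving at the chosen data point
    # gives k = -log(sf/S0)/t, which is the expression used below.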
ii<-indexFinder(sf,.94)-1
  if (ii <= 1) { # In case no data points between surv=1 and surv=.94.
ii<-2
warning(message="WARNING: Initial time step may be too long.")
}
kki <- -(1/tt[ii])*log(sf[ii]/SurvFn.k(tt[ii],rri,ssi,0))
if (kki <= 0) {
kki <- (1.0 - sf[ii])/tt[ii]
}
rsk <- c(rri, ssi, kki)
} else { # use user specified init params
rsk <- init.params
}
  if (rsk[1] == -1) {
    stop("initial parameter estimation failed; supply init.params")
  }
if (silent == FALSE) {
print(cbind(c("Initial r", "initial s", "initial k"), rsk))
}
# --create dataframe for sa---
dtfm <- data.frame(x1 = x1, x2 = x2, Ni = Ni)
# --run MLE fitting routine---
  # --conduct Newton-Raphson algorithm directly --
fit.nlm <- nlminb(start = rsk, objective = logLikelihood.k, lower = lower, upper = upper, xx1 = x1, xx2 = x2, NNi = Ni)
# -- if k<0 run again with k=0 --
# if(fit.nlm$par[3]<0){
# #k.final <- 0
# warning("WARNING: k<0 on initial run. Trying again with k=0.")
# # fit.nlm <- nlminb(start = rsk, objective = logLikelihood.k, lower = c(lower[1:2], 0), upper = c(upper[1:2],0), xx1 = x1, xx2 = x2, NNi = Ni)
# }
# --save final param estimates---
r.final <- fit.nlm$par[1]
s.final <- abs(fit.nlm$par[2])
k.final <- fit.nlm$par[3]
  mlv <- fit.nlm$objective
if (silent == FALSE) {print(cbind(c("estimated r", "estimated s", "estimated k", "minimum -loglikelihood value"),
c(r.final, s.final, k.final, mlv)))}
# == end MLE fitting == =
# --compute standard errors---
if (se != FALSE) {
s.e. <- stdErr.k(r.final, s.final, k.final, x1, x2, Ni, se)
if (silent == FALSE){print(cbind(c("sd for r", "sd for s", "sd for k"), s.e.))}
}
# --plotting and goodness of fit---
if (pplot != F || gfit != F) {
plotting.k(r.final,s.final,k.final,mlv,time,sfract,x1,x2,Ni,pplot,tlab,lplot,cplot,Iplot,gfit,rc.data)
}
# ............................................................................................
# --return final param values---
sigd <- 5 #significant digits of output
if(se != F){
params <- c(r.final, s.final, k.final)
pvalue <- c(1-pnorm(r.final/s.e.[1]), 1-pnorm(s.final/s.e.[2]), 1-pnorm(k.final/s.e.[3]))
std <- c(s.e.[1], s.e.[2], s.e.[3])
out <- signif(cbind(params, std, pvalue), sigd)
return(out)
} else {
    return(signif(c(r.final, s.final, k.final), sigd))
}
}
#' Plotting function for the 2-process 3-parameter vitality model
#'
#' None.
#'
#' @param r.final r estimate
#' @param s.final s estimate
#' @param k.final k estimate
#' @param mlv minimum negative log-likelihood value
#' @param time time vector
#' @param sfract survival fraction
#' @param x1 Time 1
#' @param x2 Time 2
#' @param Ni Initial population
#' @param pplot Boolean. Plot cumulative survival fraction?
#' @param tlab Character, label for time axis
#' @param lplot Boolean. Plot likelihood functions?
#' @param cplot Boolean. Plot likelihood contours?
#' @param Iplot Boolean. Plot incremental mortality?
#' @param gfit Initial population for goodness-of-fit testing, or FALSE to skip
#' @param rc.data Boolean, right-censored data?
plotting.k <- function(r.final,s.final,k.final,mlv,time,sfract,x1,x2,Ni,pplot,tlab,lplot,cplot,Iplot,gfit,rc.data){
# Function to provide plotting and goodness of fit computations
#
# --plot cumulative survival---
if (pplot != F) {
ext<-max(pplot,1)
par(mfrow=c(1,1))
len<-length(time)
tmax <-ext*time[len]
plot(time,sfract,xlab=tlab,ylab="survival fraction",ylim=c(0,1),xlim=c(0,tmax))
xxx<-seq(0,tmax,length=200)
lines(xxx,SurvFn.k(xxx,r.final,s.final,k.final))
title("Cumulative Survival Data and Vitality Model Fitting")
}
# --likelihood and likelihood contour plots---
if(lplot != F) {
profilePlot <- function(r.f,s.f,k.f,x1,x2,Ni,mlv,cplot){
SLL <- function(r,s,k,x1,x2,Ni){sum(logLikelihood.k(c(r,s,k),x1,x2,Ni))}
rf<-.2; sf<-.5; kf<-1.0; fp<-40 # rf,sf,kf - set profile plot range (.2 => plot +-20%), 2*fp+1 points
rseq <-seq((1-rf)*r.f,(1+rf)*r.f, (rf/fp)*r.f)
sseq <-seq((1-sf)*s.f,(1+sf)*s.f, (sf/fp)*s.f)
if (k.f > 0) {
kseq <-seq((1-kf)*k.f,(1+kf)*k.f, (kf/fp)*k.f)
} else { #if k=0..
kseq <-seq(.00000001,.1,length=(2*fp+1))
}
rl <-length(rseq)
tmpLLr <-rep(0,rl)
tmpLLs <-tmpLLr
tmpLLk <-tmpLLr
for (i in 1:rl) {
tmpLLr[i] <-SLL(rseq[i],s.f,k.f,x1,x2,Ni)
tmpLLs[i] <-SLL(r.f,sseq[i],k.f,x1,x2,Ni)
tmpLLk[i] <-SLL(r.f,s.f,kseq[i],x1,x2,Ni)
}
par(mfrow=c(1,3))
rlim1 <-rseq[1]
rlim2 <-rseq[rl]
if (r.f < 0) { #even though r should not be <0
rlim2 <-rseq[1]
rlim1 <-rseq[rl]
}
plot(r.f,LL<-SLL(r.f,s.f,k.f,x1,x2,Ni),
xlim=c(rlim1,rlim2), xlab="r",ylab="Likelihood");
lines(rseq,tmpLLr)
legend(x="topright", legend=c("r.final", "Likelihood varying r"), pch=c(1,NA), lty=c(NA, 1))
plot(s.f,LL,xlim=c(sseq[1],sseq[rl]), xlab="s",ylab="Likelihood");
lines(sseq,tmpLLs)
legend(x="topright", legend=c("s.final", "Likelihood varying s"), pch=c(1,NA), lty=c(NA, 1))
title("Likelihood Plots")
plot(k.f,LL,xlim=c(kseq[1],kseq[rl]),
ylim=c(1.1*min(tmpLLk)-.1*(mLk<-max(tmpLLk)),mLk),xlab="k",ylab="Likelihood");
lines(kseq,tmpLLk)
legend(x="topright", legend=c("k.final", "Likelihood varying k"), pch=c(1,NA), lty=c(NA, 1))
# -- for contour plotting ----------------------------------------
if (cplot==T) {
rl2<-(rl+1)/2; rl4<-20; st<-rl2-rl4; nr<-2*rl4+1
tmpLLrs <-matrix(rep(0,nr*nr),nrow=nr,ncol=nr)
for(i in 1:nr) {
for(j in 1:nr) {
tmpLLrs[i,j] <-SLL(rseq[i+st-1],sseq[j+st-1],k.f,x1,x2,Ni)
}
}
lvv<-seq(mlv,1.02*mlv,length=11) #99.8%, 99.6% ... 98%
par(mfrow=c(1,1))
contour(rseq[st:(rl2+rl4)],sseq[st:(rl2+rl4)],tmpLLrs,levels=lvv,xlab="r",ylab="s")
title("Likelihood Contour Plot of r and s", "Outermost ring is likelihhod 98% (of max) level, innermost is 99.8% level.")
points(r.f,s.f,pch="*",cex=3.0)
points(c(rseq[st],r.f),c(s.f,sseq[st]),pch="+",cex=1.5)
}
}
profilePlot(r.final,s.final,k.final,x1,x2,Ni,mlv,cplot)
}
# --calculations for goodness of fit---
if(gfit!=F) { # then gfit must supply the population number
isp <-survProbInc.k(r.final,s.final,k.final,x1,x2)
C1.calc<-function(pop, isp, Ni){
# Routine to calculate goodness of fit (Pearson's C -type test)
# pop - population number of sample
      # isp - Modeled: incremental survivor probability
      # Ni - Data: incremental survivor fraction (prob)
#
# Returns: a list containing:
# C1, dof, Chi2 (retrieve each from list as ..$C1 etc.)
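      # The statistic is a Pearson-type sum over grouped bins g (expected
      # counts kept above ~4.5): C1 = sum_g pop*(sNi_g - sisp_g)^2 / sisp_g.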
if(pop<35){
if(pop<25){
warning(paste("WARNING: sample population (",as.character(pop),") is too small for
meaningful goodness of fit measure. Goodness of fit not being computed"))
return()
} else {
warning(paste("WARNING: sample population (",as.character(pop),") may be too small for
meaningful goodness of fit measure"))
}
}
np <- pop * isp # modeled population at each survival probability level.
tmpC1 <- 0
i1<-1; i<-1; cnt<-0
len <- length(np)
while(i <= len) {
idx <- i1:i
        # It is recommended that each np[i] > 5 for meaningful results. Where np < 5,
        # points are grouped to attain that level. I have fudged it to 4.5 ...
        # (as some leeway is allowed, and exact populations are sometimes unknown).
if(sum(np[idx]) > 4.5) {
cnt <-cnt+1
# Check if enough points remain. If not, they are glommed onto previous grouping.
if(i < len && sum(np[(i + 1):len]) < 4.5) {
idx <- i1:len
i <- len
}
sNi <- sum(Ni[idx])
sisp <- sum(isp[idx])
tmpC1 <- tmpC1 + (pop * (sNi - sisp)^2)/sisp
i1 <-i+1
}
i <-i+1
}
C1 <- tmpC1
dof <-cnt-1-3 # degrees of freedom (3 is number of parameters).
if (dof < 1) {
warning(paste("WARNING: sample population (",as.character(pop),") is too small for
meaningful goodness of fit measure (DoF<1). Goodness of fit not being computed"))
return()
}
chi2<-qchisq(.95,dof)
return(list(C1=C1,dof=dof,chi2=chi2))
}
C1dof <- C1.calc(gfit,isp,Ni)
C1 <-C1dof$C1
dof <-C1dof$dof
chi2 <-C1dof$chi2
print(paste("Pearson's C1=",as.character(round(C1,3))," chisquared =", as.character(round(chi2,3)),"on",as.character(dof),"degrees of freedom"))
# Note: The hypothesis being tested is whether the data could reasonably have come
# from the assumed (vitality) model.
    if(C1 > chi2){
      print("C1 > chiSquared; should reject the hypothesis because C1 falls outside the 95% confidence interval.")
    } else {
      print("C1 < chiSquared; should not reject the hypothesis because C1 falls inside the 95% confidence interval.")
    }
}
# --Incremental mortality plot
if (Iplot != F) {
par(mfrow=c(1,1))
if (rc.data != F) {
ln <-length(Ni)-1
x1 <-x1[1:ln]
x2 <-x2[1:ln]
Ni <-Ni[1:ln]
}
ln <-length(Ni)
#scale<-(x2-x1)[Ni==max(Ni)]
scale<-max( (x2-x1)[Ni==max(Ni)] )
ext<-max(pplot,1)
npt<-200*ext
xxx <-seq(x1[1],x2[ln]*ext,length=npt)
xx1 <-xxx[1:(npt-1)]
xx2 <-xxx[2:npt]
sProbI <-survProbInc.k(r.final[1],s.final[1],k.final[1],xx1,xx2)
ytop <-1.1*max( max(sProbI/(xx2-xx1)),Ni/(x2-x1) )*scale
plot((x1+x2)/2,Ni*scale/(x2-x1),ylim=c(0,ytop),xlim=c(0,ext*x2[ln]),xlab=tlab,ylab="incremental mortality")
title("Probability Density Function")
lines((xx1+xx2)/2,sProbI*scale/(xx2-xx1))
}
return()
}
#' The cumulative survival distribution function for the 2-process 3-parameter model
#'
#' None.
#'
#' @param xx vector of ages
#' @param r r value
#' @param s s value
#' @param k k value
#' @return vector of cumulative survival probabilities evaluated at \code{xx}
SurvFn.k <- function(xx, r, s, k) {
# pnorm is: cumulative prob for the Normal Dist.
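  # Model form: S(t) = [1 - G(t)] * exp(-k*t), where
  # G(t) = pnorm(-(1 - r*t)/(s*sqrt(t))) + exp(2*r/s^2) * pnorm(-(1 + r*t)/(s*sqrt(t)))
  # is the inverse-Gaussian first-passage CDF of the vitality process.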
tmp1 <- (sqrt(1/xx) * (1 - xx * r))/s # xx=0 is ok. pnorm(+-Inf) is defined
tmp2 <- (sqrt(1/xx) * (1 + xx * r))/s
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)
if(tmp3 >250){
q <- tmp3/250
if(tmp3 >1500){
q <- tmp3/500
}
valueFF <-(1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))*exp(-k*xx)
}
else {
valueFF <-(1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))*exp(-k*xx) #1-G
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function.
Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
#' Calculates incremental survival probability for 2-process 3-parameter r, s, k
#'
#' None
#'
#' @param r r value
#' @param s s value
#' @param k k value
#' @param xx1 xx1 vector
#' @param xx2 xx2 vector
#' @return Incremental survival probabilities.
survProbInc.k <- function(r, s, k, xx1, xx2){
value.iSP <- -(SurvFn.k(xx2, r, s, k) - SurvFn.k(xx1, r, s, k))
value.iSP[value.iSP < 1e-18] <- 1e-18 # safeguards against taking Log(0)
value.iSP
}
#' Gives the negative log-likelihood of the 2-process 3-parameter model
#'
#' None
#'
#' @param par vector of parameters (r, s, k)
#' @param xx1 xx1 vector
#' @param xx2 xx2 vector
#' @param NNi survival fractions
#' @return the negative log-likelihood (the objective minimized by \code{nlminb})
logLikelihood.k <- function(par, xx1, xx2, NNi) {
  # --calculate incremental survival probability--- (safeguarded > 1e-18 to prevent log(0))
iSP <- survProbInc.k(par[1], par[2], par[3], xx1, xx2)
loglklhd <- -NNi*log(iSP)
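  # Multinomial form (constants dropped): the objective is sum_i -N_i*log(p_i),
  # with a smooth quadratic penalty added below when k < 0; nlminb minimizes it.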
if (par[3] < 0) {
loglklhd<-loglklhd + par[3]*par[3]*1e4
}
return(sum(loglklhd)) ## remove sum()?
}
#' Standard errors for the 3-parameter r, s, k model
#'
#' Note: if k <= 0, the standard error for k cannot be computed.
#'
#' @param r r value
#' @param s s value
#' @param k k value
#' @param x1 age 1 (time steps 1:(T-1))
#' @param x2 age 2 (time steps 2:T)
#' @param Ni survival fraction
#' @param pop initial population (total population of the study)
#' @return standard errors for r, s, k.
stdErr.k <- function(r,s,k,x1,x2,Ni,pop){
# function to compute standard error for MLE parameters r,s,k in the vitality model
# Arguments:
# r,s,k - final values of MLE parameters
  #   x1,x2 - time vectors (steps 1:(T-1) and 2:T)
# Ni - survival fraction
# pop - total population of the study
#
# Return:
# standard error for r,s,k
  # Note: if k <= 0, cannot find std err for k.
#
LL <-function(a,b,c,r,s,k,x1,x2,Ni){sum(logLikelihood.k(c(r+a,s+b,k+c),x1,x2,Ni))}
#initialize hessian for storage
if (k > 0) {
hess <- matrix(0,nrow=3,ncol=3)
} else {
hess <- matrix(0,nrow=2,ncol=2)
}
#set finite difference intervals
h <-.001
hr <-abs(h*r)
hs <-h*s*.1
hk <-h*k*.1
  #Compute second derivatives (using 5 point)
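  # Scheme sketch: five evaluations f0..f4 of the log-likelihood at offsets
  # -2h..2h yield five 5-point first-derivative estimates fp0..fp4, which are
  # differenced once more with the 5-point rule to approximate the second
  # derivative of LL with respect to the parameter.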
# LLrr
f0 <-LL(-2*hr,0,0,r,s,k,x1,x2,Ni)
f1 <-LL(-hr,0,0,r,s,k,x1,x2,Ni)
f2 <-LL(0,0,0,r,s,k,x1,x2,Ni)
f3 <-LL(hr,0,0,r,s,k,x1,x2,Ni)
f4 <-LL(2*hr,0,0,r,s,k,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hr)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hr)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hr)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hr)
LLrr <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hr)
# LLss
f0 <-LL(0,-2*hs,0,r,s,k,x1,x2,Ni)
f1 <-LL(0,-hs,0,r,s,k,x1,x2,Ni)
# f2 as above
f3 <-LL(0,hs,0,r,s,k,x1,x2,Ni)
f4 <-LL(0,2*hs,0,r,s,k,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hs)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hs)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hs)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hs)
LLss <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hs)
# LLkk
if (k > 0) {
f0 <-LL(0,0,-2*hk,r,s,k,x1,x2,Ni)
f1 <-LL(0,0,-hk,r,s,k,x1,x2,Ni)
# f2 as above
f3 <-LL(0,0,hk,r,s,k,x1,x2,Ni)
f4 <-LL(0,0,2*hk,r,s,k,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hk)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hk)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hk)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hk)
LLkk <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hk)
}
#-------end second derivs---
# do mixed partials (4 points)
# LLrs
m1 <-LL(hr,hs,0,r,s,k,x1,x2,Ni)
m2 <-LL(-hr,hs,0,r,s,k,x1,x2,Ni)
m3 <-LL(-hr,-hs,0,r,s,k,x1,x2,Ni)
m4 <-LL(hr,-hs,0,r,s,k,x1,x2,Ni)
LLrs <-(m1 -m2 +m3 -m4)/(4*hr*hs)
if (k > 0) {
# LLrk
m1 <-LL(hr,0,hk,r,s,k,x1,x2,Ni)
m2 <-LL(-hr,0,hk,r,s,k,x1,x2,Ni)
m3 <-LL(-hr,0,-hk,r,s,k,x1,x2,Ni)
m4 <-LL(hr,0,-hk,r,s,k,x1,x2,Ni)
LLrk <-(m1 -m2 +m3 -m4)/(4*hr*hk)
# LLsk
m1 <-LL(0,hs,hk,r,s,k,x1,x2,Ni)
m2 <-LL(0,-hs,hk,r,s,k,x1,x2,Ni)
m3 <-LL(0,-hs,-hk,r,s,k,x1,x2,Ni)
m4 <-LL(0,hs,-hk,r,s,k,x1,x2,Ni)
LLsk <-(m1 -m2 +m3 -m4)/(4*hs*hk)
}
if (k > 0) {
diag(hess) <-c(LLrr,LLss,LLkk)*pop
hess[2,1]<-hess[1,2]<-LLrs*pop
hess[3,1]<-hess[1,3]<-LLrk*pop
    hess[3,2]<-hess[2,3]<-LLsk*pop
} else {
diag(hess) <-c(LLrr,LLss)*pop
hess[2,1]<-hess[1,2]<-LLrs*pop
}
#print(hess)
hessInv <-solve(hess)
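  # Asymptotics: the covariance of the MLEs is approximated by the inverse of
  # the observed information (the Hessian of the per-individual -logLik scaled
  # by the population size); s.e. values are the square roots of its diagonal.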
#print(hessInv)
#compute correlation matrix:
sz <-3
if (k <= 0) { sz <-2 }
corr<- matrix(0,nrow=sz,ncol=sz)
for (i in 1:sz) {
for (j in 1:sz) {
corr[i,j] <- hessInv[i,j]/sqrt(abs(hessInv[i,i]*hessInv[j,j]))
}
}
#print(corr)
if ( abs(corr[2,1]) > .98 ) {
warning("WARNING: parameters r and s appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 3 && abs(corr[3,2]) > .98 ) {
warning("WARNING: parameters s and k appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 3 && abs(corr[3,1]) > .98 ) {
warning("WARNING: parameters r and k appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
se <-sqrt(diag(hessInv))
# Approximate s.e. for cases where calculation of s.e. failed:
if( sum( seNA<-is.na(se) ) > 0 ) {
se12 <-sqrt(diag(solve(hess[c(1,2) ,c(1,2) ])))
if (k > 0) {
se13 <-sqrt(diag(solve(hess[c(1,3) ,c(1,3) ])))
se23 <-sqrt(diag(solve(hess[c(2,3) ,c(2,3) ])))
}
if(seNA[1] == T) {
if(!is.na(se[1]<-se12[1]) || (k>0 && !is.na(se[1]<-se13[1])) )
warning("* s.e. for parameter r is approximate.")
else warning("* unable to calculate or approximate s.e. for parameter r.")
}
if(seNA[2] == T) {
if(!is.na(se[2]<-se12[2]) || (k>0 && !is.na(se[2]<-se23[1])) )
warning("* s.e. for parameter s is approximate.")
else warning("* unable to calculate or approximate s.e. for parameter s.")
}
if(k>0 && seNA[3] == T) {
if(!is.na(se[3]<-se13[2]) || !is.na(se[3]<-se23[2]) )
warning("* s.e. for parameter k is approximate.")
else warning("* unable to calculate or approximate s.e. for parameter k.")
}
}
#######################
if (k <= 0) {
se <-c(se,NA)
}
return(se)
}
|
/scratch/gouwar.j/cran-all/cranData/vitality/R/vitality.k.R
|
vitality.ku<-function(time,sdata,rc.data=F,se=F,gfit=F,datatype="CUM",ttol=.000001,
init.params=F,lower=c(0,-1,0,0),upper=c(100,100,50,50),pplot=T,
tlab="days",lplot=F,cplot=F,Iplot=F,silent=F,L=0){
#
#
# Vitality based survival model: parameter fitting routine: VERSION: 10/12/2007; DS 2014/11/17
#
# REQUIRED PARAMETERS:
# time - time component of data: time from experiment start. Time should
# start after the imposition of a stressor is completed.
# sdata - survival or mortality data. The default expects cumulative
# survival fraction. If providing incremental mortality fraction
# instead, use option: datatype="INC".
# The default also expects the data to represent full mortality.
# Otherwise, use option: rc.data=T to indicate right censored data.
#
# OPTIONAL PARAMETERS:
# rc.data =T - specifies Right Censored data. If the data does not
# represent full mortality, it is probably right censored. The default
# is rc.data=F. A third option is rc.data="TF". Use this case to add
# a near-term zero survival point to data which displays nearly full
# mortality ( <.01 survival at end). If rc.data=F but the data does
# not show full mortality, rc.data="TF" will be
# invoked automatically.
# se =<population> calculates the standard errors for the MLE parameters.
# Default is se=F. The initial study population is necessary for
# computing these standard errors.
# gfit =<population> provides a Pearson C type test for goodness of fit.
# Default is gfit=F. The initial study population is necessary for
# computing goodness of fit.
# datatype ="CUM" -cumulative survival fraction data- is the default.
# Other option: datatype="INC" - for incremental mortality fraction
# data. ttol (stopping criteria tolerence.) Default is .000001 .
# specify as ttol=.0001.
# If one of the liklihood plots (esp. for "k") does not look optimal,
# try decreasing ttol. If the program crashes, try increasing ttol.
#    init.params =F has the routine choose initial parameter estimates for
#        r,s,k,u (default: =F). If you wish to specify initial param values
#        rather than have the routine choose them, specify
#        init.params=c(r,s,k,u) in that order (e.g. init.params=c(.1,.02,.003,.1)).
# pplot =T provides plots of cumulative survival and incremental mortality -
# for both data and fitted curves (default: =T). pplot=F provides no
# plotting. A third option: pplot=n (n>=1) extends the time axis of
# the fitting plots (beyond the max time in data). For example:
# pplot=1.2 extends the time axis by 20%. (Note: the incremental
# mortality plot is a continuous representation of the appropriately-
# binned histogram of incremental mortalities.)
# tlab ="<time units>" specifies units for x-axis of plots. Default is
# tlab="days".
#    lplot =T provides likelihood function plotting (default =F).
# Note: these plots are not "likelihood profiles" in that while one
# parameter is varied, the others are held fixed, rather than
# re-optimized. (must also have pplot=T.)
# cplot =T provides a likelihood contour plot for a range of r and s values
# (can be slow so default is F). Must also have lplot=T (and pplot=T)
# to get contour plots.
# silent =T stops all print and plot options (still get most warning and all
# error messages) Default is F. A third option, silent="verbose" also
# enables the trace setting in the ms (minimum sum) S-Plus routine.
#    L - number of simulated annealing runs used to generate starting values.
#        Default is 0, in which case only the Newton-Raphson method is used.
# RETURN:
# vector of final MLE r,s,k,u parameter estimates.
# standard errors of MLE parameter estimates (if se=<population> is
# specified).
#
# --Check/prepare Data---
dTmp<-dataPrep(time,sdata,datatype,rc.data)
time<-dTmp$time
sfract<-dTmp$sfract
x1<-dTmp$x1
x2<-dTmp$x2
Ni<-dTmp$Ni
rc.data<-dTmp$rc.data
# --Produce initial parameter values---
if(length(init.params)==1) {
ii<-indexFinder(sfract,0.5)
if (ii == -1) {
warning("ERROR: no survival fraction data below the .5 level. Can not use the initial r s k u estimator. You must supply initial r s k u estimates")
return(-1)
}
else rsk<-c(1/time[ii],0.1,0.01,0.1)
} else { # use user specified init params
rsk <-init.params
}
  if(rsk[1] == -1) {
    stop("initial parameter estimation failed; supply init.params")
  }
if (silent == F) {
print(cbind(c("Initial r","initial s","initial k","initial u"),rsk))
}
# --create dataframe for sa---
dtfm <- data.frame(x1=x1,x2=x2,Ni=Ni)
#param(dtfm,"r")
#param(dtfm,"s")
#param(dtfm,"k")
#param(dtfm,"u")
# --run MLE fitting routine (simulated annealing)---
  # --L>=1: simulated annealing will be conducted to generate the initial values--
if (L>=1){
vfit.sa.temp<-sapply(1:L,function(r0,s0,k0,u0,xx1,xx2,NNi)
sa.lt.ku(rsk[1],rsk[2],rsk[3],rsk[4],x1,x2,Ni))
ind<-which.min(vfit.sa.temp[5,])
vfit.sa<-vfit.sa.temp[,ind]
    # --Newton-Raphson algorithm using the results from simulated annealing as initial values--#
fit.nlm<-nlminb(vfit.sa[1:4],objective=logLikelihood.ku,lower=lower,upper=upper,xx1=x1,xx2=x2,NNi=Ni)
  } else if (L==0){ # --conduct Newton-Raphson algorithm directly --
fit.nlm<-nlminb(rsk,objective=logLikelihood.ku,lower=lower,upper=upper,xx1=x1,xx2=x2,NNi=Ni)
} else stop("ERROR: L should be a positive integer.")
# --save final param estimates---
r.final<-fit.nlm$par[1]
s.final<-abs(fit.nlm$par[2])
k.final<-fit.nlm$par[3]
u.final<-fit.nlm$par[4]
  mlv<-fit.nlm$objective
if (silent ==F) {print(cbind(c("estimated r", "estimated s", "estimated k", "estimated u", "minimum -loglikelihood value"), c(r.final, s.final, k.final, u.final, mlv)))}
# ==end MLE fitting===
# --compute standard errors---
if (se != F) {
s.e. <-stdErr.ku(r.final, s.final, k.final, u.final, x1, x2, Ni, se)
if (silent==F){print(cbind(c("sd for r","sd for s","sd for k","sd for u"), s.e.))}
}
# # --compute AIC --
# if (AIC !=F) {
# AIC.calc<-function(pop,value){
# # Routine to calculate AIC value
# # pop - population number of sample
# # value - the minimal value of the -loglikelihood
# return(pop*(2*4+2*value))
# }
# AIC.value<-AIC.calc(AIC,mlv)
# if (silent==F){print(c("AIC value for model fitting:", AIC.value))}
# }
# --plotting and goodness of fit---
if (pplot != F || gfit != F) {
plotting.ku(r.final,s.final,k.final,u.final,mlv,time,sfract,x1,x2,Ni,pplot,tlab,lplot,cplot,Iplot,gfit)
}
# ............................................................................................
# --return final param values---
  sigd<-5  #significant digits of output (renamed from `sd` so it does not mask stats::sd)
  if(se != F ) {
    params<-c(r.final,s.final,k.final,u.final)
    pvalue<-c(1-pnorm(r.final/s.e.[1]),1-pnorm(s.final/s.e.[2]),1-pnorm(k.final/s.e.[3]),1-pnorm(u.final/s.e.[4]))
    std<-c(s.e.[1],s.e.[2],s.e.[3],s.e.[4])
    out<-signif(cbind(params,std,pvalue),sigd)
    return(out)
  } else {
    return(signif(c(r.final,s.final,k.final,u.final),sigd))
  }
}
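# Illustrative call (hypothetical data; arguments mirror vitality.k, with u the
# added heterogeneity parameter and L the number of simulated-annealing runs):
#   vitality.ku(time = tt, sdata = sf, rc.data = TRUE, pplot = FALSE, L = 0)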
#=SurvFn.ku================================================================================================
SurvFn.ku<-function(xx,r,s,k,u){ # The cumulative survival distribution function.
yy<-u^2+s^2*xx
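  # Total process variance at time xx is u^2 + s^2*xx: the u term adds initial
  # population heterogeneity, so setting u = 0 recovers SurvFn.k.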
# pnorm is: cumulative prob for the Normal Dist.
tmp1 <- sqrt(1/yy) * (1 - xx * r) # xx=0 is ok. pnorm(+-Inf) is defined
tmp2 <- sqrt(1/yy) * (1 + xx * r+2*u^2*r/s^2)
# --safeguard if exponent gets too large.---
tmp3 <- 2*r/(s*s)+2*u^2*r^2/s^4
if (tmp3 >250) {
q <-tmp3/250
if (tmp3 >1500) {
q <-tmp3/500
}
valueFF <-(1.-(pnorm(-tmp1) + (exp(tmp3/q) *pnorm(-tmp2)^(1/q))^(q)))*exp(-k*xx)
}
else {
valueFF <-(1.-(pnorm(-tmp1) + exp(tmp3) *pnorm(-tmp2)))*exp(-k*xx) #1-G
}
if ( all(is.infinite(valueFF)) ) {
warning(message="Inelegant exit caused by overflow in evaluation of survival function.
Check for right-censored data. Try other initial values.")
}
return(valueFF)
}
#=survProbInc.ku===========================================================================================
survProbInc.ku<-function(r,s,k,u,xx1,xx2){ # calculates incremental survival probability
value.iSP <--(SurvFn.ku(xx2,r,s,k,u) - SurvFn.ku(xx1,r,s,k,u))
value.iSP[value.iSP < 1e-18] <-1e-18 # safeguards against taking Log(0)
value.iSP
}
#=logLikelihood.ku=========================================================================================
logLikelihood.ku<-function(par,xx1,xx2,NNi){
  # returns the negative log-likelihood: the sum of the -NNi*log(iSP) terms
  # --calculate incremental survival probability--- (safeguarded >1e-18 to prevent log(0))
iSP <- survProbInc.ku(par[1],par[2],par[3],par[4],xx1,xx2)
loglklhd <--NNi*log(iSP)
# add smooth penalty to log-likelihood if k <0
# if (k < 0) {
# loglklhd <-loglklhd + k*k*1e4
# }
return(sum(loglklhd))
}
#=logLikelihood4.ku========================================================================================
#--the version of -log likelihood for function nlminb --
# logLikelihood4.ku<-function(par,xx1,xx2,NNi){
# #returns vector of terms in log likelihood (sum them to get the log likelihood)
# # --calculate incremental survival probability--- (safeguraded >1e-18 to prevent log(0))
# iSP <- survProbInc.ku(par[1],par[2],par[3],par[4],xx1,xx2)
# loglklhd <--NNi*log(iSP)
# return(sum(loglklhd))
# }
#=sa.lt.ku=================================================================================================
#simulated annealing for four parameters vitality model
#R function, version lt.514
sa.lt.ku<-function(r0,s0,k0,u0,x1,x2,Ni){
#step 0 (initialization)
n.p<-4 #number of parameters
x0<-c(r0,s0,k0,u0) #starting point
  v<-c(0.1,0.1,0.005,0.1)   # starting step vector v0
T0<-10^3 #starting temperature T0
eps<-10^(-7) #a terminating criterion eps
Ne<-4 #values of minima are less than a tolerance
Ns<-20 #a test for step variation
c<-2 #a varying criterion
Nt<-10*n.p #a test for temperature reduction
rt<-0.85 #a reduction coefficient rt
f0<-logLikelihood.ku(par=c(x0[1],x0[2],x0[3],x0[4]),x1,x2,Ni) ###check the function form in splus
x.opt<-x0
x.temp<-x0
x.new<-x0
f.opt<-f0
f.new<-f0
f.temp<-f0
N<-rep(0,n.p)
x.star<-matrix(0,10000,n.p)
x.star[1:Ne,]<-t(matrix(rep(x0,Ne),4,Ne))
f.star<-c(rep(f0,Ne),rep(0,99996))
i<-0 #successive points
j<-0 #successive cycles along every direction
m<-0 #successive step adjustments
k<-0 #successive temperature reductions
h<-1 #the direction along which the trail point is generated
###############################################################
#step 1
while(k<1000){
while(m<Nt){
while (j<Ns){
while (h<=n.p){
x.new[h]<-x.temp[h]+runif(1,-1,1)*v[h]
if (h==1){
while(x.new[h]<0|x.new[h]>40){ #make sure all the points are in their suitable ranges
x.new[h]<-x.temp[h]+runif(1,-1,1)*v[h] #generate a new point
}
}
if (h==2){
while(x.new[h]<0|x.new[h]>5){ #make sure all the points are in their suitable ranges
x.new[h]<-x.temp[h]+runif(1,-1,1)*v[h] #generate a new point
}
}
if (h==3){
while(x.new[h]<0|x.new[h]>5){ #make sure all the points are in their suitable ranges
x.new[h]<-x.temp[h]+runif(1,-1,1)*v[h] #generate a new point
}
}
if (h==4){
while(x.new[h]<0|x.new[h]>2){ #make sure all the points are in their suitable ranges
x.new[h]<-x.temp[h]+runif(1,-1,1)*v[h] #generate a new point
}
}
f.new<-logLikelihood.ku(par=c(x.new[1],x.new[2],x.new[3],x.new[4]),x1,x2,Ni)
if(f.new<=f.temp){ #then accept the new point
x.temp<-x.new
f.temp<-f.new
i<-i+1
N[h]<-N[h]+1
if(f.new<f.opt){
x.opt<-x.new
f.opt<-f.new
}
}
else{ #metropolis move
p<-exp((f.temp-f.new)/T0)
if(runif(1)<p){ #accept point
x.temp<-x.new
f.temp<-f.new
i<-i+1
N[h]<-N[h]+1
}
}
h<-h+1
}
h<-1
j<-j+1
}
#for (t in 1:4){
# if(N[t]>0.6*Ns){
# v[t]<-v[t]*(1+c*(N[t]/Ns-0.6)/0.4)
# }
# else{
  #    if(N[t]<0.4*Ns){
# v[t]<-v[t]/(1+c*(0.4-N[t]/Ns)/0.4)
# }
# }
#}
j<-0
N<-rep(0,n.p)
m<-m+1
}
T0<-T0*rt
f.star[k+Ne+1]<-f.temp
x.star[k+Ne+1,]<-x.temp
index<-0
for (q in 1:Ne){
if(abs(f.star[k+Ne+1]-f.star[k+Ne+1-q])>eps)
index<-1
}
for(t in 1:n.p){
if(abs(x.star[k+Ne+1,t]-x.star[k+Ne,t])/x.star[k+Ne,t]>0.001)
index<-1
}
if(f.star[k+Ne+1]-f.opt>eps)
index<-1
if(index==0)
break #stop the search
else{
k<-k+1
m<-0
}
}
return(c(x.opt,f.opt))
}
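# Illustrative call (hypothetical starting values; x1, x2, Ni as produced by
# dataPrep): sa.lt.ku(0.05, 0.02, 0.001, 0.1, x1, x2, Ni) returns
# c(r, s, k, u, value) - the best parameters visited and their -logLik.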
#=stdErr.ku====================================================================================
#computing standard errors.
stdErr.ku<-function(r,s,k,u,x1,x2,Ni,pop){
# function to compute standard error for MLE parameters r,s,k,u in the vitality model
# Arguments:
# r,s,k,u - final values of MLE parameters
# x1,x2 time vectors (steps 1:(T-1) and 2:T)
# Ni - survival fraction
# pop - total population of the study
#
# Return:
# standard error for r,s,k,u
  # Note: if k <= 0, cannot find std err for k.
#
LL <-function(a,b,c,d,r,s,k,u,x1,x2,Ni){logLikelihood.ku(c(r+a,s+b,k+c,u+d),x1,x2,Ni)}
#initialize hessian for storage
hess <-matrix(0,nrow=4,ncol=4)
#set finite difference intervals
h <-.001
hr <-abs(h*r)
hs <-h*s*.1
hk <-h*k*.1
hu <-h*u*.1
  #Compute second derivatives (using 5 point)
# LLrr
f0 <-LL(-2*hr,0,0,0,r,s,k,u,x1,x2,Ni)
f1 <-LL(-hr,0,0,0,r,s,k,u,x1,x2,Ni)
f2 <-LL(0,0,0,0,r,s,k,u,x1,x2,Ni)
f3 <-LL(hr,0,0,0,r,s,k,u,x1,x2,Ni)
f4 <-LL(2*hr,0,0,0,r,s,k,u,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hr)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hr)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hr)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hr)
LLrr <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hr)
# LLss
f0 <-LL(0,-2*hs,0,0,r,s,k,u,x1,x2,Ni)
f1 <-LL(0,-hs,0,0,r,s,k,u,x1,x2,Ni)
# f2 as above
f3 <-LL(0,hs,0,0,r,s,k,u,x1,x2,Ni)
f4 <-LL(0,2*hs,0,0,r,s,k,u,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hs)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hs)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hs)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hs)
LLss <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hs)
# LLkk
f0 <-LL(0,0,-2*hk,0,r,s,k,u,x1,x2,Ni)
f1 <-LL(0,0,-hk,0,r,s,k,u,x1,x2,Ni)
# f2 as above
f3 <-LL(0,0,hk,0,r,s,k,u,x1,x2,Ni)
f4 <-LL(0,0,2*hk,0,r,s,k,u,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hk)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hk)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hk)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hk)
LLkk <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hk)
# LLuu
f0 <-LL(0,0,0,-2*hu,r,s,k,u,x1,x2,Ni)
f1 <-LL(0,0,0,-hu,r,s,k,u,x1,x2,Ni)
# f2 as above
f3 <-LL(0,0,0,hu,r,s,k,u,x1,x2,Ni)
f4 <-LL(0,0,0,2*hu,r,s,k,u,x1,x2,Ni)
fp0 <-(-25*f0 +48*f1 -36*f2 +16*f3 -3*f4)/(12*hu)
fp1 <-(-3*f0 -10*f1 +18*f2 -6*f3 +f4)/(12*hu)
fp3 <-(-f0 +6*f1 -18*f2 +10*f3 +3*f4)/(12*hu)
fp4 <-(3*f0 -16*f1 +36*f2 -48*f3 +25*f4)/(12*hu)
LLuu <-(fp0 -8*fp1 +8*fp3 -fp4)/(12*hu)
#-------end second derivs---
# do mixed partials (4 points)
# LLrs
m1 <-LL(hr,hs,0,0,r,s,k,u,x1,x2,Ni)
m2 <-LL(-hr,hs,0,0,r,s,k,u,x1,x2,Ni)
m3 <-LL(-hr,-hs,0,0,r,s,k,u,x1,x2,Ni)
m4 <-LL(hr,-hs,0,0,r,s,k,u,x1,x2,Ni)
LLrs <-(m1 -m2 +m3 -m4)/(4*hr*hs)
# LLru
m1 <-LL(hr,0,0,hu,r,s,k,u,x1,x2,Ni)
m2 <-LL(-hr,0,0,hu,r,s,k,u,x1,x2,Ni)
m3 <-LL(-hr,0,0,-hu,r,s,k,u,x1,x2,Ni)
m4 <-LL(hr,0,0,-hu,r,s,k,u,x1,x2,Ni)
LLru <-(m1 -m2 +m3 -m4)/(4*hr*hu)
# LLsu
m1 <-LL(0,hs,0,hu,r,s,k,u,x1,x2,Ni)
m2 <-LL(0,-hs,0,hu,r,s,k,u,x1,x2,Ni)
m3 <-LL(0,-hs,0,-hu,r,s,k,u,x1,x2,Ni)
m4 <-LL(0,hs,0,-hu,r,s,k,u,x1,x2,Ni)
LLsu <-(m1 -m2 +m3 -m4)/(4*hu*hs)
# LLrk
m1 <-LL(hr,0,hk,0,r,s,k,u,x1,x2,Ni)
m2 <-LL(-hr,0,hk,0,r,s,k,u,x1,x2,Ni)
m3 <-LL(-hr,0,-hk,0,r,s,k,u,x1,x2,Ni)
m4 <-LL(hr,0,-hk,0,r,s,k,u,x1,x2,Ni)
LLrk <-(m1 -m2 +m3 -m4)/(4*hr*hk)
# LLsk
m1 <-LL(0,hs,hk,0,r,s,k,u,x1,x2,Ni)
m2 <-LL(0,-hs,hk,0,r,s,k,u,x1,x2,Ni)
m3 <-LL(0,-hs,-hk,0,r,s,k,u,x1,x2,Ni)
m4 <-LL(0,hs,-hk,0,r,s,k,u,x1,x2,Ni)
LLsk <-(m1 -m2 +m3 -m4)/(4*hs*hk)
# LLku
m1 <-LL(0,0,hk,hu,r,s,k,u,x1,x2,Ni)
m2 <-LL(0,0,hk,-hu,r,s,k,u,x1,x2,Ni)
m3 <-LL(0,0,-hk,-hu,r,s,k,u,x1,x2,Ni)
m4 <-LL(0,0,-hk,hu,r,s,k,u,x1,x2,Ni)
LLku <-(m1 -m2 +m3 -m4)/(4*hu*hk)
diag(hess) <-c(LLrr,LLss,LLkk,LLuu)*pop
  hess[2,1]<-hess[1,2]<-LLrs*pop
  hess[3,1]<-hess[1,3]<-LLrk*pop
  hess[3,2]<-hess[2,3]<-LLsk*pop
  hess[4,1]<-hess[1,4]<-LLru*pop
  hess[4,2]<-hess[2,4]<-LLsu*pop
  hess[4,3]<-hess[3,4]<-LLku*pop
#print(hess)
hessInv <-solve(hess)
#print(hessInv)
#compute correlation matrix:
sz <-4
corr<-matrix(0,nrow=sz,ncol=sz)
for (i in 1:sz) {
for (j in 1:sz) {
corr[i,j] <-hessInv[i,j]/sqrt(abs(hessInv[i,i]*hessInv[j,j]))
}
}
#print(corr)
if ( abs(corr[2,1]) > .98 ) {
warning("WARNING: parameters r and s appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[3,2]) > .98 ) {
warning("WARNING: parameters s and k appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[3,1]) > .98 ) {
warning("WARNING: parameters r and k appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[4,2]) > .98 ) {
warning("WARNING: parameters s and u appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
if ( sz == 4 && abs(corr[4,1]) > .98 ) {
warning("WARNING: parameters r and u appear to be closely correlated for this data set.
s.e. may fail for these parameters.")
}
se <-sqrt(diag(hessInv))
# Approximate s.e. for cases where calculation of s.e. failed:
if( sum( is.na(se) ) > 0 ) {
seNA<-is.na(se)
se12 <-sqrt(diag(solve(hess[c(1,2) ,c(1,2) ])))
se13 <-sqrt(diag(solve(hess[c(1,3) ,c(1,3) ])))
se23 <-sqrt(diag(solve(hess[c(2,3) ,c(2,3) ])))
se14 <-sqrt(diag(solve(hess[c(1,4) ,c(1,4) ])))
se24 <-sqrt(diag(solve(hess[c(2,4) ,c(2,4) ])))
se34 <-sqrt(diag(solve(hess[c(3,4) ,c(3,4) ])))
if(seNA[1]) {
if(!is.na(se12[1]) ){
se[1]=se12[1]
warning("* s.e. for parameter r is approximate.")
}
else if(!is.na(se13[1])){
se[1]=se13[1]
warning("* s.e. for parameter r is approximate.")
}
else if(!is.na(se14[1])){
se[1]=se14[1]
warning("* s.e. for parameter r is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter r.")
}
if(seNA[2]) {
if(!is.na(se12[2]) ){
se[2]=se12[2]
warning("* s.e. for parameter s is approximate.")
}
else if(!is.na(se23[1])){
se[2]=se23[1]
warning("* s.e. for parameter s is approximate.")
}
else if(!is.na(se24[1])){
se[2]=se24[1]
warning("* s.e. for parameter s is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter s.")
}
if(seNA[3]) {
if(!is.na(se13[2]) ){
se[3]=se13[2]
warning("* s.e. for parameter k is approximate.")
}
else if(!is.na(se23[2])){
se[3]=se23[2]
warning("* s.e. for parameter k is approximate.")
}
else if(!is.na(se34[1])){
se[3]=se34[1]
warning("* s.e. for parameter k is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter k.")
}
if(seNA[4]) {
if(!is.na(se14[2]) ){
se[4]=se14[2]
warning("* s.e. for parameter u is approximate.")
}
else if(!is.na(se24[2])){
se[4]=se24[2]
warning("* s.e. for parameter u is approximate.")
}
      else if(!is.na(se34[2])){
        se[4]=se34[2]
warning("* s.e. for parameter u is approximate.")
}
else warning("* unable to calculate or approximate s.e. for parameter u.")
}
}
#######################
return(se)
}
#=plotting.ku=============================================================================================
plotting.ku<-function(r.final,s.final,k.final,u.final,mlv,time,sfract,x1,x2,Ni,pplot,tlab,lplot,cplot,Iplot,gfit){
# Function to provide plotting and goodness of fit computations
# --plot cumulative survival---
if(pplot != F){
#win.graph()
ext<-max(pplot,1)
par(mfrow=c(1,1))
len<-length(time)
tmax <-ext*time[len]
plot(time,sfract,xlab=tlab,ylab="survival fraction",ylim=c(0,1),xlim=c(0,tmax))
xxx<-seq(0,tmax,length=200)
lines(xxx,SurvFn.ku(xxx,r.final,s.final,k.final,u.final))
title("Cumulative Survival Data and Vitality Model Fitting")
# --likelihood and likelihood contour plots---
if(lplot != F) {
profilePlot.ku<-function(r.f,s.f,k.f,u.f,x1,x2,Ni,mlv,cplot) {
#
# mlv = value of max likelihood
# likelihood plots
# These plots are really not "profile" plots in that while one parameter is varied,
# the others are held fixed, not re-optimized.
SLL <- function(r,s,k,u,x1,x2,Ni){logLikelihood.ku(c(r,s,k,u),x1,x2,Ni)}
rf<-.2; sf<-.5; kf<-1.0; uf<-0.8;fp<-40 # rf,sf,kf,uf - set profile plot range (.2 => plot +-20%), 2*fp+1 points
rseq <-seq((1-rf)*r.f,(1+rf)*r.f, (rf/fp)*r.f)
sseq <-seq((1-sf)*s.f,(1+sf)*s.f, (sf/fp)*s.f)
useq <-seq((1-uf)*u.f,(1+uf)*u.f, (uf/fp)*u.f)
if (k.f > 0) {
kseq <-seq((1-kf)*k.f,(1+kf)*k.f, (kf/fp)*k.f)
}
else { #if k=0..
kseq <-seq(.00000001,.1,length=(2*fp+1))
}
rl <-length(rseq)
tmpLLr <-rep(0,rl)
tmpLLs <-tmpLLr
tmpLLk <-tmpLLr
tmpLLu <-tmpLLr
for (i in 1:rl) {
tmpLLr[i] <-SLL(rseq[i],s.f,k.f,u.f,x1,x2,Ni)
tmpLLs[i] <-SLL(r.f,sseq[i],k.f,u.f,x1,x2,Ni)
tmpLLk[i] <-SLL(r.f,s.f,kseq[i],u.f,x1,x2,Ni)
tmpLLu[i] <-SLL(r.f,s.f,k.f,useq[i],x1,x2,Ni)
}
par(mfrow=c(2,2), mar=c(5,4,3,2))
rlim1 <-rseq[1]
rlim2 <-rseq[rl]
if (r.f < 0) { #even though r should not be <0
rlim2 <-rseq[1]
rlim1 <-rseq[rl]
}
LL<-SLL(r.f,s.f,k.f,u.f,x1,x2,Ni)
plot(r.f,LL, xlim=c(rlim1,rlim2), xlab="r",ylab="Likelihood")
title("Likelihood Plots", outer=T, line=-1)
lines(rseq,tmpLLr)
legend(x="topright", legend=c("r.final", "vary r"), pch=c(1,NA), lty=c(NA,1))
plot(s.f,LL,xlim=c(sseq[1],sseq[rl]), xlab="s",ylab="Likelihood")
lines(sseq,tmpLLs)
legend(x="topright", legend=c("s.final", "vary s"), pch=c(1,NA), lty=c(NA,1))
        plot(k.f,LL,xlim=c(kseq[1],kseq[rl]), ylim=c(1.1*min(tmpLLk)-.1*(mLk<-max(tmpLLk)),mLk),xlab="k",ylab="Likelihood")
lines(kseq,tmpLLk)
legend(x="topright", legend=c("k.final", "vary k"), pch=c(1,NA), lty=c(NA,1))
plot(u.f,LL,xlim=c(useq[1],useq[rl]), xlab="u",ylab="Likelihood")
lines(useq,tmpLLu)
legend(x="topright", legend=c("u.final", "vary u"), pch=c(1,NA), lty=c(NA,1))
# -- for contour plotting ----------------------------------------
if (cplot==T) {
rl2<-(rl+1)/2; rl4<-20; st<-rl2-rl4; nr<-2*rl4+1
          tmpLLrs <-matrix(rep(0,nr*nr),nrow=nr,ncol=nr)
          tmpLLru <-matrix(rep(0,nr*nr),nrow=nr,ncol=nr)
          tmpLLsu <-matrix(rep(0,nr*nr),nrow=nr,ncol=nr)
for(i in 1:nr) {
for(j in 1:nr) {
tmpLLrs[i,j] <-SLL(rseq[i+st-1],sseq[j+st-1],k.f,u.f,x1,x2,Ni)
}
}
for(i in 1:nr) {
for(j in 1:nr) {
tmpLLru[i,j] <-SLL(rseq[i+st-1],s.f,k.f,useq[j+st-1],x1,x2,Ni)
}
}
for(i in 1:nr) {
for(j in 1:nr) {
tmpLLsu[i,j] <-SLL(r.f,sseq[i+st-1],k.f,useq[j+st-1],x1,x2,Ni)
}
}
lvv<-seq(mlv,1.02*mlv,length=11) #99.8%, 99.6% ... 98%
par(mfrow=c(2,2))
contour(rseq[st:(rl2+rl4)],sseq[st:(rl2+rl4)],tmpLLrs,levels=lvv,xlab="r",ylab="s")
title("Likelihood Contour Plot of r and s.")
points(r.f,s.f,pch="*",cex=3.0)
points(c(rseq[st],r.f),c(s.f,sseq[st]),pch="+",cex=1.5)
contour(rseq[st:(rl2+rl4)],useq[st:(rl2+rl4)],tmpLLru,levels=lvv,xlab="r",ylab="u")
title("Likelihood Contour Plot of r and u.")
points(r.f,u.f,pch="*",cex=3.0)
points(c(rseq[st],r.f),c(u.f,useq[st]),pch="+",cex=1.5)
contour(sseq[st:(rl2+rl4)],useq[st:(rl2+rl4)],tmpLLsu,levels=lvv,xlab="s",ylab="u")
title("Likelihood Contour Plot of s and u.")
points(s.f,u.f,pch="*",cex=3.0)
points(c(sseq[st],s.f),c(u.f,useq[st]),pch="+",cex=1.5)
plot(1,1, pch=NA,xaxt="n", yaxt="n", bty="n", xlab="", ylab="")
legend(x="center", legend="Outermost ring is likelihood 98% (of max) level,\ninnermost is 99.8% level.", bty="n")
}
}
profilePlot.ku(r.final,s.final,k.final,u.final,x1,x2,Ni,mlv,cplot)
}
}
# --calculations for goodness of fit---
if(gfit!=F){ # then gfit must supply the population number
isp <-survProbInc.ku(r.final,s.final,k.final,u.final,x1,x2)
C1.calc<-function(pop, isp, Ni){
# Routine to calculate goodness of fit (Pearson's C -type test)
# pop - population number of sample
      # isp - Modeled: incremental survivor probability
      # Ni - Data: incremental survivor fraction (prob)
#
# Returns: a list containing:
# C1, dof, Chi2 (retrieve each from list as ..$C1 etc.)
if (pop < 35) {
if (pop < 25) {
warning(paste("WARNING: sample population (",as.character(pop),") is too small for
meaningful goodness of fit measure. Goodness of fit not being computed"))
return()
} else {
warning(paste("WARNING: sample population (",as.character(pop),") may be too small for
meaningful goodness of fit measure"))
}
}
np <- pop * isp # modeled population at each survival probability level.
tmpC1 <- 0
i1 <-1; i<-1; cnt<-0
len <- length(np)
while(i <= len) {
idx <- i1:i
        # It is recommended that each np[i] > 5 for meaningful results. Where np < 5,
        # points are grouped to attain that level. I have fudged it to 4.5 ...
        # (as some leeway is allowed, and exact populations are sometimes unknown).
if(sum(np[idx]) > 4.5) {
cnt <-cnt+1
# Check if enough points remain. If not, they are glommed onto previous grouping.
if(i < len && sum(np[(i + 1):len]) < 4.5) {
idx <- i1:len
i <- len
}
sNi <- sum(Ni[idx])
sisp <- sum(isp[idx])
tmpC1 <- tmpC1 + (pop * (sNi - sisp)^2)/sisp
i1 <-i+1
}
i <-i+1
}
C1 <- tmpC1
      dof <-cnt-1-4   # degrees of freedom (4 is number of parameters).
if (dof < 1) {
warning(paste("WARNING: sample population (",as.character(pop),") is too small for
meaningful goodness of fit measure (DoF<1). Goodness of fit not being computed"))
return()
}
chi2<-qchisq(.95,dof)
return(list(C1=C1,dof=dof,chi2=chi2))
}
C1dof <- C1.calc(gfit,isp,Ni)
C1 <-C1dof$C1
dof <-C1dof$dof
chi2 <-C1dof$chi2
print(paste("Pearson's C1=",as.character(round(C1,3))," chisquared =",
as.character(round(chi2,3)),"on",as.character(dof),"degrees of freedom "))
# Note: The hypothesis being tested is whether the data could reasonably have come
# from the assumed (vitality) model.
    if (C1 > chi2) {
      print("C1 > chiSquared; should reject the hypothesis because C1 falls outside the 95% confidence interval.")
    } else {
      print("C1 < chiSquared; should not reject the hypothesis because C1 falls inside the 95% confidence interval.")
    }
}
# --Incremental mortality plot
if(Iplot != F){
par(mfrow=c(1,1))
#if (rc.data != F) {
ln <-length(Ni)-1
x1 <-x1[1:ln]
x2 <-x2[1:ln]
Ni <-Ni[1:ln]
#}
#ln <-length(Ni)
#scale<-(x2-x1)[Ni==max(Ni)]
scale<-max( (x2-x1)[Ni==max(Ni)] )
ext<-max(pplot,1)
npt<-200*ext
xxx <-seq(x1[1],x2[ln]*ext,length=npt)
xx1 <-xxx[1:(npt-1)]
xx2 <-xxx[2:npt]
sProbI <-survProbInc.ku(r.final,s.final,k.final,u.final,xx1,xx2)
ytop <-1.1*max( max(sProbI/(xx2-xx1)),Ni/(x2-x1) )*scale
plot((x1+x2)/2,Ni*scale/(x2-x1),ylim=c(0,ytop),xlim=c(0,ext*x2[ln]),xlab=tlab,ylab="incremental mortality")
title("Probability Density Function")
lines((xx1+xx2)/2,sProbI*scale/(xx2-xx1))
}
return()
}
|
/scratch/gouwar.j/cran-all/cranData/vitality/R/vitality.ku.R
|
## Unexported Utility Functions
#' Finds the first value of a vector that is less than or equal to a value.
#'
#' None
#'
#' @param x Vector to search
#' @param val Threshold
#' @return Gives the index of the first value of x that is <= val.
#'   Returns -1 if no value satisfies the condition.
indexFinder <- function(x, val) {
idx <- (1:length(x))[x<= val][1]
if (is.na(idx)) {idx <- -1}
idx
}
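# Illustrative use: indexFinder(c(1, 0.8, 0.4, 0.2), 0.5) returns 3, the first
# index at which the vector drops to or below 0.5.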
|
/scratch/gouwar.j/cran-all/cranData/vitality/R/vitality.utils.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Generate an Information Value HTML Report
#'
#' @description
#' The function generates an interactive HTML report using Standard Person Query
#' data as an input. The report contains a full Information Value analysis, a
#' data exploration technique that helps determine which columns in a data set
#' have predictive power or influence on the value of a specified dependent
#' variable.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param predictors A character vector specifying the columns to be used as
#' predictors. Defaults to NULL, where all numeric vectors in the data will be
#' used as predictors.
#' @param outcome A string specifying a binary variable, i.e. can only contain
#' the values 1 or 0.
#' @param bins Number of bins to use in `Information::create_infotables()`,
#'   defaults to 5.
#' @param max_var Numeric value to represent the maximum number of variables to
#' show on plots.
#' @param path Pass the file path and the desired file name, _excluding the file
#' extension_. For example, `"IV report"`.
#' @param timestamp Logical vector specifying whether to include a timestamp in
#' the file name. Defaults to TRUE.
#'
#' @section Creating a report:
#'
#' Below is an example on how to run the report.
#'
#' ```
#' library(dplyr)
#'
#' pq_data %>%
#' mutate(CH_binary = ifelse(Collaboration_hours > 12, 1, 0)) %>% # Simulate binary variable
#' IV_report(outcome = "CH_binary",
#' predictors = c("Email_hours", "Meeting_hours"))
#' ```
#'
#' @family Reports
#' @family Variable Association
#' @family Information Value
#'
#' @inherit generate_report return
#'
#' @export
IV_report <- function(data,
predictors = NULL,
outcome,
bins = 5,
max_var = 9,
path = "IV report",
timestamp = TRUE){
# Create timestamped path (if applicable) -----------------------------------
if(timestamp == TRUE){
newpath <- paste(path, vivainsights::tstamp())
} else {
newpath <- path
}
# Return IV object directly -------------------------------------------------
# Call `calculate_IV()` only once
IV_obj <-
data %>%
create_IV(outcome = outcome,
predictors = predictors,
bins = bins,
return = "IV")
# IV_names
IV_names <- names(IV_obj$Tables)
# List of tables -----------------------------------------------------------
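  # A standard weight-of-evidence identity (sketch): log-odds = WOE + ln(base
  # odds), hence ODDS = exp(WOE + lnodds) and PROB = ODDS / (1 + ODDS).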
table_list <-
IV_names %>%
purrr::map(function(x){
IV_obj$Tables[[x]] %>%
mutate(ODDS = exp(WOE + IV_obj$lnodds),
PROB = ODDS / (ODDS + 1))
}) %>%
purrr::set_names(IV_names)
# List of ggplot objects ----------------------------------------------------
plot_list <-
IV_obj$Summary$Variable %>%
as.character() %>%
purrr::map(~plot_WOE(IV = IV_obj, predictor = .))
# Restrict maximum plots to `max_var` ---------------------------------------
if(length(plot_list) > max_var){
plot_list <- plot_list[1:max_var]
table_list <- table_list[1:max_var]
}
table_names <- gsub("_", " ", x = names(table_list))
# Output list ---------------------------------------------------------------
output_list <-
list(
data %>% check_query(return = "text"),
      data %>% create_IV(outcome = outcome, predictors = predictors, bins = bins),
      data %>% create_IV(outcome = outcome,
                         predictors = predictors,
                         bins = bins,
                         return = "summary"),
read_preamble("blank.md") # Header for WOE Analysis
) %>%
c(plot_list) %>%
c(list(read_preamble("blank.md"))) %>% # Header for Summary Tables
c(table_list) %>%
purrr::map_if(is.data.frame, create_dt) %>%
purrr::map_if(is.character, md2html)
title_list <-
c("Data Overview",
"Top Predictors",
"",
"WOE Analysis",
rep("", length(plot_list)),
"Summary - Predictors",
table_names)
n_title <- length(title_list)
title_levels <-
c(
2,
2,
4,
2, # Header for WOE Analysis
rep(4, length(plot_list)),
      2, # Header for Summary Tables
rep(3, length(table_list))
)
generate_report(title = "Information Value Report",
filename = newpath,
outputs = output_list,
titles = title_list,
subheaders = rep("", n_title),
echos = rep(FALSE, n_title),
levels = title_levels,
theme = "cosmo",
preamble = read_preamble("IV_report.md"))
}
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/IV_report.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of After-hours Collaboration Hours as a 100% stacked bar
#' @name afterhours_dist
#'
#' @description Analyse the distribution of weekly after-hours collaboration time.
#' Returns a stacked bar plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @details
#' Uses the metric \code{After_hours_collaboration_hours}.
#' See `create_dist()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_dist
#' @inherit create_dist return
#'
#' @param cut A vector specifying the cuts to use for the data,
#'   accepting "default" or "range-cut" as a character vector,
#'   or a numeric vector of length three to specify the exact breaks to use, e.g. c(1, 3, 5).
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom tidyr spread
#' @importFrom stats median
#' @importFrom stats sd
#'
#' @family Visualization
#' @family After-hours Collaboration
#'
#' @examples
#' # Return plot
#' afterhours_dist(pq_data, hrvar = "Organization")
#'
#' # Return summary table
#' afterhours_dist(pq_data, hrvar = "Organization", return = "table")
#'
#' # Return result with a custom specified breaks
#' afterhours_dist(pq_data, hrvar = "LevelDesignation", cut = c(4, 7, 9))
#' @export
afterhours_dist <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot",
cut = c(1, 2, 3)) {
create_dist(data = data,
metric = "After_hours_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return,
cut = cut)
}
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/afterhours_dist.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of After-hours Collaboration Hours (Fizzy Drink plot)
#'
#' @description
#' Analyze weekly after-hours collaboration hours distribution, and returns
#' a 'fizzy' scatter plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @details
#' Uses the metric `After_hours_collaboration_hours`.
#' See `create_fizz()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_fizz
#' @inherit create_fizz return
#'
#' @family Visualization
#' @family After-hours Collaboration
#'
#' @examples
#' # Return plot
#' afterhours_fizz(pq_data, hrvar = "LevelDesignation", return = "plot")
#'
#' # Return summary table
#' afterhours_fizz(pq_data, hrvar = "Organization", return = "table")
#' @export
afterhours_fizz <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_fizz(data = data,
metric = "After_hours_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/afterhours_fizz.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title After-hours Collaboration Time Trend - Line Chart
#'
#' @description
#' Provides a week by week view of after-hours collaboration time, visualized as
#' line charts. By default returns a line chart for after-hours collaboration
#' hours, with a separate panel per value in the HR attribute. Additional
#' options available to return a summary table.
#'
#' @details
#' Uses the metric `After_hours_collaboration_hours`.
#'
#' @seealso [create_line()] for applying the same analysis to a different metric.
#'
#' @inheritParams create_line
#' @inherit create_line return
#'
#' @family Visualization
#' @family After-hours Collaboration
#'
#' @examples
#' # Return a line plot
#' afterhours_line(pq_data, hrvar = "LevelDesignation")
#'
#' # Return summary table
#' afterhours_line(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
afterhours_line <- function(data,
hrvar = "Organization",
mingroup=5,
return = "plot"){
## Inherit arguments
create_line(data = data,
metric = "After_hours_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/afterhours_line.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Rank groups with high After-Hours Collaboration Hours
#'
#' @description
#' This function scans a Standard Person Query for groups with high levels of
#' After-Hours Collaboration. Returns a plot by default, with an option to
#' return a table with all groups (across multiple HR attributes) ranked by
#' hours of After-Hours Collaboration Hours.
#'
#' @details
#' Uses the metric \code{After_hours_collaboration_hours}.
#' See `create_rank()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_rank
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats reorder
#'
#' @family Visualization
#' @family After-hours Collaboration
#'
#' @return
#' When 'table' is passed in `return`, a summary table is returned as a data frame.
#'
#' @examples
#' # Return plot
#' afterhours_rank(pq_data, return = "plot")
#'
#' # Return summary table
#' afterhours_rank(pq_data, return = "table")
#' @export
afterhours_rank <- function(data,
hrvar = extract_hr(data),
mingroup = 5,
mode = "simple",
plot_mode = 1,
return = "plot"){
data %>%
create_rank(metric = "After_hours_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
mode = mode,
plot_mode = plot_mode,
return = return)
}
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/afterhours_rank.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Summary of After-Hours Collaboration Hours
#'
#' @description
#' Provides an overview analysis of after-hours collaboration time.
#' Returns a bar plot showing average weekly after-hours collaboration hours by default.
#' Additional options available to return a summary table.
#'
#' @details
#' Uses the metric `After_hours_collaboration_hours`.
#'
#' @inheritParams create_bar
#' @inherit create_bar return
#'
#' @family Visualization
#' @family After-hours Collaboration
#'
#' @examples
#' # Return a ggplot bar chart
#' afterhours_summary(pq_data, hrvar = "LevelDesignation")
#'
#' # Return a summary table
#' afterhours_summary(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
afterhours_summary <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_bar(data = data,
metric = "After_hours_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return,
bar_colour = "alert")
}
#' @rdname afterhours_summary
#' @export
afterhours_sum <- afterhours_summary
# ---- End of R/afterhours_summary.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title After-Hours Time Trend
#'
#' @description
#' Provides a week by week view of after-hours collaboration time.
#' By default returns a week by week heatmap, highlighting the points in time with most activity.
#' Additional options available to return a summary table.
#'
#' @details
#' Uses the metric `After_hours_collaboration_hours`.
#'
#' @inheritParams create_trend
#'
#' @family Visualization
#' @family After-hours Collaboration
#'
#' @examples
#' # Run plot
#' afterhours_trend(pq_data)
#'
#' # Run table
#' afterhours_trend(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @return
#' Returns a 'ggplot' object by default, where 'plot' is passed in `return`.
#' When 'table' is passed, a summary table is returned as a data frame.
#'
#' @export
afterhours_trend <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_trend(data,
metric = "After_hours_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
# ---- End of R/afterhours_trend.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Anonymise a categorical variable by replacing values
#'
#' @description
#' Anonymize categorical variables such as HR variables by replacing values with
#' dummy team names such as 'Team A'. The behaviour is to make 1 to 1
#' replacements by default, but there is an option to completely randomise
#' values in the categorical variable.
#'
#' @param x Character vector to be passed through.
#' @param scramble Logical value determining whether to randomise values in the
#' categorical variable.
#' @param replacement Character vector containing the values to replace original
#' values in the categorical variable. The length of the vector must be at
#' least as great as the number of unique values in the original variable.
#' Defaults to `NULL`, where the replacement would consist of `"Team A"`,
#' `"Team B"`, etc.
#'
#' @examples
#' unique(anonymise(pq_data$Organization))
#'
#' rep <- c("Manager+", "Manager", "IC")
#' unique(anonymise(pq_data$Layer, replacement = rep))
#'
#' @seealso [stats::jitter()]
#'
#' @return
#' Character vector with the same length as input `x`, replaced with values
#' provided in `replacement`.
#'
#' @export
anonymise <- function(x,
                      scramble = FALSE,
                      replacement = NULL){

  n_to_rep <- length(x) # Length of the input vector
  v_to_rep <- unique(x) # Unique values to be replaced
  nd_to_rep <- length(v_to_rep) # Number of unique values

  if(is.null(replacement)){
    # Default replacements: "Team A", "Team B", ... (up to 26 unique values)
    replacement <- paste("Team", LETTERS[1:nd_to_rep])
  } else {
    # Truncate the supplied replacements to the number of unique values
    replacement <- replacement[1:nd_to_rep]
  }

  if(scramble == TRUE){
    # Random sampling - the original group structure is not preserved
    sample(x = replacement,
           size = n_to_rep,
           replace = TRUE)
  } else if(scramble == FALSE){
    # Stable one-to-one mapping between unique values and replacements
    replacement[match(x, v_to_rep)]
  }
}
#' @rdname anonymise
#' @export
anonymize <- anonymise
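# Illustrative sketch (not part of the package API): by default `anonymise()`
# applies a stable one-to-one mapping, so repeated values map to the same dummy
# team, while `scramble = TRUE` samples replacements at random.
#
# anonymise(c("Sales", "HR", "Sales"))
# #> [1] "Team A" "Team B" "Team A"
# anonymise(c("Sales", "HR", "Sales"), scramble = TRUE)
# #> e.g. [1] "Team B" "Team B" "Team A" (random)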
#' @title Jitter metrics in a data frame
#'
#' @description Convenience wrapper around `jitter()` to add a layer of
#' anonymity to a query. This can be used in combination with `anonymise()` to
#' produce a demo dataset from real data.
#'
#' @param data Data frame containing a query.
#' @param cols Character vector containing the metrics to jitter. When set to
#' `NULL` (default), all numeric columns in the data frame are jittered.
#' @param ... Additional arguments to pass to `jitter()`.
#'
#' @importFrom dplyr mutate
#' @importFrom dplyr across
#' @import tidyselect
#'
#' @examples
#' jittered <- jitter_metrics(pq_data, cols = "Collaboration_hours")
#'
#' # compare jittered vs original results of top rows
#' head(
#' data.frame(
#' original = pq_data$Collaboration_hours,
#' jittered = jittered$Collaboration_hours
#' )
#' )
#'
#' @seealso [anonymise()]
#'
#' @return
#' data frame where numeric columns specified by `cols` are jittered using the
#' function `jitter()`.
#'
#' @export
jitter_metrics <- function(data, cols = NULL, ...){

  # A plain function is used instead of a `~` lambda so that `...` is
  # correctly forwarded to `jitter()`
  jitter_fn <- function(x) abs(jitter(x, ...))

  if(!is.null(cols)){
    # Jitter only the specified metric columns
    data %>%
      mutate(across(.cols = all_of(cols), .fns = jitter_fn))
  } else {
    # Default: jitter all numeric columns
    data %>%
      mutate(across(.cols = where(is.numeric), .fns = jitter_fn))
  }
}
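# Illustrative sketch: with `cols = NULL` (the default), every numeric column
# is jittered, which is a quick way to produce a full demo dataset. Arguments
# such as `amount` are forwarded to `jitter()` via `...`.
#
# jittered_all <- jitter_metrics(pq_data, amount = 1)
# summary(jittered_all$Emails_sent)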
# ---- End of R/anonymise.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Check a query to ensure that it is suitable for analysis
#'
#' @description Prints diagnostic data about the data query to the R console,
#' with information such as date range, number of employees, HR attributes
#' identified, etc.
#'
#' @details This can be used with any person-level query, such as the standard
#' person query, Ways of Working assessment query, and the hourly collaboration
#' query. When run, this prints diagnostic data to the R console.
#'
#' @param data A person-level query in the form of a data frame. This includes:
#' - Standard Person Query
#' - Ways of Working Assessment Query
#' - Hourly Collaboration Query
#'
#' All person-level queries have a `PersonId` column and a `MetricDate` column.
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"message"` (default)
#' - `"text"`
#'
#' See `Value` for more information.
#'
#' @param validation Logical value specifying whether to show a summarized
#'   version of the checks. Defaults to `FALSE`. To hide the checks on variable
#'   names, set `validation` to `TRUE`.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"message"`: a message is returned to the console.
#' - `"text"`: string containing the diagnostic message.
#'
#' @examples
#' check_query(pq_data)
#'
#' @family Data Validation
#'
#' @export
check_query <- function(data, return = "message", validation = FALSE){
if(!is.data.frame(data)){
stop("Input is not a data frame.")
}
if("PersonId" %in% names(data)){
if(validation == FALSE){
check_person_query(data = data, return = return)
} else if(validation == TRUE){
# Different displays required for validation_report()
check_query_validation(data = data, return = return)
}
} else {
message("Note: checks are currently unavailable for a non-Person query")
}
}
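# Illustrative sketch: `return = "text"` returns the diagnostics as a single
# string instead of printing a message, which is handy for embedding the
# output in a generated report.
#
# diag_txt <- check_query(pq_data, return = "text")
# cat(diag_txt)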
#' @title Check a Person Query to ensure that it is suitable for analysis
#'
#' @description
#' Prints diagnostic data about the data query to the R console, with information
#' such as date range, number of employees, HR attributes identified, etc.
#'
#' @inheritParams check_query
#'
#' @details Used as part of `check_query()`.
#'
#' @noRd
#'
check_person_query <- function(data, return){
## Query Type - In {wpa}, this uses `identify_query()`
## Set as blank for initiation
main_chunk <- ""
## PersonId
if(!("PersonId" %in% names(data))){
stop("There is no `PersonId` variable in the input.")
} else {
new_chunk <- paste("There are", dplyr::n_distinct(data$PersonId), "employees in this dataset.")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## Date
if(!("MetricDate" %in% names(data))){
stop("There is no `MetricDate` variable in the input.")
} else if("Influence_rank" %in% names(data)){
# Omit date conversion
new_chunk <- paste0("Date ranges from ", min(data$MetricDate), " to ", max(data$MetricDate), ".")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
} else {
data$MetricDate <- as.Date(data$MetricDate, "%m/%d/%Y")
new_chunk <- paste0("Date ranges from ", min(data$MetricDate), " to ", max(data$MetricDate), ".")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## Extract unique identifiers of query ------------------------------------
extracted_chr <-
data %>%
hrvar_count_all(return = "table") %>%
    filter(`Unique values` == 1) %>%
    pull(Attributes)
  if (length(extracted_chr) > 1) {
extractHRValues <- function(data, hrvar){
data %>%
summarise(FirstValue = first(!!sym(hrvar))) %>%
mutate(HRAttribute = wrap(hrvar, wrapper = "`")) %>%
select(HRAttribute, FirstValue) %>%
mutate(FirstValue = as.character(FirstValue)) # Coerce type
}
result <-
extracted_chr %>%
purrr::map(function(x){ extractHRValues(data = data, hrvar = x)}) %>%
bind_rows()
new_chunk <- paste("Unique identifiers include:",
result %>%
mutate(identifier = paste(HRAttribute, "is", FirstValue)) %>%
pull(identifier) %>%
paste(collapse = "; "))
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## HR Variables
hr_chr <- extract_hr(data, max_unique = 200) %>% wrap(wrapper = "`")
new_chunk <- paste("There are", length(hr_chr), "(estimated) HR attributes in the data:" )
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
new_chunk <- paste(hr_chr, collapse = ", ")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n")
## `IsActive` flag
if(!("IsActive" %in% names(data))){
new_chunk <- "The `IsActive` flag is not present in the data."
main_chunk <- paste(main_chunk, new_chunk, sep = "\n")
} else {
data$IsActive <- as.logical(data$IsActive) # Force to logical
active_n <- dplyr::n_distinct(data[data$IsActive == TRUE, "PersonId"])
new_chunk <- paste0("There are ", active_n, " active employees out of all in the dataset.")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## Variable check header
new_chunk <- "Variable name check:"
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  ## Collaboration_hours
  ## Report which variant of the collaboration hour metric is present;
  ## when both exist, `Collaboration_hours` takes precedence.
  if(("Collaboration_hrs" %in% names(data)) &
     !("Collaboration_hours" %in% names(data))){
    new_chunk <- "`Collaboration_hrs` is used instead of `Collaboration_hours` in the data."
    main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  } else if("Collaboration_hours" %in% names(data)){
    new_chunk <- "`Collaboration_hours` is used instead of `Collaboration_hrs` in the data."
    main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  } else {
    new_chunk <- "No collaboration hour metric exists in the data."
    main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  }
  ## Instant_Message_hours
  ## Report which variant of the instant message hour metric is present;
  ## when both exist, `Instant_Message_hours` takes precedence.
  if(("Instant_message_hours" %in% names(data)) &
     !("Instant_Message_hours" %in% names(data))){
    new_chunk <- "`Instant_message_hours` is used instead of `Instant_Message_hours` in the data."
    main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  } else if("Instant_Message_hours" %in% names(data)){
    new_chunk <- "`Instant_Message_hours` is used instead of `Instant_message_hours` in the data."
    main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  } else {
    new_chunk <- "No instant message hour metric exists in the data."
    main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
  }
## Return
if(return == "message"){
main_chunk <- paste("", main_chunk, sep = "\n")
message(main_chunk)
} else if(return == "text"){
main_chunk
} else {
stop("Please check inputs for `return`")
}
}
#' @title Perform a query check for the validation report
#'
#' @description
#' Prints diagnostic data about the data query to the R console, with information
#' such as date range, number of employees, HR attributes identified, etc.
#' Optimised for the `validation_report()`
#'
#' @inheritParams check_query
#'
#' @details Used as part of `check_query()`.
#'
#' @noRd
check_query_validation <- function(data, return){
## Query Type - Initialise
main_chunk <- ""
## PersonId
if(!("PersonId" %in% names(data))){
stop("There is no `PersonId` variable in the input.")
} else {
new_chunk <- paste("There are", dplyr::n_distinct(data$PersonId), "employees in this dataset.")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## Date
if(!("MetricDate" %in% names(data))){
stop("There is no `MetricDate` variable in the input.")
} else {
data$MetricDate <- as.Date(data$MetricDate, "%m/%d/%Y")
new_chunk <- paste0("Date ranges from ", min(data$MetricDate), " to ", max(data$MetricDate), ".")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## Extract unique identifiers of query ------------------------------------
extracted_chr <- data %>%
hrvar_count_all(return = "table") %>%
    filter(`Unique values` == 1) %>%
pull(Attributes)
if (length(extracted_chr) > 1) {
extractHRValues <- function(data, hrvar){
data %>%
summarise(FirstValue = first(!!sym(hrvar))) %>%
mutate(HRAttribute = wrap(hrvar, wrapper = "`")) %>%
select(HRAttribute, FirstValue) %>%
mutate(FirstValue = as.character(FirstValue)) # Coerce type
}
result <-
extracted_chr %>%
purrr::map(function(x){ extractHRValues(data = data, hrvar = x)}) %>%
bind_rows()
new_chunk <- paste("Unique identifiers include:",
result %>%
mutate(identifier = paste(HRAttribute, "is", FirstValue)) %>%
pull(identifier) %>%
paste(collapse = "; "))
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## HR Variables
hr_chr <- extract_hr(data, max_unique = 200) %>% wrap(wrapper = "`")
new_chunk <- paste("There are", length(hr_chr), "(estimated) HR attributes in the data:" )
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
new_chunk <- paste(hr_chr, collapse = ", ")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n")
## `IsActive` flag
if(!("IsActive" %in% names(data))){
new_chunk <- "The `IsActive` flag is not present in the data."
main_chunk <- paste(main_chunk, new_chunk, sep = "\n")
} else {
data$IsActive <- as.logical(data$IsActive) # Force to logical
active_n <- dplyr::n_distinct(data[data$IsActive == TRUE, "PersonId"])
new_chunk <- paste0("There are ", active_n, " active employees out of all in the dataset.")
main_chunk <- paste(main_chunk, new_chunk, sep = "\n\n")
}
## Return
if(return == "message"){
main_chunk <- paste("", main_chunk, sep = "\n")
message(main_chunk)
} else if(return == "text"){
main_chunk
} else {
stop("Please check inputs for `return`")
}
}
# ---- End of R/check_query.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Collaboration - Stacked Area Plot
#'
#' @description
#' Provides an overview analysis of Weekly Digital Collaboration.
#' Returns a stacked area plot of Email and Meeting Hours by default.
#' Additional options available to return a summary table.
#'
#' @details
#' Uses the metrics `Meeting_hours`, `Email_hours`, `Unscheduled_Call_hours`,
#' and `Instant_Message_hours`.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' A Ways of Working assessment dataset may also be provided, in which
#' Unscheduled call hours would be included in the output.
#' @param hrvar HR Variable by which to split metrics, defaults to `NULL`, but
#' accepts any character vector, e.g. "LevelDesignation". If `NULL` is passed,
#' the organizational attribute is automatically populated as "Total".
#' @param mingroup Numeric value setting the privacy threshold / minimum group
#' size. Defaults to 5.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#'
#' @family Visualization
#' @family Collaboration
#'
#' @examples
#' # Return plot with total (default)
#' collaboration_area(pq_data)
#'
#' # Return plot with hrvar split
#' collaboration_area(pq_data, hrvar = "Organization")
#'
#' # Return summary table
#' collaboration_area(pq_data, return = "table")
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A stacked area plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @export
collaboration_area <- function(data,
hrvar = NULL,
                               mingroup = 5,
return = "plot"){
## Handle date name
data <- data %>% rename(Date = MetricDate)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Date cleaning
data$Date <- as.Date(data$Date, format = "%m/%d/%Y")
## Lower case version of column names
lnames <- tolower(names(data))
if("unscheduled_call_hours" %in% lnames){
names(data) <-
gsub(pattern = "unscheduled_call_hours",
replacement = "Unscheduled_Call_hours",
x = names(data),
ignore.case = TRUE) # Case-insensitive
}
## Exclude metrics if not available as a metric
check_chr <- c("^Meeting_hours$",
"^Email_hours$",
"^Instant_Message_hours$",
"^Unscheduled_Call_hours$")
main_vars <-
names(data)[
grepl(pattern = paste(check_chr, collapse = "|"),
x = lnames,
ignore.case = TRUE)
]
## Analysis table
myTable <-
data %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
select(PersonId,
Date,
group,
main_vars) %>%
group_by(Date, group) %>%
summarise_at(vars(main_vars), ~mean(.)) %>%
left_join(hrvar_count(data, hrvar, return = "table"),
by = c("group" = hrvar)) %>%
rename(Employee_Count = "n") %>%
filter(Employee_Count >= mingroup) %>%
ungroup()
myTable_long <-
myTable %>%
select(Date, group, ends_with("_hours")) %>%
tidyr::gather(Metric, Hours, -Date, -group) %>%
mutate(Metric = sub(pattern = "_hours", replacement = "", x = Metric))
## Levels
level_chr <- sub(pattern = "_hours", replacement = "", x = main_vars)
## Colour definitions
colour_defs <-
c("Meeting" = "#34b1e2",
"Email" = "#1d627e",
"Instant_Message" = "#adc0cb",
"Unscheduled_Call" = "#b4d5dd")
colour_defs <- colour_defs[names(colour_defs) %in% level_chr]
plot_object <-
myTable_long %>%
mutate(Metric = factor(Metric, levels = level_chr)) %>%
ggplot(aes(x = Date, y = Hours, colour = Metric)) +
geom_area(aes(fill = Metric), alpha = 1.0, position = 'stack') +
theme_wpa_basic() +
scale_y_continuous(labels = round) +
theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
scale_colour_manual(values = colour_defs) +
scale_fill_manual(values = colour_defs) +
facet_wrap(.~group) +
labs(title = "Total Collaboration Hours",
subtitle = paste("Weekly collaboration hours by", camel_clean(hrvar))) +
labs(caption = extract_date_range(data, return = "text"))
if(return == "table"){
myTable %>%
as_tibble() %>%
mutate(Collaboration_hours = select(., main_vars) %>%
apply(1, sum, na.rm = TRUE))
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
#' @rdname collaboration_area
#' @export
collab_area <- collaboration_area
# ---- End of R/collaboration_area.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of Collaboration Hours as a 100% stacked bar
#'
#' @description
#' Analyze the distribution of Collaboration Hours.
#' Returns a stacked bar plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @template ch
#'
#' @inheritParams create_dist
#' @inherit create_dist return
#'
#' @family Visualization
#' @family Collaboration
#'
#' @examples
#' # Return plot
#' collaboration_dist(pq_data, hrvar = "Organization")
#'
#' # Return summary table
#' collaboration_dist(pq_data, hrvar = "Organization", return = "table")
#' @export
collaboration_dist <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot",
cut = c(15, 20, 25)) {
create_dist(data = data,
metric = "Collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return,
cut = cut)
}
#' @rdname collaboration_dist
#' @export
collab_dist <- collaboration_dist
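# Illustrative sketch (assumes `pq_data` from the package): the `cut` argument
# sets the three break points used to bucket collaboration hours, so the call
# below produces bands of < 10, 10 - 20, 20 - 30, and 30+ hours.
#
# collaboration_dist(pq_data, hrvar = "Organization", cut = c(10, 20, 30))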
# ---- End of R/collaboration_dist.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of Collaboration Hours (Fizzy Drink plot)
#'
#' @description
#' Analyze weekly collaboration hours distribution, and returns
#' a 'fizzy' scatter plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @template ch
#'
#' @inheritParams create_fizz
#' @inherit create_fizz return
#'
#' @family Visualization
#' @family Collaboration
#'
#' @examples
#' # Return plot
#' collaboration_fizz(pq_data, hrvar = "Organization", return = "plot")
#'
#' # Return summary table
#' collaboration_fizz(pq_data, hrvar = "Organization", return = "table")
#'
#' @export
collaboration_fizz <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_fizz(data = data,
metric = "Collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
#' @rdname collaboration_fizz
#' @export
collab_fizz <- collaboration_fizz
# ---- End of R/collaboration_fizz.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Collaboration Time Trend - Line Chart
#'
#' @description
#' Provides a week by week view of collaboration time, visualised as line charts.
#' By default returns a line chart for collaboration hours,
#' with a separate panel per value in the HR attribute.
#' Additional options available to return a summary table.
#'
#' @template ch
#'
#' @inheritParams create_line
#' @inherit create_line return
#'
#' @family Visualization
#' @family Collaboration
#'
#' @examples
#' # Return a line plot
#' collaboration_line(pq_data, hrvar = "LevelDesignation")
#'
#' # Return summary table
#' collaboration_line(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
collaboration_line <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
## Inherit arguments
create_line(data = data,
metric = "Collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
#' @rdname collaboration_line
#' @export
collab_line <- collaboration_line
# ---- End of R/collaboration_line.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Collaboration Ranking
#'
#' @description
#' This function scans a standard query output for groups with high levels of
#' 'Weekly Digital Collaboration'. Returns a plot by default, with an option to
#' return a table with all groups (across multiple HR attributes) ranked by
#' hours of digital collaboration.
#'
#' @details
#' Uses the metric `Collaboration_hours`.
#' See `create_rank()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_rank
#' @inherit create_rank return
#'
#' @family Visualization
#' @family Collaboration
#'
#' @examples
#' # Return rank table
#' collaboration_rank(
#' data = pq_data,
#' return = "table"
#' )
#'
#' # Return plot
#' collaboration_rank(
#' data = pq_data,
#' return = "plot"
#' )
#'
#' @export
collaboration_rank <- function(data,
hrvar = extract_hr(data),
mingroup = 5,
mode = "simple",
plot_mode = 1,
return = "plot"){
create_rank(data,
metric = "Collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
mode = mode,
plot_mode = plot_mode,
return = return)
}
#' @rdname collaboration_rank
#' @export
collab_rank <- collaboration_rank
# ---- End of R/collaboration_rank.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Collaboration Summary
#'
#' @description
#' Provides an overview analysis of 'Weekly Digital Collaboration'.
#' Returns a stacked bar plot of Email and Meeting Hours by default.
#' Additional options available to return a summary table.
#'
#' @details
#' Uses the metrics `Meeting_hours`, `Email_hours`, `Unscheduled_Call_hours`,
#' and `Instant_Message_hours`.
#'
#' @template spq-params
#' @param return Character vector specifying what to return, defaults to "plot".
#' Valid inputs are "plot" and "table".
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats reorder
#'
#' @family Visualization
#' @family Collaboration
#'
#' @examples
#' # Return a ggplot bar chart
#' collaboration_sum(pq_data, hrvar = "LevelDesignation")
#'
#' # Return a summary table
#' collaboration_sum(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @return
#' Returns a 'ggplot' object by default, where 'plot' is passed in `return`.
#' When 'table' is passed, a summary table is returned as a data frame.
#'
#' @export
collaboration_sum <- function(data,
hrvar = "Organization",
                              mingroup = 5,
return = "plot"){
if("Instant_message_hours" %in% names(data)){
data <- rename(data, Instant_Message_hours = "Instant_message_hours")
}
if("Unscheduled_Call_hours" %in% names(data)){
main_vars <- c("Meeting_hours",
"Email_hours",
"Instant_Message_hours",
"Unscheduled_Call_hours")
} else {
main_vars <- c("Meeting_hours",
"Email_hours")
}
create_stacked(data = data,
hrvar = hrvar,
metrics = main_vars,
mingroup = mingroup,
return = return)
}
#' @rdname collaboration_sum
#' @export
collab_sum <- collaboration_sum
#' @rdname collaboration_sum
#' @export
collaboration_summary <- collaboration_sum
#' @rdname collaboration_sum
#' @export
collab_summary <- collaboration_sum
# ---- End of R/collaboration_sum.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Collaboration Time Trend
#'
#' @description
#' Provides a week by week view of collaboration time.
#' By default returns a week by week heatmap, highlighting the points in time with most activity.
#' Additional options available to return a summary table.
#'
#' @template ch
#'
#' @inheritParams create_trend
#'
#' @family Visualization
#' @family Collaboration
#'
#' @return
#' Returns a 'ggplot' object by default, where 'plot' is passed in `return`.
#' When 'table' is passed, a summary table is returned as a data frame.
#'
#' @examples
#' # Run plot
#' collaboration_trend(pq_data)
#'
#' # Run table
#' collaboration_trend(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
collaboration_trend <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_trend(data,
metric = "Collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
# ---- End of R/collaboration_trend.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Copy a data frame to clipboard for pasting in Excel
#'
#' @description
#' This is a pipe-optimised function, that feeds into `vivainsights::export()`,
#' but can be used as a stand-alone function.
#'
#' Based on the original function from
#' <https://github.com/martinctc/surveytoolbox>.
#'
#' @param x Data frame to be passed through. Cannot contain list-columns or
#' nested data frames.
#' @param row.names A logical value specifying whether to include row names.
#'   Defaults to `FALSE`.
#' @param col.names A logical value specifying whether to include column names.
#'   Defaults to `TRUE`.
#' @param quietly Logical value; set to `TRUE` to suppress printing the data
#'   frame to the console. Defaults to `FALSE`.
#' @param ... Additional arguments to pass to `write.table()`.
#'
#' @importFrom utils write.table
#'
#' @family Import and Export
#'
#' @return
#' Copies a data frame to the clipboard with no return value.
#'
#' @export
copy_df <- function(x,
                    row.names = FALSE,
                    col.names = TRUE,
                    quietly = FALSE,
                    ...) {

  utils::write.table(x, "clipboard-50000",
                     sep = "\t",
                     row.names = row.names,
                     col.names = col.names,
                     ...)

  if(quietly == FALSE) print(x)
}
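# Illustrative sketch: the "clipboard-50000" connection used above is only
# available on Windows; on other platforms, writing to a file with
# `write.table()` is a portable alternative.
#
# copy_df(head(pq_data)) # paste the result directly into Excel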
# ---- End of R/copy_df.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Compute Information Value for Predictive Variables
#'
#' @description This function calculates the Information Value (IV) for the
#' selected numeric predictor variables in the dataset, given a specified
#' outcome variable. The Information Value provides a measure of the predictive
#' power of each variable in relation to the outcome variable, which can be
#' useful in feature selection for predictive modeling.
#'
#' @details
#' This is a wrapper around `wpa::create_IV()`.
#'
#' @param data A Person Query dataset in the form of a data frame.
#' @param predictors A character vector specifying the columns to be used as
#' predictors. Defaults to NULL, where all numeric vectors in the data will be
#' used as predictors.
#' @param outcome String specifying the column name for a binary variable,
#' containing only the values 1 or 0.
#' @param bins Number of bins to use, defaults to 5.
#' @param siglevel Significance level to use in comparing populations for the
#'   outcomes, defaults to 0.05.
#' @param exc_sig Logical value determining whether to exclude values where
#'   the p-value lies below what is set at `siglevel`. Defaults to `FALSE`,
#'   in which case no p-value calculation is performed.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"summary"`
#' - `"list"`
#' - `"plot-WOE"`
#' - `"IV"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: 'ggplot' object. A bar plot showing the IV value of the top
#' (maximum 12) variables.
#' - `"summary"`: data frame. A summary table for the metric.
#' - `"list"`: list. A list of outputs for all the input variables.
#' - `"plot-WOE"`: A list of 'ggplot' objects that show the WOE for each
#' predictor used in the model.
#' - `"IV"` returns a list object which mirrors the return
#' in `Information::create_infotables()`.
#'
#' @import dplyr
#'
#' @family Variable Association
#' @family Information Value
#'
#' @examples
#' # Return a bar plot of IV
#' pq_data %>%
#' dplyr::mutate(X = ifelse(Internal_network_size > 40, 1, 0)) %>%
#' create_IV(outcome = "X",
#' predictors = c("Email_hours",
#' "Meeting_hours",
#' "Chat_hours"),
#' return = "plot")
#'
#'
#' # Return summary
#' pq_data %>%
#' dplyr::mutate(X = ifelse(Internal_network_size > 40, 1, 0)) %>%
#' create_IV(outcome = "X",
#' predictors = c("Email_hours", "Meeting_hours"),
#' return = "summary")
#'
#' @export
create_IV <- function(data,
predictors = NULL,
outcome,
bins = 5,
siglevel = 0.05,
exc_sig = FALSE,
return = "plot"){
wpa::create_IV(
data = data,
predictors = predictors,
outcome = outcome,
bins = bins,
siglevel = siglevel,
exc_sig = exc_sig,
return = return
)
}
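# Illustrative sketch (assuming the return conventions of
# `Information::create_infotables()`): with `return = "IV"`, the per-variable
# IV summary is available as a data frame under `$Summary`.
#
# iv <- pq_data %>%
#   dplyr::mutate(X = ifelse(Internal_network_size > 40, 1, 0)) %>%
#   create_IV(outcome = "X", return = "IV")
# iv$Summary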
# ---- End of R/create_IV.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Mean Bar Plot for any metric
#'
#' @description
#' Provides an overview analysis of a selected metric by calculating its mean
#' for each group.
#' Returns a bar plot showing the average of a selected metric by default.
#' Additional options available to return a summary table.
#'
#' @template spq-params
#' @param mingroup Numeric value setting the privacy threshold / minimum group
#' size. Defaults to 5.
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @param bar_colour String to specify colour to use for bars.
#' In-built accepted values include `"default"` (default), `"alert"` (red), and
#' `"darkblue"`. Otherwise, hex codes are also accepted. You can also supply
#' RGB values via `rgb2hex()`.
#' @param na.rm A logical value indicating whether `NA` should be stripped
#' before the computation proceeds. Defaults to `FALSE`.
#' @param percent Logical value to determine whether to show labels as
#' percentage signs. Defaults to `FALSE`.
#' @param plot_title An option to override plot title.
#' @param plot_subtitle An option to override plot subtitle.
#' @param legend_lab String. Option to override legend title/label. Defaults to
#' `NULL`, where the metric name will be populated instead.
#' @param rank String specifying how to rank the bars. Valid inputs are:
#' - `"descending"` - ranked highest to lowest from top to bottom (default).
#' - `"ascending"` - ranked lowest to highest from top to bottom.
#' - `NULL` - uses the original levels of the HR attribute.
#' @param xlim An option to set max value in x axis.
#' @param text_just `r lifecycle::badge('experimental')` A numeric value
#' controlling for the horizontal position of the text labels. Defaults to
#' 0.5.
#' @param text_colour `r lifecycle::badge('experimental')` String to specify
#' colour to use for the text labels. Defaults to `"#FFFFFF"`.
#'
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A bar plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @importFrom scales wrap_format
#' @importFrom stats reorder
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' # Return a ggplot bar chart
#' create_bar(pq_data, metric = "Collaboration_hours", hrvar = "LevelDesignation")
#'
#' # Change bar colour
#' create_bar(pq_data,
#' metric = "After_hours_collaboration_hours",
#' bar_colour = "alert")
#'
#' # Custom data label positions and formatting
#' pq_data %>%
#' create_bar(
#' metric = "Meetings",
#' text_just = 1.1,
#' text_colour = "black",
#' xlim = 20)
#'
#' # Return a summary table
#' create_bar(pq_data,
#' metric = "Collaboration_hours",
#' hrvar = "LevelDesignation",
#' return = "table")
#' @export
create_bar <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
return = "plot",
bar_colour = "default",
na.rm = FALSE,
percent = FALSE,
plot_title = us_to_space(metric),
plot_subtitle = paste("Average by", tolower(camel_clean(hrvar))),
legend_lab = NULL,
rank = "descending",
xlim = NULL,
text_just = 0.5,
text_colour = "#FFFFFF"){
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Handle `legend_lab`
if(is.null(legend_lab)){
legend_lab <- gsub("_", " ", metric)
}
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Clean metric name
clean_nm <- us_to_space(metric)
## Data for bar plot
plot_data <-
data %>%
rename(group = !!sym(hrvar)) %>%
group_by(PersonId, group) %>%
summarise(!!sym(metric) := mean(!!sym(metric), na.rm = na.rm)) %>%
ungroup() %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
filter(Employee_Count >= mingroup)
## Colour bar override
if(bar_colour == "default"){
bar_colour <- "#34b1e2"
} else if(bar_colour == "alert"){
bar_colour <- "#FE7F4F"
} else if(bar_colour == "darkblue"){
bar_colour <- "#1d627e"
}
## Bar plot
plot_object <- data %>%
create_stacked(
metrics = metric,
hrvar = hrvar,
mingroup = mingroup,
stack_colours = bar_colour,
percent = percent,
plot_title = plot_title,
plot_subtitle = plot_subtitle,
legend_lab = legend_lab,
return = "plot",
rank = rank,
xlim = xlim,
text_just = text_just,
text_colour = text_colour
)
summary_table <-
plot_data %>%
select(group, !!sym(metric)) %>%
group_by(group) %>%
summarise(!!sym(metric) := mean(!!sym(metric)),
n = n())
if(return == "table"){
return(summary_table)
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
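# Illustrative sketch: passing `hrvar = NULL` routes the data through
# `totals_col()`, so a single "Total" bar is returned for the whole population.
#
# create_bar(pq_data, metric = "Collaboration_hours", hrvar = NULL)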
# ---- End of R/create_bar.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a bar chart without aggregation for any metric
#'
#' @description
#' This function creates a bar chart directly from the aggregated / summarised
#' data. Unlike `create_bar()` which performs a person-level aggregation, there
#' is no calculation for `create_bar_asis()` and the values are rendered as they
#' are passed into the function.
#'
#' @param data Plotting data as a data frame.
#' @param group_var String containing name of variable for the group.
#' @param bar_var String containing name of variable representing the value of
#' the bars.
#' @param title Title of the plot.
#' @param subtitle Subtitle of the plot.
#' @param caption Caption of the plot.
#' @param ylab Y-axis label for the plot (group axis)
#' @param xlab X-axis label of the plot (bar axis).
#' @param percent Logical value to determine whether to show labels as
#' percentage signs. Defaults to `FALSE`.
#' @param bar_colour String to specify colour to use for bars.
#' In-built accepted values include "default" (default), "alert" (red), and
#' "darkblue". Otherwise, hex codes are also accepted. You can also supply
#' RGB values via `rgb2hex()`.
#' @param rounding Numeric value to specify number of digits to show in data
#' labels
#'
#' @return
#' 'ggplot' object. A horizontal bar plot.
#'
#' @examples
#' # Creating a custom bar plot without mean aggregation
#' library(dplyr)
#'
#' pq_data %>%
#' group_by(Organization) %>%
#' summarise(across(.cols = Meeting_hours,
#' .fns = ~sum(., na.rm = TRUE))) %>%
#' create_bar_asis(group_var = "Organization",
#' bar_var = "Meeting_hours",
#' title = "Total Meeting Hours over period",
#' subtitle = "By Organization",
#' caption = extract_date_range(pq_data, return = "text"),
#' bar_colour = "darkblue",
#' rounding = 0)
#'
#' # Summarise median `Emails_sent` without person-level averaging
#' med_df <-
#' pq_data %>%
#' group_by(Organization) %>%
#' summarise(Emails_sent_median = median(Emails_sent))
#'
#' med_df %>%
#' create_bar_asis(
#' group_var = "Organization",
#' bar_var = "Emails_sent_median",
#' title = "Emails sent by organization",
#' subtitle = "Median values",
#' bar_colour = "darkblue",
#' caption = extract_date_range(pq_data, return = "text")
#' )
#'
#' @import ggplot2
#' @import dplyr
#'
#' @family Visualization
#' @family Flexible
#'
#' @export
create_bar_asis <- function(data,
group_var,
bar_var,
title = NULL,
subtitle = NULL,
caption = NULL,
ylab = group_var,
xlab = bar_var,
percent = FALSE,
bar_colour = "default",
rounding = 1){
## Colour bar override
if(bar_colour == "default"){
bar_colour <- "#34b1e2"
} else if(bar_colour == "alert"){
bar_colour <- "#FE7F4F"
} else if(bar_colour == "darkblue"){
bar_colour <- "#1d627e"
}
up_break <- max(data[[bar_var]], na.rm = TRUE) * 1.3
if(percent == FALSE){
returnPlot <-
data %>%
ggplot(aes(x = reorder(!!sym(group_var), !!sym(bar_var)), y = !!sym(bar_var))) +
geom_col(fill = bar_colour) +
geom_text(aes(label = round(!!sym(bar_var), digits = rounding)),
position = position_stack(vjust = 0.5),
# hjust = -0.25,
color = "#FFFFFF",
fontface = "bold",
size = 4)
} else if(percent == TRUE){
returnPlot <-
data %>%
ggplot(aes(x = reorder(!!sym(group_var), !!sym(bar_var)), y = !!sym(bar_var))) +
geom_col(fill = bar_colour) +
geom_text(aes(label = scales::percent(!!sym(bar_var),
accuracy = 10 ^ -rounding)),
position = position_stack(vjust = 0.5),
# hjust = -0.25,
color = "#FFFFFF",
fontface = "bold",
size = 4)
}
returnPlot +
scale_y_continuous(expand = c(.01, 0), limits = c(0, up_break)) +
coord_flip() +
labs(title = title,
subtitle = subtitle,
caption = caption,
y = camel_clean(xlab),
x = ylab) +
theme_wpa_basic() +
theme(
axis.line = element_blank(),
axis.ticks = element_blank(),
axis.text.x = element_blank(),
axis.title = element_blank()
)
}
# ---- End of R/create_bar_asis.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Box Plot for any metric
#'
#' @description
#' Analyzes a selected metric and returns a box plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @details
#' This is a general purpose function that powers all the functions
#' in the package that produce box plots.
#'
#' @template spq-params
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @param return String specifying what to return. This must be one of the
#'   following strings:
#'   - `"plot"`
#'   - `"table"`
#'   - `"data"`
#'
#'   See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A box plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#' - `"data"`: data frame. Person-level averages underlying the box plot.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats median
#' @importFrom stats sd
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' # Create a box plot for Collaboration_hours by Level Designation
#' create_boxplot(pq_data, metric = "Collaboration_hours", hrvar = "LevelDesignation", return = "plot")
#'
#' # Create a box plot for Collaboration_hours by Organization
#' create_boxplot(pq_data, metric = "Collaboration_hours", hrvar = "Organization", return = "plot")
#'
#' # Create a summary statistics table for Collaboration_hours by Organization
#' create_boxplot(pq_data, metric = "Collaboration_hours", hrvar = "Organization", return = "table")
#'
#' @export
create_boxplot <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
return = "plot") {
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Clean metric name
clean_nm <- us_to_space(metric)
plot_data <-
data %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
group_by(PersonId, group) %>%
summarise(!!sym(metric) := mean(!!sym(metric))) %>%
ungroup() %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
filter(Employee_Count >= mingroup)
## Get max value
max_point <- max(plot_data[[metric]]) * 1.2
plot_legend <-
plot_data %>%
group_by(group) %>%
summarize(Employee_Count = first(Employee_Count)) %>%
mutate(Employee_Count = paste("n=",Employee_Count))
## summary table
summary_table <-
plot_data %>%
select(group, tidyselect::all_of(metric)) %>%
group_by(group) %>%
summarise(mean = mean(!!sym(metric)),
median = median(!!sym(metric)),
sd = sd(!!sym(metric)),
min = min(!!sym(metric)),
max = max(!!sym(metric)),
range = max - min,
n = n())
## group order
group_ord <-
summary_table %>%
arrange(desc(mean)) %>%
pull(group)
plot_object <-
plot_data %>%
mutate(group = factor(group, levels = group_ord)) %>%
ggplot(aes(x = group, y = !!sym(metric))) +
geom_boxplot(color = "#578DB8") +
ylim(0, max_point) +
annotate("text", x = plot_legend$group, y = 0, label = plot_legend$Employee_Count) +
scale_x_discrete(labels = scales::wrap_format(10)) +
theme_wpa_basic() +
theme(axis.text=element_text(size=12),
axis.text.x = element_text(angle = 30, hjust = 1),
plot.title = element_text(color="grey40", face="bold", size=18),
plot.subtitle = element_text(size=14),
legend.position = "top",
legend.justification = "right",
legend.title=element_text(size=14),
legend.text=element_text(size=14)) +
labs(title = clean_nm,
subtitle = paste("Distribution of",
tolower(clean_nm),
"by",
tolower(camel_clean(hrvar)))) +
xlab(hrvar) +
ylab(paste("Average", clean_nm)) +
labs(caption = extract_date_range(data, return = "text"))
if(return == "table"){
summary_table %>%
as_tibble() %>%
return()
} else if(return == "plot"){
return(plot_object)
} else if(return == "data"){
plot_data %>%
mutate(group = factor(group, levels = group_ord)) %>%
arrange(desc(group))
} else {
stop("Please enter a valid input for `return`.")
}
}
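# Illustrative sketch: besides "plot" and "table", `return = "data"` yields the
# person-level averages underlying the box plot, with groups ordered by mean.
#
# create_boxplot(pq_data,
#                metric = "Collaboration_hours",
#                hrvar = "Organization",
#                return = "data") %>%
#   head()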
# ---- End of R/create_boxplot.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a bubble plot with two selected Viva Insights metrics (General
#' Purpose), with size representing the number of employees in the group.
#'
#' @description Returns a bubble plot of two selected metrics, using size to map
#' the number of employees.
#'
#' @details This is a general purpose function that powers all the functions in
#' the package that produce bubble plots.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param metric_x Character string containing the name of the metric, e.g.
#' "Collaboration_hours"
#' @param metric_y Character string containing the name of the metric, e.g.
#' "Collaboration_hours"
#' @param hrvar HR Variable by which to split metrics, defaults to
#' "Organization" but accepts any character vector, e.g. "LevelDesignation"
#' @param mingroup Numeric value setting the privacy threshold / minimum group
#' size. Defaults to 5.
#'
#' @param return String specifying what to return. This must be one of the
#'   following strings:
#'   - `"plot"`
#'   - `"table"`
#'
#'   See `Value` for more information.
#'
#' @param bubble_size A numeric vector of length two to specify the size range
#' of the bubbles
#'
#' @import dplyr
#' @import ggplot2
#' @import scales
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' create_bubble(pq_data, "Collaboration_hours", "Multitasking_hours", hrvar = "Organization")
#'
#'
#' @return A different output is returned depending on the value passed to the
#' `return` argument:
#' - `"plot"`: 'ggplot' object. A bubble plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @export
create_bubble <- function(data,
metric_x,
metric_y,
hrvar = "Organization",
mingroup = 5,
return = "plot",
bubble_size = c(1, 10)){
## Check inputs
required_variables <- c(hrvar,
metric_x,
metric_y,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Clean metric names
clean_x <- us_to_space(metric_x)
clean_y <- us_to_space(metric_y)
myTable <-
data %>%
group_by(PersonId, !!sym(hrvar)) %>%
summarise_at(vars(!!sym(metric_x), !!sym(metric_y)), ~mean(., na.rm = TRUE)) %>%
group_by(!!sym(hrvar)) %>%
summarise_at(vars(!!sym(metric_x), !!sym(metric_y)), ~mean(., na.rm = TRUE)) %>%
ungroup() %>%
left_join(hrvar_count(data, hrvar = hrvar, return = "table"),
by = hrvar) %>%
filter(n >= mingroup)
plot_object <-
myTable %>%
ggplot(aes(x = !!sym(metric_x),
y = !!sym(metric_y),
label = !!sym(hrvar))) +
geom_point(alpha = 0.5, color = rgb2hex(0, 120, 212), aes(size = n)) +
ggrepel::geom_text_repel(size = 3) +
labs(title = paste0(clean_x, " and ", clean_y),
subtitle = paste("By", camel_clean(hrvar)),
caption = paste("Total employees =", sum(myTable$n), "|",
extract_date_range(data, return = "text"))) +
xlab(clean_x) +
ylab(clean_y) +
scale_size(range = bubble_size) +
theme_wpa_basic()
if(return == "table"){
return(myTable)
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
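# Illustrative sketch: `bubble_size` rescales the point-size range mapped to
# group headcount, which helps when group sizes differ by orders of magnitude.
#
# create_bubble(pq_data,
#               metric_x = "Collaboration_hours",
#               metric_y = "Multitasking_hours",
#               hrvar = "Organization",
#               bubble_size = c(2, 20))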
# ---- End of R/create_bubble.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a density plot for any metric
#'
#' @description
#' Provides an analysis of the distribution of a selected metric.
#' Returns a faceted density plot by default.
#' Additional options available to return the underlying frequency table.
#'
#' @template spq-params
#' @param metric String containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @param ncol Numeric value setting the number of columns on the plot. Defaults
#' to `NULL` (automatic).
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#' - `"data"`
#' - `"frequency"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: 'ggplot' object. A faceted density plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#' - `"data"`: data frame. Data with calculated person averages.
#' - `"frequency`: list of data frames. Each data frame contains the
#' frequencies used in each panel of the plotted histogram.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom tidyr spread
#' @importFrom stats median
#' @importFrom stats sd
#'
#' @family Flexible
#'
#' @examples
#' # Return plot for whole organization
#' create_density(pq_data, metric = "Collaboration_hours", hrvar = NULL)
#'
#' # Return plot
#' create_density(pq_data, metric = "Collaboration_hours", hrvar = "Organization")
#'
#' # Return plot but coerce plot to three columns
#' create_density(pq_data, metric = "Collaboration_hours", hrvar = "Organization", ncol = 3)
#'
#' # Return summary table
#' create_density(pq_data, metric = "Collaboration_hours", hrvar = "Organization", return = "table")
#' @export
create_density <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
ncol = NULL,
return = "plot") {
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Create NULL variables
density <- scaled <- ndensity <- NULL
## Clean metric name
clean_nm <- us_to_space(metric)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Basic Data for bar plot
## Calculate person-averages
plot_data <-
data %>%
rename(group = !!sym(hrvar)) %>%
group_by(PersonId, group) %>%
summarise(!!sym(metric) := mean(!!sym(metric))) %>%
ungroup() %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
filter(Employee_Count >= mingroup)
## Employee count / base size table
plot_legend <-
plot_data %>%
group_by(group) %>%
summarize(Employee_Count = first(Employee_Count)) %>%
mutate(Employee_Count = paste("n=",Employee_Count))
if(return == "table"){
## Table to return
plot_data %>%
group_by(group) %>%
summarise(
mean = mean(!!sym(metric), na.rm = TRUE),
median = median(!!sym(metric), na.rm = TRUE),
max = max(!!sym(metric), na.rm = TRUE),
min = min(!!sym(metric), na.rm = TRUE)
) %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group")
} else if(return == "plot"){
## Density plot
plot_data %>%
ggplot(aes(x = !!sym(metric))) +
geom_density(lwd = 1, colour = 4, fill = 4, alpha = 0.25) +
facet_wrap(group ~ ., ncol = ncol) +
theme_wpa_basic() +
theme(strip.background = element_rect(color = "#1d627e",
fill = "#1d627e"),
strip.text = element_text(size = 10,
colour = "#FFFFFF",
face = "bold")) +
labs(title = clean_nm,
subtitle = paste("Distribution of", tolower(clean_nm), "by", tolower(camel_clean(hrvar)))) +
xlab(clean_nm) +
ylab("Density") +
labs(caption = extract_date_range(data, return = "text"))
} else if(return == "frequency"){
hist_obj <-
plot_data %>%
ggplot(aes(x = !!sym(metric))) +
geom_density() +
facet_wrap(group ~ ., ncol = ncol)
ggplot2::ggplot_build(hist_obj)$data[[1]] %>%
select(
group,
PANEL,
y,
x,
density,
scaled,
ndensity,
count,
n
) %>%
group_split(PANEL)
} else if(return == "data"){
plot_data
} else {
stop("Please enter a valid input for `return`.")
}
}
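# Illustrative sketch: `return = "frequency"` exposes the values computed by
# `ggplot2::ggplot_build()` for the density plot, one data frame per facet
# panel.
#
# freq_list <- create_density(pq_data,
#                             metric = "Collaboration_hours",
#                             hrvar = "Organization",
#                             return = "frequency")
# freq_list[[1]]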
# ---- End of R/create_density.R ----
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Horizontal 100 percent stacked bar plot for any metric
#'
#' @description
#' Provides an analysis of the distribution of a selected metric.
#' Returns a stacked bar plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @template spq-params
#' @param metric String containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @param cut A numeric vector of length three to specify the breaks for the
#'   distribution, e.g. c(10, 15, 20)
#' @param dist_colours A character vector of length four to specify colour
#'   codes for the stacked bars. One more colour than the number of breaks in
#'   `cut` must be supplied, otherwise a default palette is used.
#' @param unit String to specify what unit to use. This defaults to `"hours"`
#' but can accept any custom string. See `cut_hour()` for more details.
#' @inheritParams cut_hour
#' @param sort_by String to specify the bucket label to sort by. Defaults to
#' `NULL` (no sorting).
#' @param labels Character vector to override labels for the created
#' categorical variables. Must be a named vector - see examples.
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A stacked bar plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom tidyr spread
#' @importFrom stats median
#' @importFrom stats sd
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' # Return plot
#' create_dist(pq_data, metric = "Collaboration_hours", hrvar = "Organization")
#'
#' # Return summary table
#' create_dist(pq_data, metric = "Collaboration_hours", hrvar = "Organization", return = "table")
#'
#' # Use custom labels by providing a label vector
#' eh_labels <- c(
#' "Fewer than fifteen" = "< 15 hours",
#' "Between fifteen and twenty" = "15 - 20 hours",
#' "Between twenty and twenty-five" = "20 - 25 hours",
#' "More than twenty-five" = "25+ hours"
#' )
#'
#' pq_data %>% create_dist(metric = "Meeting_hours", labels = eh_labels, return = "plot")
#'
#' # Sort by a category
#' pq_data %>% create_dist(metric = "Collaboration_hours", sort_by = "25+ hours")
#' @export
create_dist <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
return = "plot",
cut = c(15, 20, 25),
dist_colours = c("#facebc",
"#fcf0eb",
"#b4d5dd",
"#bfe5ee"),
unit = "hours",
lbound = 0,
ubound = 200,
sort_by = NULL,
labels = NULL) {
## Check inputs -----------------------------------------------------------
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present -----------------------------
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Clean metric name ------------------------------------------------------
clean_nm <- us_to_space(metric)
## Replace labels ---------------------------------------------------------
replace_labels <- function(x, labels){
ifelse(
is.na(names(labels[match(x, labels)])),
x,
names(labels[match(x, labels)])
)
}
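  ## `labels` is a named character vector of the form c("new label" = "old label");
  ## values matching `labels` are swapped for their names, and unmatched values
  ## pass through unchanged. For example (illustrative values):
  ## replace_labels("< 15 hours", labels = c("Fewer than fifteen" = "< 15 hours"))
  ## returns "Fewer than fifteen".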
## Handling NULL values passed to hrvar -----------------------------------
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Basic Data for bar plot ------------------------------------------------
plot_data <-
data %>%
rename(group = !!sym(hrvar)) %>%
group_by(PersonId, group) %>%
summarise(!!sym(metric) := mean(!!sym(metric))) %>%
ungroup() %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
filter(Employee_Count >= mingroup)
## Create buckets of collaboration hours ---------------------------------
plot_data <-
plot_data %>%
mutate(bucket_hours = cut_hour(!!sym(metric),
cuts = cut,
unit = unit,
lbound = lbound,
ubound = ubound))
## Employee count / base size table --------------------------------------
plot_legend <-
plot_data %>%
group_by(group) %>%
summarize(Employee_Count = first(Employee_Count)) %>%
mutate(Employee_Count = paste("n=",Employee_Count))
## Data for bar plot
plot_table <-
plot_data %>%
group_by(group, bucket_hours) %>%
summarize(Employees = n(),
Employee_Count = first(Employee_Count),
percent = Employees / Employee_Count ) %>%
arrange(group, desc(bucket_hours))
## Table for annotation --------------------------------------------------
annot_table <-
plot_legend %>%
dplyr::left_join(plot_table, by = "group")
## Remove max from axis labels, and add %
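  ## The topmost break label is blanked so it does not run into the "n="
  ## annotation band drawn beyond y = 1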
  max_blank <- function(x){
    as.character(
      c(
        scales::percent(x[-length(x)]),
        ""
      )
    )
  }
## Replace dist_colours --------------------------------------------------
if((length(dist_colours) - length(cut)) < 1){
dist_colours <- heat_colours(n = length(cut) + 1)
message("Insufficient colours supplied to `dist_colours` - using default colouring palette instead.",
"Please supply a vector of colours of length n + 1 where n is the length of vector supplied to `cut`.")
}
## Table to return -------------------------------------------------------
return_table <-
plot_table %>%
select(group, bucket_hours, percent) %>%
{if(is.null(labels)){
.
} else if(is.function(labels)){
mutate(., bucket_hours = do.call(what = labels, args = list(bucket_hours)))
} else {
mutate(., bucket_hours = replace_labels(x = bucket_hours, labels = labels))
}} %>%
spread(bucket_hours, percent) %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
ungroup() %>%
{ if(is.null(sort_by)){
.
} else {
arrange(., desc(!!sym(sort_by)))
}} %>%
mutate(group = factor(group, levels = unique(group)))
## Bar plot -------------------------------------------------------------
plot_object <-
plot_table %>%
mutate(group = factor(group, levels = levels(return_table$group))) %>%
ggplot(aes(x = group,
y = Employees,
fill = bucket_hours)) +
geom_bar(stat = "identity", position = position_fill(reverse = TRUE)) +
scale_y_continuous(expand = c(.01, 0), labels = max_blank, position = "right") +
coord_flip() +
annotate("text", x = plot_legend$group, y = 1.15, label = plot_legend$Employee_Count, size = 3) +
annotate("rect", xmin = 0.5, xmax = length(plot_legend$group) + 0.5, ymin = 1.05, ymax = 1.25, alpha = .2) +
annotate(x = length(plot_legend$group) + 0.8,
xend = length(plot_legend$group) + 0.8,
y = 0,
yend = 1,
colour = "black",
lwd = 0.75,
geom = "segment") +
# Option to override labels ---------------------------------------------
{if(is.null(labels)){
scale_fill_manual(name = "", values = rev(dist_colours))
} else if(is.function(labels)){
scale_fill_manual(name = "", labels = labels, values = rev(dist_colours))
} else {
      # Match with values, replace with names;
      # flip names and values for use in `scale_fill_manual()`
flip <- function(x){ stats::setNames(object = names(x), nm = x)}
scale_fill_manual(name = "",
labels = flip(labels),
values = rev(dist_colours))
}} +
theme_wpa_basic() +
theme(axis.line = element_blank(),
axis.ticks = element_blank(),
axis.title = element_blank()) +
labs(
title = clean_nm,
subtitle = paste("Percentage of employees by", tolower(camel_clean(hrvar))),
x = camel_clean(hrvar),
caption = extract_date_range(data, return = "text")
)
# Return options ---------------------------------------------------------
if(return == "table"){
return_table
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source: R/create_dist.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create interactive tables in HTML with 'download' buttons.
#'
#' @description
#' See
#' <https://martinctc.github.io/blog/vignette-downloadable-tables-in-rmarkdown-with-the-dt-package/>
#' for more.
#'
#' @details
#' This is a thin wrapper around `wpa::create_dt()`.
#'
#' @param x Data frame to be passed through.
#' @param rounding Numeric value specifying the number of decimal places to display.
#' @param freeze Number of columns from the left to 'freeze'. Defaults to 2,
#' which includes the row number column.
#' @param percent Logical value specifying whether to display numeric columns
#' as percentages.
#'
#' @importFrom dplyr mutate_if
#'
#' @family Import and Export
#'
#' @examples
#' output <- hrvar_count(pq_data, return = "table")
#' create_dt(output)
#'
#' @return
#' Returns an HTML widget displaying rectangular data.
#'
#' @export
create_dt <- function(x, rounding = 1, freeze = 2, percent = FALSE){
wpa::create_dt(
x = x,
rounding = rounding,
freeze = freeze,
percent = percent
)
}
# Source: R/create_dt.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Fizzy Drink / Jittered Scatter Plot for any metric
#'
#' @description
#' Analyzes a selected metric and returns a 'fizzy' scatter plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @details
#' This is a general purpose function that powers all the functions
#' in the package that produce 'fizzy drink' / jittered scatter plots.
#'
#' @template spq-params
#' @param metric Character string containing the name of the metric,
#' e.g. `"Collaboration_hours"`
#' @param return String specifying what to return. This must be one of the following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A jittered scatter plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats median
#' @importFrom stats sd
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' # Create a fizzy plot for Collaboration hours by Level Designation
#' create_fizz(pq_data, metric = "Collaboration_hours", hrvar = "LevelDesignation", return = "plot")
#'
#' # Create a summary statistics table for Collaboration hours by Organization
#' create_fizz(pq_data, metric = "Collaboration_hours", hrvar = "Organization", return = "table")
#'
#' @export
create_fizz <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
return = "plot") {
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Clean metric name
clean_nm <- us_to_space(metric)
## Plot data
plot_data <-
data %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
group_by(PersonId, group) %>%
summarise(!!sym(metric) := mean(!!sym(metric))) %>%
ungroup() %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
filter(Employee_Count >= mingroup)
## Get max value
max_point <- max(plot_data[[metric]]) * 1.2
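  ## (padded 20% above the data maximum so the "n=" annotation band has room)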
plot_legend <-
plot_data %>%
group_by(group) %>%
summarize(Employee_Count = first(Employee_Count)) %>%
mutate(Employee_Count = paste("n=",Employee_Count))
plot_object <-
plot_data %>%
ggplot(aes(x = group, y = !!sym(metric))) +
geom_point(size = 1,
alpha = 1/5,
color = "#578DB8",
position = position_jitter(width = 0.1, height = 0.1)) +
theme_wpa_basic() +
theme(
axis.line = element_blank(),
panel.grid.major.x = element_line(colour = "grey80"),
axis.ticks = element_blank(),
axis.title = element_blank()
) +
annotate("text",
x = plot_legend$group,
y = max_point,
label = plot_legend$Employee_Count,
size = 3) +
annotate("rect",
xmin = 0.5,
xmax = length(plot_legend$group) + 0.5,
ymin = max_point*0.95,
ymax = max_point*1.05,
alpha = .2) +
scale_y_continuous(
position = "right",
limits = c(0, max_point * 1.1)) +
coord_flip() +
labs(title = clean_nm,
subtitle = paste("Distribution of",
tolower(clean_nm),
"by",
tolower(camel_clean(hrvar))),
caption = extract_date_range(data, return = "text"),
x = hrvar,
y = paste("Average", clean_nm))
summary_table <-
plot_data %>%
select(group, tidyselect::all_of(metric)) %>%
group_by(group) %>%
summarise(mean = mean(!!sym(metric)),
median = median(!!sym(metric)),
sd = sd(!!sym(metric)),
min = min(!!sym(metric)),
max = max(!!sym(metric)),
range = max - min,
n = n())
if(return == "table"){
summary_table %>%
as_tibble() %>%
return()
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source: R/create_fizz.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a histogram plot for any metric
#'
#' @description
#' Provides an analysis of the distribution of a selected metric.
#' Returns a faceted histogram by default.
#' Additional options available to return the underlying frequency table.
#'
#' @template spq-params
#' @param metric String containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @param binwidth Numeric value for setting `binwidth` argument within
#' `ggplot2::geom_histogram()`. Defaults to 1.
#'
#' @param ncol Numeric value setting the number of columns on the plot. Defaults
#' to `NULL` (automatic).
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#' - `"data"`
#' - `"frequency"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: 'ggplot' object. A faceted histogram for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#' - `"data"`: data frame. Data with calculated person averages.
#' - `"frequency`: list of data frames. Each data frame contains the
#' frequencies used in each panel of the plotted histogram.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom tidyr spread
#' @importFrom stats median
#' @importFrom stats sd
#'
#' @family Flexible
#'
#' @examples
#' # Return plot for whole organization
#' create_hist(pq_data, metric = "Collaboration_hours", hrvar = NULL)
#'
#' # Return plot
#' create_hist(pq_data, metric = "Collaboration_hours", hrvar = "Organization")
#'
#' # Return plot, arranging the facets in three columns
#' create_hist(pq_data, metric = "Collaboration_hours", hrvar = "Organization", ncol = 3)
#'
#' # Return summary table
#' create_hist(pq_data, metric = "Collaboration_hours", hrvar = "Organization", return = "table")
#' @export
create_hist <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
binwidth = 1,
ncol = NULL,
return = "plot") {
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Clean metric name
clean_nm <- us_to_space(metric)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Basic Data for bar plot
## Calculate person-averages
plot_data <-
data %>%
rename(group = !!sym(hrvar)) %>%
group_by(PersonId, group) %>%
summarise(!!sym(metric) := mean(!!sym(metric))) %>%
ungroup() %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group") %>%
filter(Employee_Count >= mingroup)
## Employee count / base size table
plot_legend <-
plot_data %>%
group_by(group) %>%
summarize(Employee_Count = first(Employee_Count)) %>%
mutate(Employee_Count = paste("n=",Employee_Count))
## Bar plot
plot_object <-
plot_data %>%
ggplot(aes(x = !!sym(metric))) +
geom_histogram(binwidth = binwidth, colour = "white", fill="#34b1e2") +
facet_wrap(group ~ ., ncol = ncol) +
theme_wpa_basic() +
theme(strip.background = element_rect(color = "#1d627e",
fill = "#1d627e"),
strip.text = element_text(size = 10,
colour = "#FFFFFF",
face = "bold")) +
labs(title = clean_nm,
subtitle = paste("Distribution of", tolower(clean_nm), "by", tolower(camel_clean(hrvar)))) +
xlab(clean_nm) +
ylab("Number of employees") +
labs(caption = extract_date_range(data, return = "text"))
## Table to return
return_table <-
plot_data %>%
group_by(group) %>%
summarise(
mean = mean(!!sym(metric), na.rm = TRUE),
median = median(!!sym(metric), na.rm = TRUE),
max = max(!!sym(metric), na.rm = TRUE),
min = min(!!sym(metric), na.rm = TRUE),
.groups = "drop"
) %>%
left_join(data %>%
rename(group = !!sym(hrvar)) %>%
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId)),
by = "group")
if(return == "table"){
return_table
} else if(return == "plot"){
return(plot_object)
} else if(return == "frequency"){
ggplot2::ggplot_build(plot_object)$data[[1]] %>%
select(group,
PANEL,
x,
xmin,
xmax,
y) %>%
group_split(PANEL)
} else if(return == "data"){
plot_data
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source: R/create_hist.R
#' @title
#' Create an incidence analysis reflecting proportion of population scoring above
#' or below a threshold for a metric
#'
#' @description
#' An incidence analysis is generated, with each value in the table reflecting
#' the proportion of the population that is above or below a threshold for a
#' specified metric. If a single `hrvar` is provided, a bar plot is returned;
#' if two `hrvar` values are provided, an incidence table (heatmap) is
#' returned.
#'
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param hrvar Character vector of at most length 2 containing the name of the
#' HR Variable by which to split metrics.
#' @param mingroup Numeric value setting the privacy threshold / minimum group
#' size. Defaults to 5.
#' @param threshold Numeric value specifying the threshold.
#' @param position String specifying whether to measure incidence above or
#'   below the threshold. This must be one of the following strings:
#' - `"above"`: show incidence of those equal to or above the threshold
#' - `"below"`: show incidence of those equal to or below the threshold
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A heat map.
#' - `"table"`: data frame. A summary table.
#'
#' @import dplyr
#' @import ggplot2
#' @importFrom scales percent
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' # Only a single HR attribute
#' create_inc(
#' data = pq_data,
#' metric = "After_hours_collaboration_hours",
#' hrvar = "Organization",
#' threshold = 4,
#' position = "above"
#' )
#'
#' # Two HR attributes
#' create_inc(
#' data = pq_data,
#' metric = "Collaboration_hours",
#' hrvar = c("LevelDesignation", "Organization"),
#' threshold = 20,
#' position = "below"
#' )
#'
#' @export
create_inc <- function(
data,
metric,
hrvar,
mingroup = 5,
threshold,
position,
return = "plot"
){
if(length(hrvar) == 1){
create_inc_bar(
data = data,
metric = metric,
hrvar = hrvar,
mingroup = mingroup,
threshold = threshold,
position = position,
return = return
)
} else if(length(hrvar) == 2){
create_inc_grid(
data = data,
metric = metric,
hrvar = hrvar,
mingroup = mingroup,
threshold = threshold,
position = position,
return = return
)
} else {
stop("`hrvar` can only accept a character vector of length 2.")
}
}
#' @rdname create_inc
#' @export
create_incidence <- create_inc
#' Run `create_inc` with a single `hrvar`,
#' returning a bar chart
#'
#' @noRd
create_inc_bar <- function(
data,
metric,
hrvar,
mingroup = 5,
threshold,
position,
return = "plot"
){
# Transform data so that metrics become proportions
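  # The metric column is recoded to a logical (TRUE/FALSE) per person-week;
  # averaging logicals downstream yields the proportion meeting the condition,
  # which `create_bar()` then renders as a percentage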
  data_t <-
    data %>%
    { if (position == "above"){
      mutate(., !!sym(metric) := !!sym(metric) >= threshold)
    } else if (position == "below"){
      mutate(., !!sym(metric) := !!sym(metric) <= threshold)
    } else {
      stop("Please enter a valid input for `position`.")
    }
    }
# Set title text
title_text <-
paste(
"Incidence of",
tolower(us_to_space(metric)),
position,
threshold
)
# Set subtitle text
subtitle_text <-
paste(
"Percentage and number of employees by",
hrvar
)
# Pipe result to `create_bar()`
create_bar(
data = data_t,
metric = metric,
hrvar = hrvar,
mingroup = mingroup,
return = return,
plot_title = title_text,
plot_subtitle = subtitle_text,
legend_lab = paste("% with",
tolower(us_to_space(metric)),
position,
threshold),
percent = TRUE
)
}
#' Run `create_inc` with two `hrvar` values,
#' returning a heatmap
#'
#' @noRd
create_inc_grid <- function(
data,
metric,
hrvar,
mingroup = 5,
threshold,
position,
return = "plot"
){
# Create table of proportions
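  # Two-stage aggregation: first average the logical flag within each person to
  # get a person-level incidence rate, then average across persons in each cell
  # defined by the two HR attributes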
myTable <-
data %>%
{ if (position == "above"){
mutate(., !!sym(metric) := !!sym(metric) >= threshold)
} else if (position == "below"){
mutate(., !!sym(metric) := !!sym(metric) <= threshold)
}
} %>%
group_by(!!sym(hrvar[1]), !!sym(hrvar[2]), PersonId) %>%
summarise(
!!sym(metric) := mean(!!sym(metric), na.rm = TRUE)
) %>%
group_by(!!sym(hrvar[1]), !!sym(hrvar[2])) %>%
summarise(
!!sym(metric) := mean(!!sym(metric), na.rm = TRUE),
n = n_distinct(PersonId),
.groups = "drop"
) %>%
filter(n >= mingroup) %>%
arrange(desc(!!sym(metric)))
if(return == "table"){
myTable
} else if(return == "plot"){
# Set title text
title_text <-
paste(
"Incidence of",
tolower(us_to_space(metric)),
position,
threshold
)
# Set subtitle text
subtitle_text <-
paste(
"Percentage and number of employees by",
hrvar[1],
"and",
hrvar[2]
)
metric_text <- NULL
myTable %>%
mutate(metric_text = paste0(
scales::percent(!!sym(metric), accuracy = 1),
" (", n, ")")) %>%
ggplot(aes(x = !!sym(hrvar[1]),
y = !!sym(hrvar[2]),
fill = !!sym(metric))) +
geom_tile() +
geom_text(aes(label = metric_text),
colour = "black",
size = 3)+
scale_fill_gradient2(low = rgb2hex(7, 111, 161),
mid = rgb2hex(241, 204, 158),
high = rgb2hex(216, 24, 42),
midpoint = 0.5,
breaks = c(0, 0.5, 1),
labels = c("0%", "", "100%"),
limits = c(0, 1)) +
scale_x_discrete(position = "top", labels = us_to_space) +
scale_y_discrete(labels = us_to_space) +
theme_wpa_basic() +
labs(
title = title_text,
subtitle = subtitle_text,
caption = paste(
extract_date_range(data, return = "text"),
"\n",
"Percentages reflect incidence with respect to population in cell."),
fill = "Incidence"
)
}
}
# Source: R/create_inc.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Time Trend - Line Chart for any metric
#'
#' @description
#' Provides a week by week view of a selected metric, visualised as line charts.
#' By default returns a line chart for the defined metric,
#' with a separate panel per value in the HR attribute.
#' Additional options available to return a summary table.
#'
#' @details
#' This is a general purpose function that powers all the functions
#' in the package that produce faceted line plots.
#'
#' @template spq-params
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @param ncol Numeric value setting the number of columns on the plot. Defaults
#' to `NULL` (automatic).
#'
#' @param return String specifying what to return. This must be one of the following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom tidyselect all_of
#'
#' @family Visualization
#' @family Flexible
#' @family Time-series
#'
#' @examples
#' # Return plot of Email Hours
#' pq_data %>% create_line(metric = "Email_hours", return = "plot")
#'
#' # Return plot of Collaboration Hours
#' pq_data %>% create_line(metric = "Collaboration_hours", return = "plot")
#'
#' # Return plot but coerce plot to two columns
#' pq_data %>%
#' create_line(
#' metric = "Collaboration_hours",
#' hrvar = "Organization",
#' ncol = 2
#' )
#'
#' # Return plot of email hours and cut by `LevelDesignation`
#' pq_data %>% create_line(metric = "Email_hours", hrvar = "LevelDesignation")
#'
#' @return
#' A different output is returned depending on the value passed to the `return` argument:
#' - `"plot"`: 'ggplot' object. A faceted line plot for the metric.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @export
create_line <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
ncol = NULL,
return = "plot"){
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Clean metric name
clean_nm <- us_to_space(metric)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
subtitle_nm <- paste("Total",
tolower(clean_nm),"over time")
} else{
subtitle_nm <- paste("Total",
tolower(clean_nm),
"by",
tolower(camel_clean(hrvar)))
}
myTable <-
data %>%
mutate(MetricDate = as.Date(MetricDate, "%m/%d/%Y")) %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
select(PersonId, MetricDate, group, all_of(metric)) %>%
group_by(group) %>%
mutate(Employee_Count = n_distinct(PersonId)) %>%
filter(Employee_Count >= mingroup) # Keep only groups above privacy threshold
myTable <-
myTable %>%
group_by(MetricDate, group) %>%
summarize(Employee_Count = mean(Employee_Count),
!!sym(metric) := mean(!!sym(metric)),.groups = "drop")
## Data frame to return
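  ## (wide format: one row per group, one column per MetricDate)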
myTable_return <-
myTable %>%
select(MetricDate, group, all_of(metric)) %>%
spread(MetricDate, !!sym(metric))
## Data frame for creating plot
myTable_plot <-
myTable %>%
select(MetricDate, group, all_of(metric)) %>%
group_by(MetricDate, group) %>%
summarise_at(vars(all_of(metric)), ~mean(., na.rm = TRUE)) %>%
ungroup()
return_plot <- function(){
myTable_plot %>%
ggplot(aes(x = MetricDate, y = !!sym(metric))) +
geom_line(colour = "#1d627e") +
facet_wrap(.~group, ncol = ncol) +
scale_fill_gradient(name="Hours", low = "white", high = "red") +
theme_wpa_basic() +
theme(strip.background = element_rect(color = "#1d627e",
fill = "#1d627e"),
strip.text = element_text(size = 10,
colour = "#FFFFFF",
face = "bold")) +
labs(title = clean_nm,
subtitle = subtitle_nm,
x = "Metric Date",
y = clean_nm,
caption = extract_date_range(data, return = "text")) +
ylim(0, NA) # Set origin to zero
}
if(return == "table"){
myTable_return
} else if(return == "plot"){
return_plot()
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source: R/create_line.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a line chart without aggregation for any metric
#'
#' @description
#' This function creates a line chart directly from the aggregated / summarised data.
#' Unlike `create_line()` which performs a person-level aggregation, there is no
#' calculation for `create_line_asis()` and the values are rendered as they are passed
#' into the function. The only requirement is that a `date_var` is provided for the x-axis.
#'
#' @param data Plotting data as a data frame.
#' @param date_var String containing name of variable for the horizontal axis.
#' @param metric String containing name of variable representing the line.
#' @param title Title of the plot.
#' @param subtitle Subtitle of the plot.
#' @param caption Caption of the plot.
#' @param ylab Y-axis label for the plot. Defaults to the metric name.
#' @param xlab X-axis label for the plot. Defaults to the date variable name.
#' @param line_colour String to specify colour to use for the line.
#' Hex codes are accepted. You can also supply
#' RGB values via `rgb2hex()`.
#'
#' @import ggplot2
#' @import dplyr
#'
#' @family Visualization
#' @family Flexible
#' @family Time-series
#'
#' @return
#' Returns a 'ggplot' object representing a line plot.
#'
#' @examples
#' library(dplyr)
#'
#' # Median `Emails_sent` grouped by `MetricDate`
#' # Without Person Averaging
#' med_df <-
#' pq_data %>%
#' group_by(MetricDate) %>%
#' summarise(Emails_sent_median = median(Emails_sent))
#'
#' med_df %>%
#' create_line_asis(
#' date_var = "MetricDate",
#' metric = "Emails_sent_median",
#' title = "Median Emails Sent",
#' subtitle = "Person Averaging Not Applied",
#' caption = extract_date_range(pq_data, return = "text")
#' )
#'
#' @export
create_line_asis <- function(data,
date_var = "MetricDate",
metric,
title = NULL,
subtitle = NULL,
caption = NULL,
                             ylab = metric,
                             xlab = date_var,
line_colour = rgb2hex(0, 120, 212)){
returnPlot <-
data %>%
mutate_at(vars(date_var), ~as.Date(., format = "%m/%d/%Y")) %>%
ggplot(aes(x = !!sym(date_var), y = !!sym(metric))) +
geom_line(colour = line_colour)
returnPlot +
labs(title = title,
subtitle = subtitle,
caption = caption,
         y = ylab,
         x = xlab) +
theme_wpa_basic()
}
# Source: R/create_line_asis.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Period comparison scatter plot for any two metrics
#'
#' @description
#' Returns two side-by-side scatter plots representing two selected metrics,
#' using colour to map an HR attribute and size to represent number of employees.
#' Returns a faceted scatter plot by default, with additional options
#' to return a summary table.
#'
#' @details
#' This is a general purpose function that powers all the functions
#' in the package that produce faceted scatter plots.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param hrvar HR Variable by which to split metrics. Accepts a character vector,
#' defaults to "Organization" but accepts any character vector, e.g. "LevelDesignation"
#' @param metric_x Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param metric_y Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param before_start Start date of "before" time period in YYYY-MM-DD
#' @param before_end End date of "before" time period in YYYY-MM-DD
#' @param after_start Start date of "after" time period in YYYY-MM-DD
#' @param after_end End date of "after" time period in YYYY-MM-DD
#' @param before_label String to specify a label for the "before" period. Defaults to "Period 1".
#' @param after_label String to specify a label for the "after" period. Defaults to "Period 2".
#' @param mingroup Numeric value setting the privacy threshold / minimum group size.
#' Defaults to 5.
#' @param return Character vector specifying what to return, defaults to "plot".
#' Valid inputs are "plot" and "table".
#'
#' @import dplyr
#' @import ggplot2
#'
#' @family Visualization
#' @family Flexible
#' @family Time-series
#'
#' @return
#' Returns a 'ggplot' object showing two scatter plots side by side representing
#' the two periods.
#'
#' @examples
#' # Return plot
#' create_period_scatter(pq_data,
#' hrvar = "LevelDesignation",
#' before_start = "2022-05-01",
#' before_end = "2022-05-31",
#' after_start = "2022-06-01",
#' after_end = "2022-07-03")
#'
#' # Return a summary table
#' create_period_scatter(pq_data, before_end = "2022-05-31", return = "table")
#'
#'
#' @export
create_period_scatter <- function(data,
hrvar = "Organization",
metric_x = "Large_and_long_meeting_hours",
metric_y = "Meeting_hours",
before_start = min(as.Date(data$MetricDate, "%m/%d/%Y")),
before_end,
after_start = as.Date(before_end) + 1,
after_end = max(as.Date(data$MetricDate, "%m/%d/%Y")),
before_label = "Period 1",
after_label = "Period 2",
mingroup = 5,
return = "plot"){
## Check inputs
## Update these column names as per appropriate
required_variables <- c("MetricDate",
hrvar,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
daterange_1_start <- as.Date(before_start)
daterange_1_end <- as.Date(before_end)
daterange_2_start <- as.Date(after_start)
daterange_2_end <- as.Date(after_end)
# Fix dates format for queries
WpA_dataset <- data %>% mutate(Date = as.Date(MetricDate, "%m/%d/%Y"))
# Check for dates in data file
if (daterange_1_start < min(WpA_dataset$Date) |
daterange_1_start > max(WpA_dataset$Date) |
daterange_1_end < min(WpA_dataset$Date) |
daterange_1_end > max(WpA_dataset$Date) |
daterange_2_start < min(WpA_dataset$Date) |
daterange_2_start > max(WpA_dataset$Date) |
daterange_2_end < min(WpA_dataset$Date) |
daterange_2_end > max(WpA_dataset$Date)) {
    stop('Dates not found in dataset')
}
## Employee count
emp_count <-
WpA_dataset %>%
group_by(!!sym(hrvar)) %>%
summarise(n = n_distinct(PersonId))
data_p1 <-
WpA_dataset %>%
    rename(group = !!sym(hrvar)) %>%
filter(between(Date, daterange_1_start, daterange_1_end)) %>%
group_by(PersonId, group) %>%
summarise_at(vars(!!sym(metric_x), !!sym(metric_y)), ~mean(.)) %>%
ungroup() %>%
group_by(group) %>%
summarise_at(vars(!!sym(metric_x), !!sym(metric_y)), ~mean(., na.rm = TRUE)) %>%
mutate(Period = before_label) %>%
left_join(emp_count, by = c(group = hrvar)) %>%
filter(n >= mingroup)
data_p2 <-
WpA_dataset %>%
    rename(group = !!sym(hrvar)) %>%
filter(between(Date, daterange_2_start, daterange_2_end)) %>%
group_by(PersonId, group) %>%
summarise_at(vars(!!sym(metric_x), !!sym(metric_y)), ~mean(.)) %>%
ungroup() %>%
group_by(group) %>%
summarise_at(vars(!!sym(metric_x), !!sym(metric_y)), ~mean(., na.rm = TRUE)) %>%
mutate(Period = after_label) %>%
left_join(emp_count, by = c(group = hrvar)) %>%
filter(n >= mingroup)
## bind data
data_both <- rbind(data_p1, data_p2)
date_range_str <-
paste("Data from",
daterange_1_start,
"to",
daterange_1_end,
"and",
daterange_2_start,
"to",
daterange_2_end)
clean_x <- us_to_space(metric_x)
clean_y <- us_to_space(metric_y)
plot_title <-
paste(clean_x, "and", clean_y)
plot_object <-
data_both %>%
ggplot(aes(x = !!sym(metric_x),
y = !!sym(metric_y),
colour = group,
size = n)) +
geom_point(alpha = 0.5) +
scale_size(range = c(1, 20)) +
facet_wrap(.~Period) +
guides(size = "none") +
theme_wpa_basic() +
theme(legend.position = "bottom",
strip.background = element_rect(color = "#1d627e",
fill = "#1d627e"),
strip.text = element_text(size = 10,
colour = "#FFFFFF",
face = "bold")) +
ggtitle(plot_title,
subtitle = paste("Comparison of weekly averages by ", tolower(camel_clean(hrvar)))) +
ylab(clean_y) +
xlab(clean_x) +
labs(caption = date_range_str)
if(return == "table"){
# return(myTable_return)
return(data_both)
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source: R/create_period_scatter.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title
#' Rank all groups across HR attributes on a selected Viva Insights metric
#'
#' @description
#' This function scans a standard Person query output for groups with high
#' levels of a given Viva Insights Metric. Returns a plot by default, with an
#' option to return a table with all groups (across multiple HR attributes)
#' ranked by the specified metric.
#'
#' @author Carlos Morales Torrado <carlos.morales@@microsoft.com>
#' @author Martin Chan <martin.chan@@microsoft.com>
#'
#' @template spq-params
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"` (default)
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @param mode String to specify calculation mode. Must be either:
#' - `"simple"`
#' - `"combine"`
#'
#' @param plot_mode Numeric vector to determine which plot mode to return. Must
#' be either `1` or `2`, and is only used when `return = "plot"`.
#' - `1`: Top and bottom five groups across the data population are highlighted
#' - `2`: Top and bottom groups _per_ organizational attribute are highlighted
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats reorder
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' # Use a small sample for faster runtime
#' pq_data_small <- dplyr::slice_sample(pq_data, prop = 0.1)
#'
#' # Plot mode 1 - show top and bottom five groups
#' create_rank(
#' data = pq_data_small,
#' hrvar = c("FunctionType", "LevelDesignation"),
#' metric = "Emails_sent",
#' return = "plot",
#' plot_mode = 1
#' )
#'
#' # Plot mode 2 - show top and bottom groups per HR variable
#' create_rank(
#' data = pq_data_small,
#' hrvar = c("FunctionType", "LevelDesignation"),
#' metric = "Emails_sent",
#' return = "plot",
#' plot_mode = 2
#' )
#'
#' # Return a table
#' create_rank(
#' data = pq_data_small,
#' metric = "Emails_sent",
#' return = "table"
#' )
#'
#' \donttest{
#' # Return a table - combination mode
#' create_rank(
#' data = pq_data_small,
#' metric = "Emails_sent",
#' mode = "combine",
#' return = "table"
#' )
#' }
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: 'ggplot' object. A bubble plot where the x-axis represents the
#' metric, the y-axis represents the HR attributes, and the size of the
#' bubbles represent the size of the organizations. Note that there is no
#' plot output if `mode` is set to `"combine"`.
#' - `"table"`: data frame. A summary table for the metric.
#'
#' @export
create_rank <- function(data,
metric,
hrvar = extract_hr(data, exclude_constants = TRUE),
mingroup = 5,
return = "table",
mode = "simple",
plot_mode = 1){
if(mode == "simple"){
results <-
create_bar(data,
metric = metric,
hrvar = hrvar[1],
mingroup = mingroup,
return = "table")
## Create a blank column
results$hrvar <- ""
## Empty table
results <- results[0,]
## Loop through each HR attribute supplied in argument
for (p in hrvar) {
table1 <-
data %>%
create_bar(metric = metric,
hrvar = p,
mingroup = mingroup,
return = "table")
table1$hrvar <- p
results <- rbind(results,table1)
}
output <-
results %>%
arrange(desc(get(metric))) %>%
select(hrvar, everything()) %>%
mutate(group = as.character(group)) # text fails when not string
if(return == "table"){
output
} else if(return == "plot"){
# Company average
avg_ch <-
data %>%
create_bar(hrvar = NULL, metric = metric, return = "table") %>%
pull(metric)
if(plot_mode == 1){
# Main plot
output %>%
mutate(Rank = rev(rank(!!sym(metric), ties.method = "max"))) %>%
mutate(Group =
case_when(Rank %in% 1:5 ~ "Top 5",
                         Rank %in% (nrow(.) - 4):nrow(.) ~ "Bottom 5",
TRUE ~ "Middle")) %>%
group_by(hrvar) %>%
mutate(OrgGroup =
case_when(!!sym(metric) == max(!!sym(metric), na.rm = TRUE) ~ "Top",
!!sym(metric) == min(!!sym(metric), na.rm = TRUE) ~ "Bottom",
TRUE ~ "Middle")) %>%
mutate(top_group = max(!!sym(metric), na.rm = TRUE)) %>%
ungroup() %>%
ggplot(aes(x = !!sym(metric),
y = reorder(hrvar, top_group))) + # Sort by top group
geom_point(aes(fill = Group,
size = n),
colour = "black",
pch = 21,
alpha = 0.8) +
labs(title = us_to_space(metric),
subtitle = "Lowest and highest group averages, by org. attribute",
y = "",
x = "") +
ggrepel::geom_text_repel(
aes(x = !!sym(metric),
y = hrvar,
label = ifelse(Group %in% c("Top 5", "Bottom 5"), group, "")),
size = 3) +
scale_x_continuous(position = "top") +
scale_fill_manual(name = "Group",
values = c(rgb2hex(68,151,169),
"white",
"#FE7F4F"),
guide = "legend") +
theme_wpa_basic() +
scale_size(guide = "none", range = c(1, 15)) +
theme(
axis.line=element_blank(),
panel.grid.major.x = element_blank(),
          panel.grid.major.y = element_line(colour = "#D9E7F7", linewidth = 3), # light blue bar
panel.grid.minor.x = element_line(color="gray"),
strip.placement = "outside",
strip.background = element_blank(),
strip.text = element_blank()
) +
geom_vline(xintercept = avg_ch, colour = "red")
} else if(plot_mode == 2){
output %>%
group_by(hrvar) %>%
mutate(OrgGroup =
case_when(!!sym(metric) == max(!!sym(metric), na.rm = TRUE) ~ "Top",
!!sym(metric) == min(!!sym(metric), na.rm = TRUE) ~ "Bottom",
TRUE ~ "Middle")) %>%
mutate(top_group = max(!!sym(metric), na.rm = TRUE)) %>%
ungroup() %>%
ggplot(aes(x = !!sym(metric),
y = reorder(hrvar, top_group))) + # Sort by top group
geom_point(aes(fill = OrgGroup,
size = n),
colour = "black",
pch = 21,
alpha = 0.8) +
labs(title = us_to_space(metric),
subtitle = "Group averages by organizational attribute",
y = "Organizational attributes",
x = us_to_space(metric)) +
ggrepel::geom_text_repel(aes(x = !!sym(metric),
y = hrvar,
label = ifelse(OrgGroup %in% c("Top", "Bottom"), group, "")),
size = 3) +
scale_x_continuous(position = "top") +
scale_fill_manual(name = "Group",
values = c(rgb2hex(68,151,169),
"white",
"#FE7F4F"),
guide = "legend") +
theme_wpa_basic() +
scale_size(guide = "none", range = c(1, 8)) +
theme(
panel.grid.major.x = element_blank(),
          panel.grid.major.y = element_line(colour = "#D9E7F7", linewidth = 3), # light blue bar
strip.placement = "outside",
strip.background = element_blank(),
strip.text = element_blank()
) +
geom_vline(xintercept = avg_ch, colour = "red")
} else {
stop("Invalid plot_mode argument.")
}
} else {
stop("Invalid `return` argument.")
}
} else if(mode == "combine"){
create_rank_combine(
data = data,
hrvar = hrvar,
metric = metric,
mingroup = mingroup
)
} else {
stop("Invalid `mode` argument.")
}
}
#' @title Create combination pairs of HR variables and run 'create_rank()'
#'
#' @description Create pairwise combinations of HR variables and compute an
#' average of a specified advanced insights metric.
#'
#' @details
#' This function is called when the `mode` argument in `create_rank()` is
#' specified as `"combine"`.
#'
#' @inheritParams create_rank
#'
#' @examples
#' # Use a small sample for faster runtime
#' pq_data_small <- dplyr::slice_sample(pq_data, prop = 0.1)
#'
#' create_rank_combine(
#' data = pq_data_small,
#' metric = "Email_hours",
#' hrvar = c("Organization", "FunctionType", "LevelDesignation")
#' )
#'
#' @return Data frame containing the following variables:
#' - `hrvar`: placeholder column that denotes the output as `"Combined"`.
#' - `group`: pairwise combinations of HR attributes with the HR attribute
#' in square brackets followed by the value of the HR attribute.
#' - Name of the metric (as passed to `metric`)
#' - `n`
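#'
#'   For example, a `group` value might read (with hypothetical attribute
#'   values): `"[Organization] Sales [LevelDesignation] Manager"`.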
#'
#' @export
create_rank_combine <- function(data,
hrvar = extract_hr(data),
metric,
mingroup = 5){
hrvar_iter_grid <-
tidyr::expand_grid(var1 = hrvar,
var2 = hrvar) %>%
dplyr::filter(var1 != var2)
hrvar_iter_grid %>%
purrr::pmap(function(var1, var2){
data %>%
dplyr::mutate(Combined =
paste0(
"[",var1, "] ",
!!sym(var1),
" [",var2, "] ",
!!sym(var2))) %>%
create_rank(
metric = metric,
hrvar = "Combined",
mode = "simple",
mingroup = mingroup
)
}) %>%
dplyr::bind_rows()
}
# Source: R/create_rank.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a sankey chart from a two-column count table
#'
#' @description
#' Create a 'networkD3' style sankey chart based on a long count table
#' with two variables. The input data should have three columns, where
#' each row is a unique group:
#' 1. Variable 1
#' 2. Variable 2
#' 3. Count
#'
#' @param data Data frame of the long count table.
#' @param var1 String containing the name of the variable to be shown on the
#' left.
#' @param var2 String containing the name of the variable to be shown on the
#' right.
#' @param count String containing the name of the count variable.
#'
#' @import dplyr
#'
#' @return A 'sankeyNetwork' and 'htmlwidget' object containing a two-tier
#' sankey plot. The output can be saved locally with
#' `htmlwidgets::saveWidget()`.
#'
#' @examples
#' \donttest{
#' pq_data %>%
#' dplyr::count(Organization, FunctionType) %>%
#' create_sankey(var1 = "Organization", var2 = "FunctionType")
#' }
#'
#' @family Visualization
#' @family Flexible
#'
#' @export
create_sankey <- function(data, var1, var2, count = "n"){
## Rename
data$pre_group <- data[[var1]]
data$group <- data[[var2]]
## Set up `nodes`
group_source <- unique(data$pre_group)
group_target <- paste0(unique(data$group), " ")
groups <- c(group_source, group_target)
nodes_source <- tibble(name = group_source)
nodes_target <- tibble(name = group_target)
nodes <- rbind(nodes_source, nodes_target) %>% mutate(node = 0:(nrow(.) - 1))
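  ## networkD3's sankeyNetwork() expects zero-based node indices (a JavaScript
  ## convention), hence 0:(nrow(.) - 1)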
## Set up `links`
links <-
data %>%
mutate(group = paste0(group, " ")) %>%
select(source = "pre_group",
target = "group",
value = count)
nodes_source <- nodes_source %>% select(name) # Make `nodes` a single column data frame
nodes_target <- nodes_target %>% select(name) # Make `nodes` a single column data frame
links <-
links %>%
left_join(nodes %>% rename(IDsource = "node"), by = c("source" = "name")) %>%
left_join(nodes %>% rename(IDtarget = "node"), by = c("target" = "name"))
networkD3::sankeyNetwork(Links = as.data.frame(links),
Nodes = as.data.frame(nodes),
Source = 'IDsource', # Change reference to IDsource
Target = 'IDtarget', # Change reference to IDtarget
Value = 'value',
NodeID = 'name',
units="count",
sinksRight = FALSE)
}
# Source: R/create_sankey.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title
#' Create a Scatter plot with two selected Viva Insights metrics (General Purpose)
#'
#' @description
#' Returns a scatter plot of two selected metrics, using colour to map
#' an HR attribute.
#' Returns a scatter plot by default, with additional options
#' to return a summary table.
#'
#' @details
#' This is a general purpose function that powers all the functions
#' in the package that produce scatter plots.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param metric_x Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param metric_y Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param hrvar HR Variable by which to split metrics, defaults to "Organization"
#' but accepts any character vector, e.g. "LevelDesignation"
#' @param mingroup Numeric value setting the privacy threshold / minimum group size. Defaults to 5.
#' @param return Character vector specifying what to return, defaults to "plot".
#' Valid inputs are "plot" and "table".
#'
#' @import dplyr
#' @import ggplot2
#' @import scales
#'
#' @family Visualization
#' @family Flexible
#'
#' @examples
#' create_scatter(
#' pq_data,
#' metric_x = "Collaboration_hours",
#' metric_y = "Multitasking_hours",
#' hrvar = "Organization"
#' )
#'
#' create_scatter(
#' pq_data,
#' metric_x = "Collaboration_hours",
#' metric_y = "Multitasking_hours",
#' hrvar = "Organization",
#' mingroup = 100,
#' return = "plot"
#' )
#'
#' @return
#' Returns a 'ggplot' object by default, where 'plot' is passed in `return`.
#' When 'table' is passed, a summary table is returned as a data frame.
#'
#' @export
create_scatter <- function(data,
metric_x,
metric_y,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
## Check inputs
required_variables <- c(hrvar,
metric_x,
metric_y,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Extract values violating privacy threshold
violate_thres_chr <-
data %>%
group_by(!!sym(hrvar)) %>%
summarise(n = n_distinct(PersonId)) %>%
filter(n < mingroup) %>%
pull(!!sym(hrvar))
## Clean metric names
clean_x <- us_to_space(metric_x)
clean_y <- us_to_space(metric_y)
myTable <-
data %>%
filter(!(!!sym(hrvar) %in% violate_thres_chr)) %>%
group_by(PersonId, !!sym(hrvar)) %>%
summarise_at(vars(!!sym(metric_x),
!!sym(metric_y)),
~mean(.)) %>%
ungroup()
plot_object <-
myTable %>%
ggplot(aes(x = !!sym(metric_x),
y = !!sym(metric_y),
colour = !!sym(hrvar))) +
geom_point(alpha = 0.5) +
labs(title = paste0(clean_x, " and ", clean_y),
subtitle = paste("Distribution of employees by", tolower(camel_clean(hrvar))),
caption = extract_date_range(data, return = "text")) +
xlab(clean_x) +
ylab(clean_y) +
theme_wpa_basic()
myTable_return <-
myTable %>%
group_by(!!sym(hrvar)) %>%
summarise_at(vars(!!sym(metric_x),
!!sym(metric_y)),
~mean(.))
if(return == "table"){
return(myTable_return)
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source: R/create_scatter.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Horizontal stacked bar plot for any metric
#'
#' @description
#' Creates either a single bar plot, or a stacked bar plot using selected
#' metrics (where the typical use case is to create different definitions of
#' collaboration hours).
#' Returns a plot by default.
#' Additional options available to return a summary table.
#'
#' @template spq-params
#' @param metrics A character vector to specify variables to be used
#' in calculating the "Total" value, e.g. c("Meeting_hours", "Email_hours").
#' The order of the variable names supplied determines the order in which they
#' appear on the stacked plot.
#' @param return Character vector specifying what to return, defaults to "plot".
#' Valid inputs are "plot" and "table".
#' @param stack_colours
#' A character vector to specify the colour codes for the stacked bar charts.
#' @param percent Logical value to determine whether to show labels as
#' percentage signs. Defaults to `FALSE`.
#' @param plot_title String. Option to override plot title.
#' @param plot_subtitle String. Option to override plot subtitle.
#' @param legend_lab String. Option to override legend title/label. Defaults to
#' `NULL`, where the metric name will be populated instead.
#' @param rank String specifying how to rank the bars. Valid inputs are:
#' - `"descending"` - ranked highest to lowest from top to bottom (default).
#' - `"ascending"` - ranked lowest to highest from top to bottom.
#' - `NULL` - uses the original levels of the HR attribute.
#' @param xlim An option to set max value in x axis.
#' @param text_just `r lifecycle::badge('experimental')` A numeric value
#' controlling for the horizontal position of the text labels. Defaults to
#' 0.5.
#' @param text_colour `r lifecycle::badge('experimental')` String to specify
#' colour to use for the text labels. Defaults to `"#FFFFFF"`.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats reorder
#'
#' @family Visualization
#' @family Flexible
#'
#' @return
#' Returns a 'ggplot' object by default, where 'plot' is passed in `return`.
#' When 'table' is passed, a summary table is returned as a data frame.
#'
#' @examples
#' pq_data %>%
#' create_stacked(hrvar = "LevelDesignation",
#' metrics = c("Meeting_hours", "Email_hours"),
#' return = "plot")
#'
#' pq_data %>%
#' create_stacked(hrvar = "FunctionType",
#' metrics = c("Meeting_hours",
#' "Email_hours",
#' "Call_hours",
#' "Chat_hours"),
#' return = "plot",
#' rank = "ascending")
#'
#' pq_data %>%
#' create_stacked(hrvar = "FunctionType",
#' metrics = c("Meeting_hours",
#' "Email_hours",
#' "Call_hours",
#' "Chat_hours"),
#' return = "table")
#'
#' @export
create_stacked <- function(data,
hrvar = "Organization",
metrics = c("Meeting_hours",
"Email_hours"),
mingroup = 5,
return = "plot",
stack_colours = c("#1d627e",
"#34b1e2",
"#b4d5dd",
"#adc0cb"),
percent = FALSE,
plot_title = "Collaboration Hours",
plot_subtitle = paste("Average by", tolower(camel_clean(hrvar))),
legend_lab = NULL,
rank = "descending",
xlim = NULL,
text_just = 0.5,
text_colour = "#FFFFFF"
){
## Check inputs
required_variables <- c("MetricDate",
metrics,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Handle `legend_lab`
if(is.null(legend_lab)){
legend_lab <- gsub("_", " ", metrics)
}
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
n_count <-
data %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
group_by(group) %>%
summarise(Employee_Count = n_distinct(PersonId))
## Person level table
myTable <-
data %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
    select(PersonId, group, all_of(metrics)) %>%
    group_by(PersonId, group) %>%
    summarise_at(vars(all_of(metrics)), ~mean(.)) %>%
    ungroup() %>%
    mutate(Total = select(., all_of(metrics)) %>% apply(1, sum)) %>% # row-wise total
    left_join(n_count, by = "group") %>%
    # Keep only groups above privacy threshold
    filter(Employee_Count >= mingroup)
  myTableReturn <-
    myTable %>%
    group_by(group) %>%
    summarise_at(vars(all_of(metrics), Total), ~mean(.)) %>%
    left_join(n_count, by = "group")
  plot_table <-
    myTable %>%
    select(PersonId, group, all_of(metrics), Total) %>%
    tidyr::gather(Metric, Value, -PersonId, -group)
totalTable <-
plot_table %>%
filter(Metric == "Total") %>%
group_by(group) %>%
summarise(Total = mean(Value))
myTable_legends <-
n_count %>%
filter(Employee_Count >= mingroup) %>%
mutate(Employee_Count = paste("n=",Employee_Count)) %>%
left_join(totalTable, by = "group")
## Get maximum value
  if (is.null(xlim)) {
    location <- max(myTable_legends$Total)
  } else if (is.numeric(xlim)) {
    location <- xlim
  } else {
    stop("Invalid input for `xlim`.")
  }
## Remove max from axis labels ------------------------------------------
  max_blank <- function(x){
    as.character(
      c(
        x[-length(x)],
        ""
      )
    )
  }
  ## Remove max from axis labels, but with percentages ---------------------
  max_blank_percent <- function(x){
    x <- scales::percent(x)
    as.character(
      c(
        x[-length(x)],
        ""
      )
    )
  }
invert_mean <- function(x){
mean(x) * -1
}
## Create plot -----------------------------------------------------------
plot_object <-
plot_table %>%
filter(Metric != "Total") %>%
mutate(Metric = factor(Metric, levels = rev(metrics))) %>%
group_by(group, Metric) %>%
summarise_at(vars(Value), ~mean(.)) %>%
# Conditional ranking based on `rank` argument
{ if(is.null(rank)){
ggplot(., aes(x = group, y = Value, fill = Metric))
} else if(rank == "descending"){
ggplot(., aes(x = stats::reorder(group, Value, mean), y = Value, fill = Metric))
} else if(rank == "ascending"){
ggplot(., aes(x = stats::reorder(group, Value, invert_mean), y = Value, fill = Metric))
} else {
stop("Invalid return to `rank`")
}
} +
geom_bar(position = "stack", stat = "identity") +
{ if(percent == FALSE){
geom_text(aes(label = round(Value, 1)),
position = position_stack(vjust = text_just),
color = text_colour,
fontface = "bold")
} else if(percent == TRUE){
geom_text(aes(label = scales::percent(Value, accuracy = 0.1)),
position = position_stack(vjust = text_just),
color = text_colour,
fontface = "bold")
}
} +
{ if(percent == FALSE){
scale_y_continuous(expand = c(.01, 0),
limits = c(0, location * 1.3),
labels = max_blank,
position = "right")
} else if(percent == TRUE){
scale_y_continuous(expand = c(.01, 0),
limits = c(0, location * 1.3),
labels = max_blank_percent,
position = "right")
}
} +
annotate("text",
x = myTable_legends$group,
y = location * 1.15,
label = myTable_legends$Employee_Count,
size = 3) +
annotate("rect",
xmin = 0.5,
xmax = length(myTable_legends$group) + 0.5,
ymin = location * 1.05,
ymax = location * 1.25,
alpha = .2) +
annotate(x=length(myTable_legends$group) + 0.8,
xend=length(myTable_legends$group) + 0.8,
y = 0,
yend = location* 1.04,
colour = "black",
lwd = 0.75,
geom = "segment") +
scale_fill_manual(name="",
values = stack_colours,
breaks = metrics,
labels = legend_lab) +
coord_flip() +
theme_wpa_basic() +
theme(axis.line = element_blank(),
axis.ticks = element_blank(),
axis.title = element_blank()) +
labs(title = plot_title,
subtitle = plot_subtitle,
x = hrvar,
y = "Average weekly hours",
caption = extract_date_range(data, return = "text"))
# Return options ---------------------------------------------------------
if(return == "table"){
myTableReturn
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
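# The `%>% { if (...) ... }` idiom above pipes the summarised data into a
# braced block, where `.` refers to the piped data frame and the `if` chain
# picks the ggplot call at run time. A minimal, hypothetical sketch (toy
# data; assumes dplyr and ggplot2 are loaded):
#
# demo_df <- data.frame(group = c("A", "B"), Value = c(1, 2))
# demo_df %>%
#   { if (TRUE) { # e.g. rank == "descending"
#       ggplot(., aes(x = stats::reorder(group, Value, mean), y = Value))
#     } else {
#       ggplot(., aes(x = group, y = Value))
#     }
#   } +
#   geom_col()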
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/create_stacked.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a line chart that tracks metrics over time with a 4-week
#' rolling average
#'
#' @description
#' `r lifecycle::badge('experimental')`
#'
#' Create a two-series line chart that visualizes a metric over time for
#' the selected population, with one of the series being a four-week rolling
#' average.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param metric Character string containing the name of the metric,
#'   e.g. "Collaboration_hours".
#' @param plot_title An option to override plot title.
#' @param plot_subtitle An option to override plot subtitle.
#' @param percent Logical value to determine whether to show labels as
#' percentage signs. Defaults to `FALSE`.
#'
#' @examples
#' pq_data %>%
#' create_tracking(
#' metric = "Collaboration_hours",
#' percent = FALSE
#' )
#'
#' @family Visualization
#' @family Flexible
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#'
#' @return
#' A 'ggplot' object: a time-series plot for the metric, showing the weekly
#' average alongside its four-week rolling average.
#'
#' @export
create_tracking <- function(data,
metric,
plot_title = us_to_space(metric),
plot_subtitle = "Measure over time",
percent = FALSE){
data$Date <- as.Date(data$MetricDate, "%m/%d/%Y")
min_date <- data %>% extract_date_range() %>% pull(Start)
max_date <- data %>% extract_date_range() %>% pull(End)
# Set variables
metrics <- NULL
`4 week rolling average` <- NULL
`Weekly average` <- NULL
data %>%
group_by(MetricDate) %>%
summarise(across(.cols = metric,
.fns = ~mean(., na.rm = TRUE)),
.groups = "drop") %>%
mutate(
lag0 = lag(!!sym(metric), 0),
lag1 = lag(!!sym(metric), 1),
lag2 = lag(!!sym(metric), 2),
lag3 = lag(!!sym(metric), 3)
) %>%
mutate(`4 week rolling average` = select(., paste0("lag", 0:3)) %>%
apply(1, function(x) mean(x, na.rm = TRUE))) %>% # Use all available data
select(-paste0("lag", 0:3)) %>%
rename(`Weekly average` = metric) %>%
tidyr::pivot_longer(cols = c(`Weekly average`, `4 week rolling average`),
names_to = "metrics",
values_to = "value") %>%
tidyr::drop_na(value) %>%
ggplot(aes(x = MetricDate,
y = value,
colour = metrics)) +
geom_line(size = 1) +
scale_colour_manual(
values = c(
"Weekly average" = rgb2hex(67, 189, 211),
"4 week rolling average" = rgb2hex(0, 82, 101)),
labels = us_to_space,
guide = guide_legend(reverse = TRUE)
) +
{ if(percent == FALSE){
scale_y_continuous(
limits = c(0, NA)
)
} else if(percent == TRUE){
scale_y_continuous(
limits = c(0, 1),
labels = scales::percent
)
}} +
scale_x_date(position = "top",
limits = c(min_date, max_date),
date_breaks = "2 weeks") +
theme_wpa_basic() +
theme(axis.line = element_blank(),
axis.ticks = element_blank(),
axis.title = element_blank(),
panel.grid.major.x = element_line(color="gray"),
panel.grid.major.y = element_line(colour = "#D9E7F7", size = 5)) +
labs(
title = plot_title,
subtitle = plot_subtitle,
caption = extract_date_range(data, return = "text")
)
}
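# A minimal sketch of the rolling-average step above, on toy data (assumes
# dplyr is loaded; `x` stands in for the selected metric). Each week is
# averaged with up to three preceding weeks, using all available data:
#
# toy <- dplyr::tibble(x = c(10, 12, 11, 15, 14))
# toy %>%
#   dplyr::mutate(lag0 = dplyr::lag(x, 0),
#                 lag1 = dplyr::lag(x, 1),
#                 lag2 = dplyr::lag(x, 2),
#                 lag3 = dplyr::lag(x, 3)) %>%
#   dplyr::mutate(roll4 = dplyr::select(., lag0:lag3) %>%
#                   apply(1, function(v) mean(v, na.rm = TRUE)))
# # Row 1 gives roll4 = 10 (only one week available); row 5 gives
# # roll4 = mean(c(14, 15, 11, 12)) = 13.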
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/create_tracking.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Heat mapped horizontal bar plot over time for any metric
#'
#' @description
#' Provides a week by week view of a selected Viva Insights metric. By
#' default returns a week by week heatmap bar plot, highlighting the points in
#' time with most activity. Additional options available to return a summary
#' table.
#'
#' @template spq-params
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#' @param palette Character vector containing colour codes, ranked from the
#' lowest value to the highest value. This is passed directly to
#' `ggplot2::scale_fill_gradientn()`.
#' @param return Character vector specifying what to return. Defaults to
#'   `"plot"`. Valid inputs are `"plot"` and `"table"`.
#' @param legend_title String to be used as the title of the legend. Defaults to
#' `"Hours"`.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#'
#' @family Visualization
#' @family Flexible
#' @family Time-series
#'
#' @examples
#' create_trend(pq_data, metric = "Collaboration_hours", hrvar = "LevelDesignation")
#'
#' # custom colours
#' create_trend(
#' pq_data,
#' metric = "Collaboration_hours",
#' hrvar = "LevelDesignation",
#' palette = c(
#' "#FB6107",
#' "#F3DE2C",
#' "#7CB518",
#' "#5C8001"
#' )
#' )
#'
#' @return
#' Returns a 'ggplot' object by default, where 'plot' is passed in `return`.
#' When 'table' is passed, a summary table is returned as a data frame.
#'
#' @export
create_trend <- function(data,
metric,
hrvar = "Organization",
mingroup = 5,
palette = c("steelblue4",
"aliceblue",
"white",
"mistyrose1",
"tomato1"),
return = "plot",
legend_title = "Hours"){
## Check inputs
required_variables <- c("MetricDate",
metric,
"PersonId")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Clean metric name
clean_nm <- us_to_space(metric)
myTable <-
data %>%
mutate(MetricDate = as.Date(MetricDate, "%m/%d/%Y")) %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
select(PersonId, MetricDate, group, !!sym(metric)) %>%
group_by(group) %>%
mutate(Employee_Count = n_distinct(PersonId)) %>%
filter(Employee_Count >= mingroup) # Keep only groups above privacy threshold
myTable <-
myTable %>%
group_by(MetricDate, group) %>%
summarize(Employee_Count = mean(Employee_Count, na.rm = TRUE),
!!sym(metric) := mean(!!sym(metric), na.rm = TRUE))
myTable_plot <- myTable %>% select(MetricDate, group, !!sym(metric))
myTable_return <- myTable_plot %>% tidyr::spread(MetricDate, !!sym(metric))
plot_object <-
myTable_plot %>%
ggplot(aes(x = MetricDate , y = group , fill = !!sym(metric))) +
geom_tile(height=.5) +
scale_x_date(position = "top") +
scale_fill_gradientn(name = legend_title,
colours = palette) +
theme_wpa_basic() +
theme(axis.line.y = element_blank(), axis.title.y = element_blank()) +
labs(title = clean_nm,
subtitle = paste("Hotspots by", tolower(camel_clean(hrvar)))) +
xlab("Date") +
ylab(hrvar) +
labs(caption = extract_date_range(data, return = "text"))
if(return == "table"){
myTable_return
} else if(return == "plot"){
plot_object
} else {
stop("Please enter a valid input for `return`.")
}
}
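# A minimal sketch of the `return = "table"` reshape above (assumes dplyr
# and tidyr are loaded; toy values are illustrative): the long date-by-group
# averages are spread into one column per week.
#
# toy <- dplyr::tibble(
#   MetricDate = as.Date(c("2024-01-07", "2024-01-07",
#                          "2024-01-14", "2024-01-14")),
#   group = c("Engineering", "Sales", "Engineering", "Sales"),
#   Collaboration_hours = c(25, 20, 24, 22)
# )
# toy %>% tidyr::spread(MetricDate, Collaboration_hours)
# # One row per group, with a `2024-01-07` and a `2024-01-14` column.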
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/create_trend.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Convert a numeric variable for hours into categorical
#'
#' @description
#' Supply a numeric variable, e.g. `Collaboration_hours`, and return a character
#' vector.
#'
#' @details
#' This is used within `create_dist()` for numeric to categorical conversion.
#'
#' @param metric A numeric variable representing hours.
#' @param cuts A numeric vector of minimum length 2 to represent the
#' cut points required. The minimum and maximum values provided in the vector
#' are inclusive.
#' @param unit String to specify the unit of the labels. Defaults to "hours".
#' @param lbound Numeric. Specifies the lower bound (inclusive) value for the
#' minimum label. Defaults to 0.
#' @param ubound Numeric. Specifies the upper bound (inclusive) value for the
#' maximum label. Defaults to 100.
#'
#' @family Support
#'
#' @return
#' Character vector representing a converted categorical variable, appended
#' with the label of the unit. See `examples` for more information.
#'
#' @examples
#' # Direct use
#' cut_hour(1:30, cuts = c(15, 20, 25))
#'
#' # Use on a query
#' cut_hour(pq_data$Collaboration_hours, cuts = c(10, 15, 20), ubound = 150)
#'
#' @export
cut_hour <- function(metric,
cuts,
unit = "hours",
lbound = 0,
ubound = 100){
cuts <- unique(cuts) # No duplicates allowed
ncuts <- length(cuts)
if(ncuts < 2){
stop("Please provide a numeric vector of at least length 2 to `cuts`")
}
# Extract min, max, and middle values
mincut <- min(cuts, na.rm = TRUE)
maxcut <- max(cuts, na.rm = TRUE)
midcut <- cuts[!cuts %in% mincut] # Excludes mincut only
midcut_min_1 <- cuts[match(midcut, cuts) - 1] # one value smaller
mincut_2 <- midcut_min_1[[1]] # second smallest cut
# Min and max values of `metric`
minval <- min(metric, na.rm = TRUE)
maxval <- max(metric, na.rm = TRUE)
# Warn if smaller lbound or larger ubound
if(minval < lbound){
warning("`lbound` does not capture the smallest value in `metric`. ",
"Values smaller than `lbound` will be classified as NA. ",
"Adjusting `lbound` is recommended.")
}
if(maxval > ubound){
warning("`ubound` does not capture the largest value in `metric`. ",
"Values larger than `ubound` will be classified as NA. ",
"Adjusting `ubound` is recommended.")
}
# Take smallest or largest of both values
lbound <- min(c(mincut, lbound), na.rm = TRUE)
ubound <- max(c(maxcut, ubound), na.rm = TRUE)
# Individual labels
label_mincut <- paste0("< ", mincut, " ", unit)
label_maxcut <- paste0(maxcut, "+ ", unit)
label_midcut <- paste0(midcut_min_1, " - ", midcut, " ", unit)
# All labels
all_labels <- unique(c(label_mincut, label_midcut, label_maxcut))
# If `lbound` or `ubound` conflict with cuts
if(lbound == mincut){
all_labels <- all_labels[all_labels != label_mincut]
}
if(ubound == maxcut){
all_labels <- all_labels[all_labels != label_maxcut]
}
# Debugging chunk ---------------------------------------------------------
# list(
# breaks = unique(c(lbound, cuts, ubound)),
# lbound,
# ubound,
# all_labels
# )
# Return result
cut(metric,
breaks = unique(c(lbound, cuts, ubound)),
include.lowest = TRUE,
labels = all_labels)
}
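# Worked sketch of the label construction above, with cuts = c(15, 20, 25),
# lbound = 0 and ubound = 100:
#
# cuts   <- c(15, 20, 25)
# mincut <- min(cuts)                            # 15
# maxcut <- max(cuts)                            # 25
# midcut <- cuts[!cuts %in% mincut]              # c(20, 25)
# midcut_min_1 <- cuts[match(midcut, cuts) - 1]  # c(15, 20)
# # Breaks become c(0, 15, 20, 25, 100) and the labels read:
# # "< 15 hours", "15 - 20 hours", "20 - 25 hours", "25+ hours"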
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/cut_hour.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of Email Hours as a 100% stacked bar
#'
#' @description
#' Analyze Email Hours distribution.
#' Returns a stacked bar plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @inheritParams create_dist
#' @inherit create_dist return
#'
#' @family Visualization
#' @family Emails
#'
#' @examples
#' # Return plot
#' email_dist(pq_data, hrvar = "Organization")
#'
#' # Return summary table
#' email_dist(pq_data, hrvar = "Organization", return = "table")
#'
#' # Return result with custom-specified breaks
#' email_dist(pq_data, hrvar = "LevelDesignation", cut = c(1, 2, 3))
#'
#' @export
email_dist <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot",
cut = c(.5, 1, 1.5)) {
create_dist(data = data,
metric = "Email_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return,
cut = cut)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/email_dist.R
#' @title Distribution of Email Hours (Fizzy Drink plot)
#'
#' @description
#' Analyze weekly email hours distribution. Returns
#' a 'fizzy' scatter plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @inheritParams create_fizz
#' @inherit create_fizz return
#'
#' @family Visualization
#' @family Emails
#'
#' @examples
#'
#' # Return plot
#' email_fizz(pq_data, hrvar = "Organization", return = "plot")
#'
#' # Return summary table
#' email_fizz(pq_data, hrvar = "Organization", return = "table")
#'
#' @export
email_fizz <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_fizz(data = data,
metric = "Email_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/email_fizz.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Email Time Trend - Line Chart
#'
#' @description
#' Provides a week by week view of email time, visualised as line charts.
#' By default returns a line chart for email hours,
#' with a separate panel per value in the HR attribute.
#' Additional options available to return a summary table.
#'
#' @inheritParams create_line
#' @inherit create_line return
#'
#' @family Visualization
#' @family Emails
#'
#' @examples
#' # Return a line plot
#' email_line(pq_data, hrvar = "LevelDesignation")
#'
#' # Return summary table
#' email_line(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
email_line <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
## Inherit arguments
create_line(data = data,
metric = "Email_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/email_line.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Email Hours Ranking
#'
#' @description
#' This function scans a standard query output for groups with high levels of
#' 'Weekly Email Collaboration'. Returns a plot by default, with an option to
#' return a table with all groups (across multiple HR attributes) ranked by
#' hours of email collaboration.
#'
#' @details
#' Uses the metric `Email_hours`.
#' See `create_rank()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_rank
#' @inherit create_rank return
#'
#' @family Visualization
#' @family Emails
#'
#' @examples
#' # Return rank table
#' email_rank(
#' data = pq_data,
#' return = "table"
#' )
#'
#' # Return plot
#' email_rank(
#' data = pq_data,
#' return = "plot"
#' )
#'
#' @export
email_rank <- function(data,
hrvar = extract_hr(data),
mingroup = 5,
mode = "simple",
plot_mode = 1,
return = "plot"){
data %>%
create_rank(metric = "Email_hours",
hrvar = hrvar,
mingroup = mingroup,
mode = mode,
plot_mode = plot_mode,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/email_rank.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Email Summary
#'
#' @description
#' Provides an overview analysis of weekly email hours.
#' Returns a bar plot showing average weekly email hours by default.
#' Additional options available to return a summary table.
#'
#' @inheritParams create_bar
#' @inherit create_bar return
#'
#' @family Visualization
#' @family Emails
#'
#' @examples
#' # Return a ggplot bar chart
#' email_summary(pq_data, hrvar = "LevelDesignation")
#'
#' # Return a summary table
#' email_summary(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
email_summary <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_bar(data = data,
metric = "Email_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return,
bar_colour = "darkblue")
}
#' @rdname email_summary
#' @export
email_sum <- email_summary
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/email_summary.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Email Hours Time Trend
#'
#' @description Provides a week by week view of email time.
#' By default returns a week by week heatmap, highlighting the points in time with most activity.
#' Additional options available to return a summary table.
#'
#' @details
#' Uses the metric `Email_hours`.
#'
#' @inheritParams create_trend
#' @inherit create_trend return
#'
#' @family Visualization
#' @family Emails
#'
#'
#' @examples
#' # Run plot
#' email_trend(pq_data)
#'
#' # Run table
#' email_trend(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
email_trend <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_trend(data,
metric = "Email_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/email_trend.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Export 'vivainsights' outputs to CSV, clipboard, or save as images
#'
#' @description
#' A general use function to export 'vivainsights' outputs to CSV, clipboard, or save as
#' images. By default, `export()` copies a data frame to the clipboard. If the
#' input is a 'ggplot' object, the default behaviour is to export a PNG.
#'
#' @author Martin Chan <martin.chan@@microsoft.com>
#'
#' @param x Data frame or 'ggplot' object to be passed through.
#' @param method Character string specifying the method of export.
#' Valid inputs include:
#' - `"clipboard"` (default if input is data frame)
#' - `"csv"`
#' - `"png"` (default if input is 'ggplot' object)
#' - `"svg"`
#' - `"jpeg"`
#' - `"pdf"`
#' @param path If exporting a file, enter the path and the desired file name,
#' _excluding the file extension_. For example, `"Analysis/SQ Overview"`.
#' @param timestamp Logical value specifying whether to include a timestamp in
#'   the file name. Defaults to `TRUE`.
#' @param width Width of the plot in inches. Defaults to `12`.
#' @param height Height of the plot in inches. Defaults to `9`.
#'
#' @return
#' A different output is returned depending on the value passed to the `method`
#' argument:
#' - `"clipboard"`: no return - data frame is saved to clipboard.
#' - `"csv"`: CSV file containing data frame is saved to specified path.
#' - `"png"`: PNG file containing 'ggplot' object is saved to specified path.
#' - `"svg"`: SVG file containing 'ggplot' object is saved to specified path.
#' - `"jpeg"`: JPEG file containing 'ggplot' object is saved to specified path.
#' - `"pdf"`: PDF file containing 'ggplot' object is saved to specified path.
#'
#' @importFrom utils write.csv
#'
#' @family Import and Export
#'
#' @export
export <- function(x,
method = "clipboard",
path = "insights export",
timestamp = TRUE,
width = 12,
height = 9){
## Create timestamped path (if applicable)
if(timestamp == TRUE){
newpath <- paste(path, vivainsights::tstamp())
} else {
newpath <- path
}
## Force method to png if is.ggplot and method not appropriate
if(is.ggplot(x) & method %in% c("clipboard", "csv")){
message("Input is a 'ggplot' object. Defaulted to exporting as PNG...")
method <- "png"
}
## Main export function
if(method == "clipboard"){
copy_df(x)
message(c("Data frame copied to clipboard.\n",
"You may paste the contents directly to Excel."))
## Export option: CSV
} else if(method == "csv"){
newpath <- paste0(newpath, ".csv")
write.csv(x = x, file = newpath)
## Export option: any ggsave methods
} else if(method %in% c("png", "svg", "jpeg", "pdf")){
newpath <- paste0(newpath, ".", method)
ggsave(filename = newpath, plot = x, width = width, height = height)
} else {
stop("Please check inputs. Enter `?export` for more details.")
}
}
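# Hedged usage sketch (paths and file names are illustrative):
#
# p <- email_summary(pq_data)   # a 'ggplot' object
# export(p)                     # saves "insights export <timestamp>.png"
# export(p, method = "svg", path = "Analysis/email summary")
#
# tb <- email_summary(pq_data, return = "table")
# export(tb, method = "csv", path = "Analysis/email summary",
#        timestamp = FALSE)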
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/export.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of External Collaboration Hours as a 100% stacked bar
#'
#' @description
#' Analyze the distribution of External Collaboration Hours.
#' Returns a stacked bar plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @details
#' Uses the metric `External_collaboration_hours`.
#' See `create_dist()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_dist
#' @inherit create_dist return
#'
#' @family Visualization
#' @family External Collaboration
#'
#' @examples
#' # Return plot
#' external_dist(pq_data, hrvar = "Organization")
#'
#' # Return summary table
#' external_dist(pq_data, hrvar = "Organization", return = "table")
#'
#' # Return result with custom-specified breaks
#' external_dist(pq_data, hrvar = "LevelDesignation", cut = c(2, 4, 6))
#'
#' @export
external_dist <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot",
cut = c(5, 10, 15)) {
data %>%
create_dist(
metric = "External_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return,
cut = cut,
dist_colours = c("#3F7066", "#64B4A4", "#B1EDE1","#CBF3EB")
)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/external_dist.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Distribution of External Collaboration Hours (Fizzy Drink plot)
#'
#' @description
#' Analyze weekly External Collaboration hours distribution. Returns
#' a 'fizzy' scatter plot by default.
#' Additional options available to return a table with distribution elements.
#'
#' @details
#' Uses the metric `External_collaboration_hours`.
#' See `create_fizz()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_fizz
#' @inherit create_fizz return
#'
#' @family Visualization
#' @family External Collaboration
#'
#' @examples
#' # Return plot
#' external_fizz(pq_data, hrvar = "LevelDesignation", return = "plot")
#'
#' # Return summary table
#' external_fizz(pq_data, hrvar = "Organization", return = "table")
#' @export
external_fizz <- function(data,
hrvar = "Organization",
mingroup = 5,
return = "plot"){
create_fizz(data = data,
metric = "External_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/external_fizz.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title External Collaboration Hours Time Trend - Line Chart
#'
#' @description
#' Provides a week by week view of external collaboration time, visualized as
#' a line chart. By default, returns a separate panel per value in the HR
#' attribute. Additional options available to return a summary table.
#'
#' @details
#' Uses the metric `External_collaboration_hours`.
#'
#' @seealso [create_line()] for applying the same analysis to a different metric.
#'
#' @inheritParams create_line
#' @inherit create_line return
#'
#' @family Visualization
#' @family External Collaboration
#'
#' @examples
#' # Return a line plot
#' external_line(pq_data, hrvar = "LevelDesignation")
#'
#' # Return summary table
#' external_line(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
external_line <- function(data,
hrvar = "Organization",
mingroup=5,
return = "plot"){
## Inherit arguments
create_line(data = data,
metric = "External_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/external_line.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Rank groups with high External Collaboration Hours
#'
#' @description
#' This function scans a Standard Person Query for groups with high levels of
#' External Collaboration. Returns a plot by default, with an option to
#' return a table with all groups (across multiple HR attributes) ranked by
#' hours of External Collaboration.
#'
#' @details
#' Uses the metric \code{External_collaboration_hours}.
#' See `create_rank()` for applying the same analysis to a different metric.
#'
#' @inheritParams create_rank
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats reorder
#'
#' @family Visualization
#' @family External Collaboration
#'
#' @return
#' When 'plot' is passed in `return`, a rank plot is returned as a 'ggplot'
#' object. When 'table' is passed, a summary table is returned as a data frame.
#'
#' @examples
#' # Return rank table
#' external_rank(data = pq_data, return = "table")
#'
#' # Return plot
#' external_rank(data = pq_data, return = "plot")
#'
#' @export
external_rank <- function(data,
hrvar = extract_hr(data),
mingroup = 5,
mode = "simple",
plot_mode = 1,
return = "plot"){
data %>%
create_rank(metric = "External_collaboration_hours",
hrvar = hrvar,
mingroup = mingroup,
mode = mode,
plot_mode = plot_mode,
return = return)
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/external_rank.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title External Collaboration Summary
#'
#' @description
#' Provides an overview analysis of 'External Collaboration'.
#' Returns a stacked bar plot of internal and external collaboration.
#' Additional options available to return a summary table.
#'
#' @inheritParams create_stacked
#' @inherit create_stacked return
#'
#' @family Visualization
#' @family External Collaboration
#'
#' @examples
#' # Return a plot
#' external_sum(pq_data, hrvar = "LevelDesignation")
#'
#' # Return summary table
#' external_sum(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
external_sum <- function(data,
hrvar = "Organization",
mingroup = 5,
stack_colours = c("#1d327e", "#1d7e6a"),
return = "plot"){
  # Calculate Internal / External Collaboration time
  plot_data <-
    data %>%
    mutate(Internal_hours = Collaboration_hours - External_collaboration_hours,
           External_hours = External_collaboration_hours)
  # Plot Internal / External Collaboration time by HR attribute
  plot_data %>%
    create_stacked(hrvar = hrvar,
                   metrics = c("Internal_hours", "External_hours"),
                   plot_title = "Internal and External Collaboration Hours",
                   stack_colours = stack_colours,
                   mingroup = mingroup,
                   return = return)
}
#' @rdname external_sum
#' @export
external_summary <- external_sum
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/external_sum.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Extract HR attribute variables
#'
#' @description
#' This function uses a combination of variable class,
#' number of unique values, and regular expression matching
#' to extract HR / organisational attributes from a data frame.
#'
#' @param data A data frame to be passed through.
#' @param max_unique A numeric value representing the maximum
#' number of unique values to accept for an HR attribute. Defaults to 50.
#' @param exclude_constants Logical value to specify whether single-value HR
#' attributes are to be excluded. Defaults to `TRUE`.
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"names"`
#' - `"vars"`
#'
#' See `Value` for more information.
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"names"`: character vector identifying all the names of HR variables
#' present in the data.
#' - `"vars"`: data frame containing all the columns of HR variables present
#' in the data.
#'
#' @family Support
#' @family Data Validation
#'
#' @examples
#' pq_data %>% extract_hr(return = "names")
#'
#' pq_data %>% extract_hr(return = "vars")
#'
#' @export
extract_hr <- function(data,
max_unique = 50,
exclude_constants = TRUE,
return = "names"){
if(exclude_constants == TRUE){
min_unique = 1
} else if (exclude_constants == FALSE){
min_unique = 0
}
hr_var <-
data %>%
dplyr::select_if(~(is.character(.) | is.logical(.) | is.factor(.))) %>%
dplyr::select_if(~(dplyr::n_distinct(.) < max_unique)) %>%
dplyr::select_if(~(dplyr::n_distinct(.) > min_unique)) %>% # Exc constants
dplyr::select_if(~!all(is_date_format(.))) %>%
names() %>%
.[.!= "WorkingStartTimeSetInOutlook"] %>%
.[.!= "WorkingEndTimeSetInOutlook"] %>%
.[.!= "WorkingDaysSetInOutlook"]
if(return == "names"){
return(hr_var)
} else if(return == "vars"){
return(dplyr::select(data, tidyselect::all_of(hr_var)))
} else {
stop("Invalid input for return")
}
}
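# A minimal sketch on toy data (column names are hypothetical): character
# columns with a modest number of unique values are picked up as HR
# attributes, while high-cardinality and constant columns are dropped:
#
# toy <- data.frame(
#   PersonId     = as.character(1:100),            # 100 unique -> dropped
#   Organization = rep(c("Sales", "Finance"), 50), # 2 unique   -> kept
#   Region       = rep("EMEA", 100),               # constant   -> dropped
#   stringsAsFactors = FALSE
# )
# extract_hr(toy, max_unique = 50)
# # Expected: "Organization"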
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/extract_hr.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Flag unusual high collaboration hours to after-hours collaboration
#' hours ratio
#'
#' @description This function flags persons who have an unusual ratio
#' of collaboration hours to after-hours collaboration hours.
#' Returns a character string by default.
#'
#' @template ch
#'
#' @import dplyr
#'
#' @param data A data frame containing a Person Query.
#' @param threshold Numeric value specifying the threshold for flagging.
#' Defaults to 30.
#' @param return String to specify what to return. Options include:
#' - `"message"`
#' - `"text"`
#' - `"data"`
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"message"`: message in the console containing diagnostic summary
#' - `"text"`: string containing diagnostic summary
#' - `"data"`: data frame. Person-level data with flags on unusually high or
#' low ratios
#'
#' @family Data Validation
#'
#' @examples
#' flag_ch_ratio(pq_data)
#'
#'
#' data.frame(PersonId = c("Alice", "Bob"),
#' Collaboration_hours = c(30, 0.5),
#' After_hours_collaboration_hours = c(0.5, 30)) %>%
#' flag_ch_ratio()
#'
#' @export
flag_ch_ratio <- function(data, threshold = c(1, 30), return = "message"){
min_thres <- min(threshold, na.rm = TRUE)
max_thres <- max(threshold, na.rm = TRUE)
## Check for high collab hours but lower afterhour collab hours
## Because of faulty outlook settings
ch_summary <-
data %>%
group_by(PersonId) %>%
summarise_at(vars(Collaboration_hours, After_hours_collaboration_hours), ~mean(.)) %>%
mutate(CH_ratio = Collaboration_hours / After_hours_collaboration_hours) %>%
arrange(desc(CH_ratio)) %>%
mutate(CH_FlagLow = ifelse(CH_ratio < min_thres, TRUE, FALSE),
CH_FlagHigh = ifelse(CH_ratio > max_thres, TRUE, FALSE),
CH_Flag = ifelse(CH_ratio > max_thres | CH_ratio < min_thres, TRUE, FALSE))
## Percent of people with high collab hours + low afterhour collab hours
CHFlagN <- sum(ch_summary$CH_Flag, na.rm = TRUE)
CHFlagProp <- mean(ch_summary$CH_Flag, na.rm = TRUE)
CHFlagProp2 <- paste(round(CHFlagProp * 100), "%") # Formatted
  CHFlagMessage_Warning <- paste0("[Warning] The ratio of total collaboration hours to after-hours collaboration hours is outside the expected threshold for ", CHFlagN, " employees (", CHFlagProp2, " of the total).")
  CHFlagMessage_Pass_Low <- paste0("[Pass] The ratio of total collaboration hours to after-hours collaboration hours is outside the expected threshold for only ", CHFlagN, " employees (", CHFlagProp2, " of the total).")
  CHFlagMessage_Pass_Zero <- paste0("[Pass] The ratio of total collaboration hours to after-hours collaboration hours falls within the expected threshold for all employees.")
  CHFlagLowN <- sum(ch_summary$CH_FlagLow, na.rm = TRUE)
  CHFlagLowProp <- mean(ch_summary$CH_FlagLow, na.rm = TRUE)
  CHFlagLowProp2 <- paste(round(CHFlagLowProp * 100), "%") # Formatted
  # A low ratio means after-hours collaboration exceeds total collaboration
  CHFlagLowMessage <- paste0("- ", CHFlagLowN, " employees (", CHFlagLowProp2,
                             ") have unusually high after-hours collaboration relative to weekly collaboration hours")
  CHFlagHighN <- sum(ch_summary$CH_FlagHigh, na.rm = TRUE)
  CHFlagHighProp <- mean(ch_summary$CH_FlagHigh, na.rm = TRUE)
  CHFlagHighProp2 <- paste(round(CHFlagHighProp * 100), "%") # Formatted
  # A high ratio means after-hours collaboration is a small fraction of total collaboration
  CHFlagHighMessage <- paste0("- ", CHFlagHighN, " employees (", CHFlagHighProp2, ") have unusually low after-hours collaboration relative to weekly collaboration hours")
if(CHFlagProp >= .05){
CHFlagMessage <- paste(CHFlagMessage_Warning, CHFlagHighMessage, CHFlagLowMessage, sep = "\n")
  } else if(CHFlagProp < .05 & CHFlagProp > 0){
CHFlagMessage <- paste(CHFlagMessage_Pass_Low, CHFlagHighMessage, CHFlagLowMessage, sep = "\n")
} else if(CHFlagProp==0){
CHFlagMessage <- CHFlagMessage_Pass_Zero
}
## Print diagnosis
## Should implement options to return the PersonIds or a full data frame
if(return == "message"){
message(CHFlagMessage)
} else if(return == "text"){
CHFlagMessage
} else if(return == "data") {
ch_summary
} else {
stop("Invalid input for `return`.")
}
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/flag_ch_ratio.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Flag Persons with unusually high Email Hours to Emails Sent ratio
#'
#' @description This function flags persons who have an unusual ratio
#' of email hours to emails sent. If the ratio between Email Hours and
#' Emails Sent is greater than the threshold, then observations tied to
#' a `PersonId` is flagged as unusual.
#'
#' @import dplyr
#'
#' @family Data Validation
#'
#' @param data A data frame containing a Person Query.
#' @param threshold Numeric value specifying the threshold for flagging.
#' Defaults to 1.
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"text"`
#' - `"data"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"text"`: string. A diagnostic message.
#' - `"data"`: data frame. Person-level data with those flagged with unusual
#' ratios.
#'
#' @examples
#' flag_em_ratio(pq_data)
#'
#' @export
flag_em_ratio <- function(data, threshold = 1, return = "text"){
## Check for high collab hours but lower afterhour collab hours
## Because of faulty outlook settings
em_summary <-
data %>%
group_by(PersonId) %>%
summarise_at(vars(Email_hours, Emails_sent), ~mean(.)) %>%
mutate(Email_ratio = Email_hours / Emails_sent) %>%
arrange(desc(Email_ratio)) %>%
mutate(Email_Flag = ifelse(Email_ratio > threshold, TRUE, FALSE))
## Percent of people with high collab hours + low afterhour collab hours
EmailFlagN <- sum(em_summary$Email_Flag, na.rm = TRUE)
EmailFlagProp <- mean(em_summary$Email_Flag, na.rm = TRUE)
EmailFlagProp2 <- paste(round(EmailFlagProp * 100), "%") # Formatted
EmailFlagMessage <- paste0(EmailFlagProp2, " (", EmailFlagN, ") ",
"of the population have an unusually high email hours to emails sent ratio.")
if(return == "text"){
EmailFlagMessage
} else if(return == "data"){
em_summary
} else {
stop("Invalid input to `return`.")
}
}
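# A minimal sketch on toy data: person "A" averages 10 email hours over 5
# emails sent (ratio 2), exceeding the default threshold of 1, while "B"
# (ratio 0.2) does not:
#
# toy <- data.frame(PersonId = c("A", "B"),
#                   Email_hours = c(10, 2),
#                   Emails_sent = c(5, 10))
# flag_em_ratio(toy)
# # "50 % (1) of the population have an unusually high email hours to
# #  emails sent ratio."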
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/flag_em_ratio.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Warn for extreme values by checking against a threshold
#'
#' @description
#' This is used as part of data validation to check if there are extreme values
#' in the dataset.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param metric A character string specifying the metric to test.
#' @param person A logical value to specify whether to calculate
#' person-averages. Defaults to `TRUE` (person-averages calculated).
#' @param threshold Numeric value specifying the threshold for flagging.
#' @param mode String determining mode to use for identifying extreme values.
#' - `"above"`: checks whether value is great than the threshold (default)
#' - `"equal"`: checks whether value is equal to the threshold
#' - `"below"`: checks whether value is below the threshold
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"text"`
#' - `"message"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"text"`: string. A diagnostic message.
#' - `"message"`: message on console. A diagnostic message.
#' - `"table"`: data frame. A person-level table with `PersonId` and the
#' extreme values of the selected metric.
#'
#' @family Data Validation
#'
#' @import dplyr
#'
#' @examples
#' # The threshold values are intentionally set low to trigger messages.
#' flag_extreme(pq_data, "Email_hours", threshold = 15)
#'
#' # Return a summary table
#' flag_extreme(pq_data, "Email_hours", threshold = 15, return = "table")
#'
#' # Person-week level
#' flag_extreme(pq_data, "Email_hours", person = FALSE, threshold = 15)
#'
#' # Check for values equal to threshold
#' flag_extreme(pq_data, "Email_hours", person = TRUE, mode = "equal", threshold = 0)
#'
#' # Check for values below threshold
#' flag_extreme(pq_data, "Email_hours", person = TRUE, mode = "below", threshold = 5)
#'
#' @export
flag_extreme <- function(data,
metric,
person = TRUE,
threshold,
mode = "above",
return = "message"){
## Define relational term/string and input checks
if(mode == "above"){
rel_str <- " exceeds "
} else if(mode == "equal"){
rel_str <- " are equal to "
} else if(mode == "below"){
rel_str <- " are less than "
} else {
stop("invalid input to `mode`")
}
## Data frame containing the extreme values
if(person == TRUE){
extreme_df <-
data %>%
rename(metric = !!sym(metric)) %>%
group_by(PersonId) %>%
summarise_at(vars(metric), ~mean(.)) %>%
# Begin mode chunk
{
if(mode == "above"){
filter(., metric > threshold)
} else if(mode == "equal"){
filter(., metric == threshold)
} else if(mode == "below"){
filter(., metric < threshold)
}
} %>%
rename(!!sym(metric) := "metric")
} else if(person == FALSE){
extreme_df <-
data %>%
rename(metric = !!sym(metric)) %>%
# Begin mode chunk
{
if(mode == "above"){
filter(., metric > threshold)
} else if(mode == "equal"){
filter(., metric == threshold)
} else if(mode == "below"){
filter(., metric < threshold)
}
} %>%
rename(!!sym(metric) := "metric")
}
## Clean names for pretty printing
metric_nm <- metric %>% us_to_space() %>% camel_clean()
## Define MessageLevel
if(person == TRUE){
MessageLevel <- " persons where their average "
} else if(person == FALSE){
MessageLevel <- " rows where their value of "
}
## Define FlagMessage
if(nrow(extreme_df) == 0){
FlagMessage <-
paste0("[Pass] There are no",
MessageLevel,
metric_nm,
rel_str,
threshold, ".")
} else {
FlagMessage <-
paste0("[Warning] There are ",
nrow(extreme_df),
MessageLevel,
metric_nm,
rel_str,
threshold, ".")
}
if(return == "text"){
FlagMessage
} else if(return == "message"){
message(FlagMessage)
} else if(return == "table"){
extreme_df
}
}
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/flag_extreme.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Flag unusual outlook time settings for work day start and end time
#'
#' @description This function flags unusual outlook calendar settings for
#' start and end time of work day.
#'
#' @import dplyr
#'
#' @param data A data frame containing a Person Query.
#' @param threshold A numeric vector of length two, specifying the hour
#'   threshold for flagging. Defaults to `c(4, 15)`.
#' @param return String specifying what to return. This must be one of the
#'   following strings:
#'   - `"message"` (default)
#'   - `"text"`
#'   - `"data"`
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"text"`: string. A diagnostic message.
#' - `"message"`: message on console. A diagnostic message.
#' - `"data"`: data frame. Data where flag is present.
#'
#' See `Value` for more information.
#'
#' @family Data Validation
#'
#' @examples
#' # Demo with `pq_data` example where Outlook Start and End times are imputed
#' spq_df <- pq_data
#'
#' spq_df$WorkingStartTimeSetInOutlook <- "6:30"
#'
#' spq_df$WorkingEndTimeSetInOutlook <- "23:30"
#'
#' # Return a message
#' flag_outlooktime(spq_df, threshold = c(5, 13))
#'
#' # Return data
#' flag_outlooktime(spq_df, threshold = c(5, 13), return = "data")
#'
#' @export
flag_outlooktime <- function(data, threshold = c(4, 15), return = "message"){
# pad_times <- function(x){
# if(nchar(x) == 1){
# x <- paste0("0", x, "00")
# } else if(nchar(x) == 2){
# x <- paste0(x, "00")
# } else if(nchar(x) == 3){
# x <- paste0("0", x)
# } else {
# x
# }
# }
#
# pad_times <- Vectorize(pad_times)
## Clean `WorkingStartTimeSetInOutlook`
if(any(grepl(pattern = "\\d{1}:\\d{1,2}", x = data$WorkingStartTimeSetInOutlook))){
# Pad two zeros and keep last five characters
data$WorkingStartTimeSetInOutlook <-
paste0("00", data$WorkingStartTimeSetInOutlook) %>%
substr(start = nchar(.) - 4, stop = nchar(.))
}
## Clean `WorkingEndTimeSetInOutlook`
if(any(grepl(pattern = "\\d{1}:\\d{1,2}", x = data$WorkingEndTimeSetInOutlook))){
# Pad two zeros and keep last five characters
data$WorkingEndTimeSetInOutlook <-
paste0("00", data$WorkingEndTimeSetInOutlook) %>%
substr(start = nchar(.) - 4, stop = nchar(.))
}
if(
any(
!grepl(pattern = "\\d{1,2}:\\d{1,2}", x = data$WorkingStartTimeSetInOutlook) |
!grepl(pattern = "\\d{1,2}:\\d{1,2}", x = data$WorkingEndTimeSetInOutlook)
)
){
stop("Please check data format for `WorkingStartTimeSetInOutlook` or `WorkingEndTimeSetInOutlook.\n
These variables must be character vectors, and have the format `%H:%M`, such as `07:30` or `23:00`.")
}
clean_times <- function(x){
out <- gsub(pattern = ":", replacement = "", x = x)
# out <- pad_times(out)
strptime(out, format = "%H%M")
}
flagged_data <-
data %>%
# mutate_at(vars(WorkingStartTimeSetInOutlook, WorkingEndTimeSetInOutlook), ~clean_times(.)) %>%
mutate_at(vars(WorkingStartTimeSetInOutlook, WorkingEndTimeSetInOutlook), ~gsub(pattern = ":", replacement = "", x = .)) %>%
mutate_at(vars(WorkingStartTimeSetInOutlook, WorkingEndTimeSetInOutlook), ~strptime(., format = "%H%M")) %>%
mutate(WorkdayRange = as.numeric(WorkingEndTimeSetInOutlook - WorkingStartTimeSetInOutlook, units = "hours"),
WorkdayFlag1 = WorkdayRange < threshold[[1]],
WorkdayFlag2 = WorkdayRange > threshold[[2]],
WorkdayFlag = WorkdayRange < threshold[[1]] | WorkdayRange > threshold[[2]]) %>%
select(PersonId, WorkdayRange, WorkdayFlag, WorkdayFlag1, WorkdayFlag2)
## Short working hour settings
FlagN1 <- sum(flagged_data$WorkdayFlag1, na.rm = TRUE)
FlagProp1 <- mean(flagged_data$WorkdayFlag1, na.rm = TRUE)
FlagProp1F <- paste0(round(FlagProp1 * 100, 1), "%") # Formatted
## Long working hour settings
FlagN2 <- sum(flagged_data$WorkdayFlag2, na.rm = TRUE)
FlagProp2 <- mean(flagged_data$WorkdayFlag2, na.rm = TRUE)
FlagProp2F <- paste0(round(FlagProp2 * 100, 1), "%") # Formatted
  ## Short or long working hour settings
FlagN <- sum(flagged_data$WorkdayFlag, na.rm = TRUE)
FlagProp <- mean(flagged_data$WorkdayFlag, na.rm = TRUE)
FlagPropF <- paste0(round(FlagProp * 100, 1), "%") # Formatted
## Flag Messages
Warning_Message <- paste0("[Warning] ", FlagPropF, " (", FlagN, ") ", "of the person-date rows in the data have extreme Outlook settings.")
Pass_Message1 <- paste0("[Pass] Only ", FlagPropF, " (", FlagN, ") ", "of the person-date rows in the data have extreme Outlook settings.")
  Pass_Message2 <- paste0("There are no extreme Outlook settings in this dataset (working hours shorter than ", threshold[[1]], " hours, or longer than ", threshold[[2]], " hours).")
  Detail_Message <- paste0(FlagProp1F, " (", FlagN1, ") have an Outlook workday shorter than ", threshold[[1]], " hours, while ",
                           FlagProp2F, " (", FlagN2, ") have a workday longer than ", threshold[[2]], " hours.")
if(FlagProp >= .05){
FlagMessage <- paste(Warning_Message, Detail_Message, sep = "\n")
} else if(FlagProp < .05 & FlagProp > 0){
FlagMessage <- paste(Pass_Message1, Detail_Message, sep = "\n")
} else if(FlagProp==0){
FlagMessage <- Pass_Message2
}
## Print diagnosis
## Should implement options to return the PersonIds or a full data frame
if(return == "text"){
FlagMessage
} else if(return == "message"){
message(FlagMessage)
} else if(return == "data"){
flagged_data[flagged_data$WorkdayFlag == TRUE,]
} else {
stop("Error: please check inputs for `return`")
}
}
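# A minimal sketch of the zero-padding step above: prefixing "00" and
# keeping the last five characters normalises "6:30" to "06:30" while
# leaving an already-padded "23:30" unchanged:
#
# x <- c("6:30", "23:30")
# padded <- paste0("00", x)                        # "006:30"  "0023:30"
# substr(padded, nchar(padded) - 4, nchar(padded)) # "06:30"   "23:30"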
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/flag_outlooktime.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Sample Group-to-Group dataset
#'
#' @description
#' A demo dataset representing a Group-to-Group Query. The grouping
#' organizational attribute used here is `Organization`, where the variables
#' have been prefixed with `PrimaryCollaborator_` and `SecondaryCollaborator_`
#' to represent the direction of collaboration.
#'
#' @family Data
#' @family Network
#'
#' @return data frame.
#'
#' @format A data frame with 150 rows and 11 variables:
#' \describe{
#' \item{PrimaryCollaborator_Organization}{ }
#' \item{PrimaryCollaborator_GroupSize}{ }
#' \item{SecondaryCollaborator_Organization}{ }
#' \item{SecondaryCollaborator_GroupSize}{ }
#' \item{MetricDate}{ }
#' \item{Percent_Group_collaboration_time_invested}{ }
#' \item{Group_collaboration_time_invested}{ }
#' \item{Group_email_sent_count}{ }
#' \item{Group_email_time_invested}{ }
#' \item{Group_meeting_count}{ }
#' \item{Group_meeting_time_invested}{ }
#' ...
#' }
#' @source \url{https://analysis.insights.viva.office.com/analyst/analysis/}
"g2g_data"
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/g2g_data.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Generate HTML report with list inputs
#'
#' @description
#' This is a support function using a list-pmap workflow to
#' create a HTML document, using RMarkdown as the engine.
#'
#' @author Martin Chan <martin.chan@@microsoft.com>
#'
#' @param title Character string to specify the title of the chunk.
#' @param filename File name to be used in the exported HTML.
#' @param outputs A list of outputs to be added to the HTML report.
#' Note that `outputs`, `titles`, `echos`, and `levels` must have the same
#' length
#' @param titles A list/vector of character strings to specify the title of the
#' chunks.
#' @param subheaders A list/vector of character strings to specify the
#' subheaders for each chunk.
#' @param echos A list/vector of logical values to specify whether to display
#' code.
#' @param levels A list/vector of numeric value to specify the header level of
#' the chunk.
#' @param theme Character vector to specify theme to be used for the report.
#' E.g. `"united"`, `"default"`.
#' @param preamble A preamble to appear at the beginning of the report, passed
#' as a text string.
#'
#' @importFrom purrr pmap
#' @importFrom purrr reduce
#'
#' @family Reports
#'
#' @section Creating a custom report:
#'
#' Below is an example on how to set up a custom report.
#'
#' The first step is to define the content that will go into a report and assign
#' the outputs to a list.
#'
#' ```
#' # Step 1: Define Content
#' output_list <-
#' list(pq_data %>% workloads_summary(return = "plot"),
#' pq_data %>% workloads_summary(return = "table")) %>%
#' purrr::map_if(is.data.frame, create_dt)
#' ```
#'
#' The next step is to add a list of titles for each of the objects on the list:
#'
#' ```
#' # Step 2: Add Corresponding Titles
#' title_list <- c("Workloads Summary - Plot", "Workloads Summary - Table")
#' n_title <- length(title_list)
#' ```
#' The final step is to run `generate_report()`. This can all be wrapped within
#' a function such that the function can be used to generate a HTML report.
#' ```
#' # Step 3: Generate Report
#' generate_report(title = "My First Report",
#'                 filename = "My First Report",
#'                 outputs = output_list,
#'                 titles = title_list,
#'                 subheaders = rep("", n_title),
#'                 echos = rep(FALSE, n_title),
#'                 levels = rep(3, n_title))
#' ```
#' @return
#' An HTML report with the same file name as specified in the arguments is
#' generated in the working directory. No outputs are directly returned by the
#' function.
#'
#' @export
generate_report <- function(title = "My minimal HTML generator",
filename = "minimal_html",
outputs = output_list,
titles,
subheaders,
echos,
levels,
theme = "united",
preamble = ""){
## Title of document
title_chr <- paste0('title: \"', title, '\"')
## chunk loopage
## merged to create `chunk_merged`
chunk_merged <-
list(output = outputs,
title = titles,
subheader = subheaders,
echo = echos,
level = levels,
id = seq(1, length(outputs))) %>%
purrr::pmap(function(output, title, subheader, echo, level, id){
generate_chunks(level = level,
title = title,
subheader = subheader,
echo = echo,
object = paste0("outputs[[", id, "]]"))
}) %>%
purrr::reduce(c)
# wpa_logo <- system.file("logos/logo.PNG", package = "wpa")
## markdown object
markobj <- c('---',
               title_chr,
'output: ',
' html_document:',
paste0(' theme: ', theme),
# ' theme: united',
' toc: true',
' toc_float:',
' collapsed: false',
' smooth_scroll: true',
'---',
# paste0(''),
'',
preamble,
'',
chunk_merged)
writeLines(markobj, paste0(filename, ".Rmd"))
rmarkdown::render(paste0(filename, ".Rmd"))
## Load in browser
utils::browseURL(paste0(filename, ".html"))
## Deletes specified files
unlink(c(paste0(filename, ".Rmd"),
paste0(filename, ".md")))
}
#' @title Generate chunk strings
#'
#' @description This is used as a supporting function for `generate_report()`
#' and not directly used. `generate_report()` works by creating a
#' loop structure around `generate_chunks()`, and binding the chunks together
#' to create a report.
#'
#' @details
#' `generate_chunks()` is primarily a wrapper around paste() functions,
#' to create a structured character vector that will form the individual
#' chunks. No plots 'exist' within the environment of `generate_chunks()`.
#'
#' @param level Numeric value to specify the header level of the chunk.
#' @param title Character string to specify the title of the chunk.
#' @param subheader Character string to specify the subheader of the chunk.
#' @param echo Logical value to specify whether to display code.
#' @param object Character string to specify name of the object to show.
#'
#' @noRd
generate_chunks <- function(level = 3,
title,
subheader = "",
echo = FALSE,
object){
level_hash <- paste(rep('#', level), collapse = "")
obj <- c(paste(level_hash, title),
subheader,
paste0('```{r, echo=',
echo,
', fig.height=9, fig.width=12}'),
object,
'```',
' ')
return(obj)
}
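# Illustrative sketch of the character vector produced for a single chunk
# (level 3, default arguments otherwise):
#
# generate_chunks(level = 3,
#                 title = "Workloads Summary - Plot",
#                 object = "outputs[[1]]")
# # [1] "### Workloads Summary - Plot"
# # [2] ""
# # [3] "```{r, echo=FALSE, fig.height=9, fig.width=12}"
# # [4] "outputs[[1]]"
# # [5] "```"
# # [6] " "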
#' @title Read preamble
#'
#' @description
#' Read in a preamble to be used within each individual reporting function.
#' Reads from the Markdown file installed with the package.
#'
#' @param path Text string containing the path for the appropriate Markdown file.
#'
#' @return
#' String containing the text read in from the specified Markdown file.
#'
#' @family Support
#' @family Reports
#'
#' @export
read_preamble <- function(path){
full_path <- paste0("/preamble/", path)
complete_path <- paste0(path.package("vivainsights"), full_path)
text <- suppressWarnings(readLines(complete_path))
return(text)
}
#' Display HTML fragment in RMarkdown chunk, from Markdown text
#'
#' @description
#' This is a wrapper around `markdown::markdownToHTML()`, where
#' the default behaviour is to produce a HTML fragment.
#' `htmltools::HTML()` is then used to evaluate the HTML code
#' within a RMarkdown chunk.
#'
#' @importFrom htmltools HTML
#' @importFrom markdown markdownToHTML
#'
#' @param text Character vector containing Markdown text
#'
#' @family Support
#'
#' @noRd
#'
md2html <- function(text){
html_chunk <- markdown::markdownToHTML(text = text,
fragment.only = TRUE)
htmltools::HTML(html_chunk)
}
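# Usage sketch (illustrative): convert Markdown text into an HTML fragment
# that renders when evaluated inside an RMarkdown chunk:
#
#   md2html(c("**Key insight**: collaboration hours peaked mid-quarter."))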
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/generate_report.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Generate HTML report based on existing RMarkdown documents
#'
#' @description
#' This is a support function that accepts parameters and creates a HTML
#' document based on an RMarkdown template. This is an alternative to
#' `generate_report()` which instead creates an RMarkdown document from scratch
#' using individual code chunks.
#'
#' @note
#' The implementation of this function was inspired by the 'DataExplorer'
#' package by boxuancui, with credits due to the original author.
#'
#' @param output_format output format in `rmarkdown::render()`. Default is
#' `rmarkdown::html_document(toc = TRUE, toc_depth = 6, theme = "cosmo")`.
#' @param output_file output file name in `rmarkdown::render()`. Default is
#' `"report.html"`.
#' @param output_dir output directory for report in `rmarkdown::render()`.
#' Default is user's current directory.
#' @param report_title report title. Default is `"Report"`.
#' @param rmd_dir String specifying the path to the RMarkdown template file.
#'   Defaults to the `minimal.rmd` template installed with the package.
#' @param \dots other arguments to be passed to `params`. For instance, pass
#' `hrvar` if the RMarkdown document requires a 'hrvar' parameter.
#' @export
generate_report2 <- function(output_format = rmarkdown::html_document(toc = TRUE, toc_depth = 6, theme = "cosmo"),
output_file = "report.html",
output_dir = getwd(),
report_title = "Report",
rmd_dir = system.file("rmd_template/minimal.rmd", package = "vivainsights"),
...) {
## Render report into html
suppressWarnings(
rmarkdown::render(
input = rmd_dir,
output_format = output_format,
output_file = output_file,
output_dir = output_dir,
intermediates_dir = output_dir,
params = list(set_title = report_title, ...)
))
## Open report
report_path <- path.expand(file.path(output_dir, output_file))
utils::browseURL(report_path)
}
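# Illustrative call (assumption: the template accepts an `hrvar` parameter,
# passed through `...` into `params`):
#
#   generate_report2(
#     report_title = "Collaboration Report",
#     rmd_dir = system.file("rmd_template/minimal.rmd", package = "vivainsights"),
#     hrvar = "Organization"
#   )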
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/generate_report2.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
###################################################################
## Global Variables
## This file is added to minimize the false positives flagged during R CMD check.
## Example: afterhours_trend: no visible binding for global variable 'Date'
###################################################################
utils::globalVariables(
c(
"PersonId",
".",
"group",
"Employee_Count",
"PANEL",
"y",
"x",
"bucket_hours",
"Employees",
"heat_colours",
"xmin",
"xmax",
"Date",
"MetricDate", # New `Date`
"top_group",
"OrgGroup",
"var1",
"var2",
"Total",
"Metric",
"Value",
"Start",
"End",
"Group",
"value",
"where", # in `jitter_metrics()` for tidyselect
"Hours",
"Collaboration_hours",
"External_collaboration_hours",
"attribute",
"values",
"calculation",
"variable",
"value_rescaled",
"Meeting_hours_with_manager_1_1",
"Meetings_with_manager_1_on_1",
"Cadence_of_1_on_1_meetings_with_manager",
"Organization",
"flag_nkw",
"holidayweek",
"mean_collab",
"perc",
"perc_nkw",
"total",
"ymax",
"ymin",
"z_score",
".N",
"After_hours_collaboration_hours",
"Attributes",
"CH_ratio",
"Collaboration_hrs",
"HRAttribute",
"Unique values",
"WorkingEndTimeSetInOutlook",
"name",
"Email_hours",
"Instant_message_hours",
"values",
"WorkingStartTimeSetInOutlook",
"output_list",
"Email_ratio",
"PersonCount",
"WorkdayFlag",
"WorkdayRange",
"identifier",
"pre_group",
"Emails_sent",
"Shifts",
"WorkdayFlag1",
"subject_validate",
"FirstValue",
"TenureYear",
"WorkdayFlag2",
"tenure_years",
"N",
"V1",
"V2",
"Subject",
"line",
"text",
"word",
"freq",
"PrimaryCollaborator_PersonId",
"PrimaryOrg",
"SecondaryCollaborator_PersonId",
"SecondaryOrg",
"betweenness",
"closeness",
"cluster",
"colour",
"degree",
"eigenvector",
"from",
"metric_prop",
"node_size",
"org_size",
"pagerank",
"to",
"Variable",
"pval",
"WOE",
"ODDS",
"labelpos"
)
)
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/globals.R
#' @title
#' Generate a vector of `n` contiguous colours, as a red-yellow-green palette.
#'
#' @description
#' Takes a numeric value `n` and returns a character vector of colour HEX codes
#' corresponding to the heat map palette.
#'
#' @param n the number of colors (>= 1) to be in the palette.
#' @param alpha an alpha-transparency level in the range of 0 to 1
#' (0 means transparent and 1 means opaque)
#' @param rev logical indicating whether the ordering of the colors should be
#' reversed.
#'
#' @examples
#' barplot(rep(10, 50), col = heat_colours(n = 50), border = NA)
#'
#' barplot(rep(10, 50), col = heat_colours(n = 50, alpha = 0.5, rev = TRUE),
#' border = NA)
#'
#' @family Support
#'
#' @return
#' A character vector containing the HEX codes and the same length as `n` is
#' returned.
#'
#' @export
heat_colours <- function (n, alpha, rev = FALSE) {
  ## Hue: move from red (0) to green (0.3)
  h <- seq(from = 0, to = 0.3, length.out = n)
  ## Saturation: keep colours slightly muted
  s <- rep(0.69, length(h))
  ## Value: increasingly low (darker) towards the end of the palette
  v <- seq(from = 1, to = 0.8, length.out = n)
  cols <- grDevices::hsv(h = h, s = s, v = v, alpha = alpha)
  if (rev) rev(cols) else cols
}
#' @rdname heat_colours
#' @export
heat_colors <- heat_colours
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/heat_colours.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Employee count over time
#'
#' @description Returns a line chart showing the change in
#' employee count over time. Part of a data validation process to check
#' for unusual license growth / declines over time.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: ggplot object. A line plot showing employee count over time.
#' - `"table"`: data frame containing a summary table.
#'
#' @import dplyr
#' @import ggplot2
#'
#' @examples
#' # Return plot
#' hr_trend(pq_data)
#'
#' # Return summary table
#' hr_trend(pq_data, return = "table")
#'
#' @family Visualization
#' @family Data Validation
#'
#' @export
hr_trend <- function(data, return = "plot"){
data$Date <- as.Date(data$MetricDate, format = "%m/%d/%Y")
## Date range data frame
myPeriod <- extract_date_range(data)
plot_data <-
data %>%
group_by(Date) %>%
summarise(n = n_distinct(PersonId), .groups = "drop_last") %>%
ungroup()
if(return == "plot"){
plot_data %>%
ggplot(aes(x = Date, y = n)) +
geom_line(size = 1) +
labs(title = "Population over time",
subtitle = "Unique licensed population by week",
caption = paste("Data from week of", myPeriod$Start, "to week of", myPeriod$End)) +
ylab("Employee count") +
xlab("Date") +
scale_y_continuous(labels = round, limits = c(0,NA)) +
theme_wpa_basic()
} else if(return == "table"){
plot_data
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/hr_trend.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create a count of distinct people in a specified HR variable
#'
#' @description
#' This function enables you to create a count of the distinct people
#' by the specified HR attribute. The default behaviour is to return a
#' bar chart as typically seen in 'Analysis Scope'.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param hrvar HR Variable by which to split metrics, defaults to
#' "Organization" but accepts any character vector, e.g. "LevelDesignation".
#' If a vector with more than one value is provided, the HR attributes are
#' automatically concatenated.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: 'ggplot' object containing a bar plot.
#' - `"table"`: data frame containing a count table.
#'
#' @import ggplot2
#' @import dplyr
#' @importFrom data.table ":=" "%like%" "%between%"
#'
#' @family Visualization
#' @family Data Validation
#'
#' @examples
#' # Return a bar plot
#' hrvar_count(pq_data, hrvar = "LevelDesignation")
#'
#' # Return a summary table
#' hrvar_count(pq_data, hrvar = "LevelDesignation", return = "table")
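#'
#' # Provide multiple HR attributes; these are concatenated into one grouping
#' hrvar_count(pq_data, hrvar = c("Organization", "LevelDesignation"))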
#'
#'@export
hrvar_count <- function(data,
hrvar = "Organization",
return = "plot"){
## Allow multiple HRvar inputs
if(length(hrvar) > 1){
hrvar_flat <- paste(hrvar, collapse = ", ")
summary_table <-
data %>%
select(PersonId, all_of(hrvar)) %>%
      mutate(!!sym(hrvar_flat) := select(., all_of(hrvar)) %>%
apply(1, paste, collapse = ", ")) %>%
group_by(!!sym(hrvar_flat)) %>%
summarise(n = n_distinct(PersonId)) %>%
arrange(desc(n))
# Single reference for single and multiple org attributes
hrvar_label <- hrvar_flat
} else {
summary_table <-
data %>%
select(PersonId, all_of(hrvar)) %>%
group_by(!!sym(hrvar)) %>%
summarise(n = n_distinct(PersonId)) %>%
arrange(desc(n))
# Single reference for single and multiple org attributes
hrvar_label <- hrvar
}
if(return == "table"){
data %>%
data.table::as.data.table() %>%
.[, .(n = n_distinct(PersonId)), by = hrvar] %>%
as_tibble() %>%
arrange(desc(n))
} else if(return == "plot"){
## This is re-run to enable multi-attribute grouping without concatenation
summary_table %>%
ggplot(aes(x = stats::reorder(!!sym(hrvar_label), -n),
y = n)) +
geom_col(fill = rgb2hex(0, 120, 212)) +
geom_text(aes(label = n),
vjust = -1,
fontface = "bold",
size = 4)+
theme_wpa_basic() +
theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
labs(title = paste("People by", camel_clean(hrvar_label))) +
scale_y_continuous(limits = c(0, max(summary_table$n) * 1.1)) +
xlab(camel_clean(hrvar_label)) +
ylab("Number of employees")
} else {
stop("Please enter a valid input for `return`.")
}
}
#' @rdname hrvar_count
#' @export
analysis_scope <- hrvar_count
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/hrvar_count.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Create count of distinct fields and percentage of employees with
#' missing values for all HR variables
#'
#' @description `r lifecycle::badge('experimental')`
#'
#' This function enables you to create a summary table to validate
#' organizational data. This table will provide a summary of the data found in
#' the Viva Insights _Data sources_ page. This function will return a summary
#' table with the count of distinct fields per HR attribute and the percentage
#' of employees with missing values for that attribute. See `hrvar_count()`
#' function for more detail on the specific HR attribute of interest.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param n_var number of HR variables to include in report as rows. Default is
#' set to 50 HR variables.
#' @param return String to specify what to return
#' @param threshold The max number of unique values allowed for any attribute.
#' Default is 100.
#' @param maxna The max percentage of NAs allowable for any column. Default is
#' 20.
#'
#' @import dplyr
#'
#' @family Data Validation
#'
#' @examples
#' # Return a summary table of all HR attributes
#' hrvar_count_all(pq_data, return = "table")
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `'message'`: default. Outputs a message indicating which values are
#'   beyond the specified thresholds.
#' - `'table'`: data frame. A summary table listing the number of distinct
#'   fields and percentage of missing values for the specified number of HR
#'   attributes will be returned.
#' - `'text'`: string containing the diagnostic message.
#'
#' @export
hrvar_count_all <- function(data,
n_var = 50,
return = "message",
threshold = 100,
maxna = 20
){
## Character vector of HR attributes
extracted_chr <- extract_hr(
data,
return = "names",
max_unique = threshold,
exclude_constants = FALSE
)
summary_table_n <-
data %>%
select(PersonId, extracted_chr) %>%
summarise_at(vars(extracted_chr), ~n_distinct(.,na.rm = TRUE)) # Excludes NAs from unique count
## Note: WPA here is used for a matching separator
results <-
data %>%
select(PersonId, extracted_chr) %>%
summarise_at(vars(extracted_chr),
list(`WPAn_unique` = ~n_distinct(., na.rm = TRUE), # Excludes NAs from unique count
`WPAper_na` = ~(sum(is.na(.))/ nrow(data) * 100),
`WPAsum_na` = ~sum(is.na(.)) # Number of missing values
)) %>% # % of missing values
tidyr::gather(attribute, values) %>%
tidyr::separate(col = attribute, into = c("attribute", "calculation"), sep = "_WPA") %>%
tidyr::spread(calculation, values)
  ## Collect diagnostic messages in a vector, then join with newlines
  all_messages <- character(0)
  if(sum(results$n_unique >= threshold) == 0){
    all_messages <- c(all_messages,
                      paste("No attributes have greater than", threshold, "unique values."))
  }
  if(sum(results$per_na >= maxna) == 0){
    all_messages <- c(all_messages,
                      paste("No attributes have more than", maxna, "percent NA values."))
  }
  for (i in seq_len(nrow(results))) {
    if(results$n_unique[i] >= threshold){
      all_messages <- c(all_messages,
                        paste0("The attribute '", results$attribute[i],
                               "' has a large amount of unique values. Please check."))
    }
    if(results$per_na[i] >= maxna){
      all_messages <- c(all_messages,
                        paste0("The attribute '", results$attribute[i],
                               "' has a large amount of NA values. Please check."))
    }
  }
  printMessage <- paste(all_messages, collapse = "\n")
if(return == "table"){
results <-
results %>%
select(Attributes = "attribute",
`Unique values` = "n_unique",
`Total missing values` = "sum_na",
`% missing values` = "per_na")
return(utils::head(results, n_var))
} else if(return == "text"){
printMessage
} else if(return == "message"){
message(printMessage)
} else {
stop("Error: please check inputs for `return`")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/hrvar_count_all.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Track count of distinct people over time in a specified HR variable
#'
#' @description
#' This function provides a week by week view of the count of the distinct
#' people by the specified HR attribute. The default behaviour is to return a
#' week by week heatmap bar plot.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param hrvar HR Variable by which to split metrics, defaults to
#' "Organization" but accepts any character vector, e.g. "LevelDesignation".
#' If a vector with more than one value is provided, the HR attributes are
#' automatically concatenated.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: 'ggplot' object containing a bar plot.
#' - `"table"`: data frame containing a count table.
#'
#' @import ggplot2
#' @import dplyr
#' @importFrom data.table ":=" "%like%" "%between%"
#'
#' @family Visualization
#' @family Data Validation
#'
#' @examples
#' # Return a bar plot
#' hrvar_trend(pq_data, hrvar = "LevelDesignation")
#'
#' # Return a summary table
#' hrvar_trend(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#'@export
hrvar_trend <- function(data,
hrvar = "Organization",
return = "plot"){
## Allow multiple HRvar inputs
if(length(hrvar) > 1){
hrvar_flat <- paste(hrvar, collapse = ", ")
summary_table <-
data %>%
select(PersonId, MetricDate, all_of(hrvar)) %>%
      mutate(!!sym(hrvar_flat) := select(., all_of(hrvar)) %>%
apply(1, paste, collapse = ", ")) %>%
group_by(MetricDate, !!sym(hrvar_flat)) %>%
summarise(n = n_distinct(PersonId)) %>%
arrange(desc(n))
# Single reference for single and multiple org attributes
hrvar_label <- hrvar_flat
} else {
summary_table <-
data %>%
select(PersonId, MetricDate, all_of(hrvar)) %>%
group_by(MetricDate, !!sym(hrvar)) %>%
summarise(n = n_distinct(PersonId)) %>%
arrange(desc(n))
# Single reference for single and multiple org attributes
hrvar_label <- hrvar
}
if(return == "table"){
summary_table %>%
mutate(PersonId = "") %>%
create_trend(metric = "n",
hrvar = hrvar,
mingroup = 0,
return = "table")
} else if(return == "plot"){
## This is re-run to enable multi-attribute grouping without concatenation
summary_table %>%
mutate(PersonId="") %>%
create_trend(metric = "n",
hrvar = hrvar,
mingroup = 0,
return = "plot",
legend_title = "Number of employees") +
labs(title = "Employees over time",
subtitle = paste0("Dynamics by ", tolower(camel_clean(hrvar))))
} else {
stop("Please enter a valid input for `return`.")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/hrvar_trend.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify employees who have churned from the dataset
#'
#' @description
#' This function identifies and counts the number of employees who have churned
#' from the dataset by measuring whether an employee who is present in the first
#' `n` (n1) weeks of the data is present in the last `n` (n2) weeks of the data.
#'
#' @details
#' An additional use case of this function is the ability to identify
#' "new-joiners" by using the argument `flip`.
#'
#' @param data A Person Query as a data frame. Must contain a `PersonId`.
#' @param n1 A numeric value specifying the number of weeks at the beginning of
#' the period that defines the measured employee set. Defaults to 6.
#' @param n2 A numeric value specifying the number of weeks at the end of the
#' period to calculate whether employees have churned from the data. Defaults
#' to 6.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"message"` (default)
#' - `"text"`
#' - `"data"`
#'
#' See `Value` for more information.
#'
#' @param flip Logical, defaults to FALSE. This determines whether to reverse
#' the logic of identifying the non-overlapping set. If set to `TRUE`, this
#' effectively identifies new-joiners, or those who were not present in the
#' first n weeks of the data but were present in the final n weeks.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"message"`: Message on console. A diagnostic message.
#' - `"text"`: String. A diagnostic message.
#' - `"data"`: Character vector containing the the `PersonId` of
#' employees who have been identified as churned.
#'
#' @details
#' If an employee is present in the first `n` weeks of the data but not present
#' in the last `n` weeks of the data, the function considers the employee as
#' churned. As the measurement period is defined by the number of weeks from the
#' start and the end of the passed data frame, you may consider filtering the
#' dates accordingly before running this function.
#'
#' Another assumption that is in place is that any employee whose `PersonId` is
#' not available in the data has churned. Note that there may be other reasons
#' why an employee's `PersonId` may not be present, e.g. maternity/paternity
#' leave, removal of the Viva Insights license, or a shift to a
#' low-collaboration role (to the extent that they become inactive).
#'
#' @family Data Validation
#'
#' @examples
#' pq_data %>% identify_churn(n1 = 3, n2 = 3, return = "message")
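#'
#' # Set `flip = TRUE` to identify new-joiners instead of churned employees
#' pq_data %>% identify_churn(n1 = 3, n2 = 3, return = "message", flip = TRUE)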
#'
#' @export
identify_churn <- function(data,
n1 = 6,
n2 = 6,
return = "message",
flip = FALSE){
  data$MetricDate <- as.Date(data$MetricDate, format = "%m/%d/%Y") # Ensure correct format
  unique_dates <- unique(data$MetricDate) # Vector of unique dates
# First and last n weeks
firstnweeks <- sort(unique_dates)[1:n1]
lastnweeks <- sort(unique_dates, decreasing = TRUE)[1:n2]
## People in the first week
first_peeps <-
data %>%
dplyr::filter(MetricDate %in% firstnweeks) %>%
dplyr::pull(PersonId) %>%
unique()
## People in the last week
final_peeps <-
data %>%
dplyr::filter(MetricDate %in% lastnweeks) %>%
dplyr::pull(PersonId) %>%
unique()
if(flip == FALSE){
## In first, not in last
churner_id <- setdiff(first_peeps, final_peeps)
## Message
printMessage <-
paste0("Churn:\nThere are ", length(churner_id),
" employees from ", min(firstnweeks), " to ",
max(firstnweeks), " (", n1, " weeks)",
" who are no longer present in ",
min(lastnweeks), " to ", max(lastnweeks),
" (", n2, " weeks).")
} else if(flip == TRUE){
## In last, not in first
## new joiners
churner_id <- dplyr::setdiff(final_peeps, first_peeps)
## Message
printMessage <-
paste0("New joiners:\nThere are ", length(churner_id),
" employees from ", min(lastnweeks), " to ",
max(lastnweeks), " (", n2, " weeks)",
" who were not present in ",
min(firstnweeks), " to ", max(firstnweeks),
" (", n1, " weeks).")
} else {
stop("Invalid argument for `flip`")
}
if(return == "message"){
message(printMessage)
} else if(return == "text"){
printMessage
} else if(return == "data"){
churner_id
} else {
stop("Invalid `return`")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_churn.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify date frequency based on a series of dates
#'
#' @description
#' `r lifecycle::badge('experimental')`
#'
#' Takes a vector of dates and identifies whether the frequency is 'daily',
#' 'weekly', or 'monthly'. The primary use case for this function is to provide
#' an accurate description of the query type used and for raising errors should
#' a wrong date grouping be used in the data input.
#'
#' @param x Vector containing a series of dates.
#'
#' @details
#' Date frequency detection works as follows:
#' - If at least three days of the week are present (e.g., Monday, Wednesday,
#' Thursday) in the series, then the series is classified as 'daily'
#' - If the number of distinct months in the series is equal to the length of
#' the series, then the series is classified as 'monthly'
#' - If the total number of Sundays in the series is equal to the length of
#' the series, then the series is classified as 'weekly'
#'
#' @section Limitations:
#' One of the assumptions made behind the classification is that weeks are
#' denoted with Sundays, hence the count of sundays to measure the number of
#' weeks. In this case, weeks where a Sunday is missing would result in an
#' 'unable to classify' error.
#'
#' Another assumption made is that dates are evenly distributed, i.e. that the
#' gap between dates are equal. If dates are unevenly distributed, e.g. only two
#' days of the week are available for a given week, then the algorithm will fail
#' to identify the frequency as 'daily'.
#'
#' @return
#' String describing the detected date frequency, i.e.:
#' - `'daily'`
#' - `'weekly'`
#' - `'monthly'`
#'
#' @examples
#' start_date <- as.Date("2022/06/26")
#' end_date <- as.Date("2022/11/27")
#'
#' # Daily
#' day_seq <-
#' seq.Date(
#' from = start_date,
#' to = end_date,
#' by = "day"
#' )
#'
#' identify_datefreq(day_seq)
#'
#' # Weekly
#' week_seq <-
#' seq.Date(
#' from = start_date,
#' to = end_date,
#' by = "week"
#' )
#'
#' identify_datefreq(week_seq)
#'
#' # Monthly
#' month_seq <-
#' seq.Date(
#' from = start_date,
#' to = end_date,
#' by = "month"
#' )
#' identify_datefreq(month_seq)
#'
#' @export
identify_datefreq <- function(x){
# Data frame for checking
date_df <- data.frame(
weekdays = names(table(weekdays(x))),
n = as.numeric(table(weekdays(x)))
)
dweekchr <- c(
"Sunday",
"Saturday",
"Monday",
"Tuesday",
"Wednesday",
"Thursday",
"Friday"
)
# At least 3 days of the week must be present
  check_wdays <- sum(dweekchr %in% date_df$weekdays) >= 3
# Check number of Sundays - should equal number of weeks if weekly
check_nsun <- sum(date_df$n[date_df$weekdays == "Sunday"])
  ifelse(
    length(unique(months(x))) == length(x),
    "monthly",
    ifelse(
      check_nsun == length(x),
      "weekly",
      ifelse(
        check_wdays,
        "daily",
        "Unable to identify date frequency."
      )
    )
  )
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_datefreq.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify Holiday Weeks based on outliers
#'
#' @description
#' This function scans a standard query output for weeks where collaboration
#' hours is far outside the mean. Returns a list of weeks that appear to be
#' holiday weeks and optionally an edited dataframe with outliers removed. By
#' default, missing values are excluded.
#'
#' As best practice, run this function prior to any analysis to remove atypical
#' collaboration weeks from your dataset.
#'
#' @template ch
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param sd The standard deviation below the mean for collaboration hours that
#' should define an outlier week. Enter a positive number. Default is 1
#' standard deviation.
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"message"` (default)
#' - `"data"`
#' - `"data_cleaned"`
#' - `"data_dirty"`
#' - `"plot"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"message"`: message on console. a message is printed identifying holiday
#' weeks.
#' - `"data"`: data frame. A dataset with outlier weeks flagged in a new
#' column is returned as a dataframe.
#' - `"data_cleaned"`: data frame. A dataset with outlier weeks removed is
#' returned.
#' - `"data_dirty"`: data frame. A dataset with only outlier weeks is
#' returned.
#' - `"plot"`: ggplot object. A line plot of Collaboration Hours with holiday
#' weeks highlighted.
#'
#'
#' @import dplyr
#' @import ggplot2
#' @importFrom methods is
#'
#' @family Data Validation
#'
#' @examples
#' # Return a message by default
#' identify_holidayweeks(pq_data)
#'
#' # Return plot
#' identify_holidayweeks(pq_data, return = "plot")
#'
#' @export
identify_holidayweeks <- function(data, sd = 1, return = "message"){
## Ensure date is formatted
if(all(is_date_format(data$MetricDate))){
data$MetricDate <- as.Date(data$MetricDate, format = "%m/%d/%Y")
} else if(is(data$MetricDate, "Date")){
# Do nothing
} else {
stop("`MetricDate` appears not to be properly formatted.\n
It needs to be in the format MM/DD/YYYY.\n
Also check for missing values or stray values with inconsistent formats.")
}
Calc <-
data %>%
group_by(MetricDate) %>%
summarize(mean_collab = mean(Collaboration_hours, na.rm = TRUE),.groups = 'drop') %>%
mutate(z_score = (mean_collab - mean(mean_collab, na.rm = TRUE))/ sd(mean_collab, na.rm = TRUE))
Outliers <- (Calc$MetricDate[Calc$z_score < -sd])
mean_collab_hrs <- mean(Calc$mean_collab, na.rm = TRUE)
Message <- paste0("The weeks where collaboration was ",
sd,
" standard deviations below the mean (",
round(mean_collab_hrs, 1),
") are: \n",
paste(wrap(Outliers, wrapper = "`"),collapse = ", "))
myTable_plot <-
data %>%
mutate(holidayweek = (MetricDate %in% Outliers)) %>%
select("MetricDate", "holidayweek", "Collaboration_hours") %>%
group_by(MetricDate) %>%
summarise(
Collaboration_hours = mean(Collaboration_hours),
holidayweek = first(holidayweek)) %>%
mutate(MetricDate = as.Date(MetricDate, format = "%m/%d/%Y"))
myTable_plot_shade <-
myTable_plot %>%
filter(holidayweek == TRUE) %>%
mutate(min = MetricDate - 3 , max = MetricDate + 3 , ymin = -Inf, ymax = +Inf)
plot <-
myTable_plot %>%
ggplot(aes(x = MetricDate, y = Collaboration_hours, group = 1)) +
geom_line(colour = "grey40") +
theme_wpa_basic() +
geom_rect(data = myTable_plot_shade,
aes(xmin = min,
xmax = max,
ymin = ymin,
ymax = ymax),
color = "transparent",
fill = "steelblue",
alpha = 0.3) +
labs(title = "Holiday Weeks",
subtitle = "Showing average collaboration hours over time")+
ylab("Collaboration Hours") +
xlab("Date") +
ylim(0, NA) # Set origin to zero
if(return == "text"){
return(Message)
} else if(return == "message"){
message(Message)
} else if(return %in% c("data_clean", "data_cleaned")){
data %>% filter(!(MetricDate %in% Outliers))
} else if(return == "data_dirty"){
data %>% filter((MetricDate %in% Outliers))
} else if(return == "data"){
data %>% mutate(holidayweek = (MetricDate %in% Outliers))
} else if(return == "plot"){
return(plot)
} else {
stop("Invalid input for `return`.")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_holidayweeks.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify Inactive Weeks
#'
#' @description
#' This function scans a standard query output for weeks where collaboration
#' hours is far outside the mean for any individual person in the dataset.
#' Returns a list of weeks that appear to be inactive weeks and optionally an
#' edited dataframe with outliers removed.
#'
#' As best practice, run this function prior to any analysis to remove atypical
#' collaboration weeks from your dataset.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param sd The standard deviation below the mean for collaboration hours that
#'   should define an outlier week. Enter a positive number. Default is 2
#'   standard deviations.
#' @param return String specifying what to return. This must be one of the
#'   following strings:
#' - `"text"`
#' - `"data_cleaned"`
#' - `"data_dirty"`
#' - `"data"`
#'
#' See `Value` for more information.
#'
#' @import dplyr
#'
#' @family Data Validation
#'
#' @return
#' A string containing a diagnostic message is returned by default, when
#' `'text'` is passed. When `'data_cleaned'` is passed, a dataset with outlier
#' weeks removed is returned as a dataframe. When `'data_dirty'` is passed, a
#' dataset with only outlier weeks is returned. When `'data'` is passed, the
#' full dataset with an additional `inactiveweek` flag column is returned.
#'
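#' @examples
#' # Illustrative call: return a diagnostic message as text
#' identify_inactiveweeks(pq_data, sd = 2, return = "text")
#'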
#' @export
identify_inactiveweeks <- function(data, sd = 2, return = "text"){
init_data <-
data %>%
group_by(PersonId) %>%
mutate(z_score = (Collaboration_hours - mean(Collaboration_hours))/sd(Collaboration_hours))
Calc <-
init_data %>%
filter(z_score <= -sd) %>%
select(PersonId, MetricDate, z_score) %>%
data.frame()
pop_mean <-
data %>%
dplyr::mutate(Total = "Total") %>%
create_bar(metric = "Collaboration_hours",
hrvar = "Total",
return = "table") %>%
dplyr::pull(Collaboration_hours) %>%
round(digits = 1)
Message <- paste0("There are ", nrow(Calc), " rows of data with weekly collaboration hours more than ",
sd," standard deviations below the mean (", pop_mean, ").")
if(return == "text"){
return(Message)
} else if(return == "data_dirty"){
init_data %>%
filter(z_score <= -sd) %>%
select(-z_score) %>%
data.frame()
} else if(return == "data_cleaned"){
init_data %>%
filter(z_score > -sd) %>%
select(-z_score) %>%
data.frame()
} else if(return == "data"){
init_data %>%
mutate(inactiveweek = (z_score<= -sd)) %>%
select(-z_score) %>%
data.frame()
} else {
stop("Error: please check inputs for `return`")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_inactiveweeks.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify Non-Knowledge workers in a Person Query using Collaboration
#' Hours
#'
#' @description
#' This function scans a standard query output to identify employees with
#' consistently low collaboration signals. Returns the % of non-knowledge
#' workers identified by Organization, and optionally an edited data frame with
#' non-knowledge workers removed, or the full data frame with the kw/nkw flag
#' added.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param collab_threshold Positive numeric value representing the collaboration
#' hours threshold that should be exceeded as an average for the entire
#' analysis period for the employee to be categorized as a knowledge worker
#' ("kw"). Default is set to 5 collaboration hours. Any versions after v1.4.3,
#' this uses a "greater than or equal to" logic (`>=`), in which case persons
#' with exactly 5 collaboration hours will pass.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"text"`
#' - `"data_with_flag"`
#' - `"data_clean"`
#' - `"data_summary"`
#'
#' See `Value` for more information.
#'
#' @import dplyr
#'
#' @family Data Validation
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"text"`: string. Returns a diagnostic message.
#' - `"data_with_flag"`: data frame. Original input data with an additional
#' column containing the `kw`/`nkw` flag.
#' - `"data_clean"`: data frame. Data frame with non-knowledge workers
#' excluded.
#' - `"data_summary"`: data frame. A summary table by organization listing
#' the number and % of non-knowledge workers.
#'
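#' @examples
#' # Illustrative call: summary of non-knowledge workers by organization
#' identify_nkw(pq_data, collab_threshold = 5, return = "data_summary")
#'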
#' @export
identify_nkw <- function(data, collab_threshold = 5, return = "data_summary"){
summary_byPersonId <-
data %>%
group_by(PersonId, Organization) %>%
summarize(mean_collab = mean(Collaboration_hours), .groups = "drop")%>%
mutate(flag_nkw = case_when(mean_collab >= collab_threshold ~ "kw",
TRUE ~ "nkw"))
data_with_flag <-
left_join(data,
summary_byPersonId %>%
dplyr::select(PersonId,flag_nkw),
by = 'PersonId')
summary_byOrganization <-
summary_byPersonId %>%
group_by(Organization, flag_nkw)%>%
summarise(total = n(), .groups = "drop")%>%
group_by(Organization)%>%
mutate(perc = total/sum(total))%>% #need to format to %
filter(flag_nkw == "nkw")%>%
rename(n_nkw = total, perc_nkw = perc)%>%
select(-flag_nkw) %>%
ungroup()
## Number of NKW identified
n_nkw <- sum(summary_byPersonId$flag_nkw == "nkw")
if(n_nkw == 0){
flagMessage <- paste0("[Pass] There are no non-knowledge workers identified",
" (average collaboration hours below ",
collab_threshold,
" hours).")
} else {
flagMessage <-
paste0("[Warning] Out of a population of ", n_distinct(data$PersonId),
", there are ", n_nkw,
" employees who may be non-knowledge workers (average collaboration hours below ",
collab_threshold, " hours).")
}
if(return == "data_with_flag"){
return(data_with_flag)
} else if(return %in% c("data_clean", "data_cleaned")){
data_with_flag %>%
filter(flag_nkw == "kw")
} else if(return == "text"){
flagMessage
} else if(return =="data_summary"){
summary_byOrganization %>%
mutate(perc_nkw = scales::percent(perc_nkw, accuracy = 1)) %>%
rename(`Non-knowledge workers (count)` = "n_nkw",
`Non-knowledge workers (%)` = "perc_nkw")
} else {
stop("Error: please check inputs for `return`")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_nkw.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify metric outliers over a date interval
#'
#' @description This function takes in a selected metric and uses
#' z-score (number of standard deviations) to identify outliers
#' across time. There are applications in this for identifying
#' weeks with abnormally low collaboration activity, e.g. holidays.
#' Time as a grouping variable can be overridden with the `group_var`
#' argument.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param group_var A string with the name of the grouping variable.
#'   Defaults to `MetricDate`.
#' @param metric Character string containing the name of the metric,
#' e.g. "Collaboration_hours"
#'
#' @import dplyr
#'
#' @examples
#' identify_outlier(pq_data, metric = "Collaboration_hours")
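#'
#' # Override the grouping variable (illustrative; assumes the standard
#' # `Emails_sent` metric is present)
#' identify_outlier(pq_data, group_var = "Organization", metric = "Emails_sent")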
#'
#' @return
#' Returns a data frame with `MetricDate` (if grouping variable is not set),
#' the metric, and the corresponding z-score.
#'
#' @family Data Validation
#'
#' @export
identify_outlier <- function(data,
group_var = "MetricDate",
metric = "Collaboration_hours"){
## Check inputs
required_variables <- c(group_var,
"PersonId",
metric)
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
main_table <-
data %>%
group_by(!!sym(group_var)) %>%
summarise_at(vars(!!sym(metric)), ~mean(.)) %>%
mutate(zscore = (!!sym(metric) - mean(!!sym(metric)))/sd(!!sym(metric)))
return(main_table)
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_outlier.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify groups under privacy threshold
#'
#' @description
#' This function scans a standard query output for groups of employees under
#' the privacy threshold. The method consists of reviewing each individual
#' HR attribute and counting the distinct people within each group.
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param hrvar A list of HR Variables to consider in the scan.
#' Defaults to all HR attributes identified.
#' @param mingroup Numeric value setting the privacy threshold / minimum group
#' size. Defaults to 5.
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"table"`
#' - `"text"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"table"`: data frame. A summary table of groups that fall below the
#' privacy threshold.
#' - `"text"`: string. A diagnostic message.
#'
#' @examples
#' # Return a summary table
#' pq_data %>% identify_privacythreshold(return = "table")
#'
#' # Return a diagnostic message
#' pq_data %>% identify_privacythreshold(return = "text")
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @import scales
#' @importFrom stats reorder
#'
#' @family Data Validation
#'
#' @export
identify_privacythreshold <- function(data,
hrvar = extract_hr(data),
mingroup = 5,
return = "table"){
results <-
data %>% hrvar_count(
hrvar = hrvar[1],
return = "table")
results$hrvar <- ""
results <- results[0,]
for (p in hrvar) {
table1 <-
data %>%
hrvar_count(hrvar = p,
return = "table")
table1$hrvar <- p
colnames(table1)[1] <- "group"
results <- rbind(results,table1)
}
output <- results %>% arrange(n) %>% select(hrvar, everything())
groups_under <- results %>% filter(n<mingroup) %>% nrow()
MinGroupFlagMessage_Warning <- paste0("[Warning] There are ", groups_under, " groups under the minimum group size privacy threshold of ", mingroup, ".")
MinGroupFlagMessage_Low <- paste0("[Pass] There is only ", groups_under, " group under the minimum group size privacy threshold of ", mingroup, ".")
MinGroupFlagMessage_Zero <- paste0("[Pass] There are no groups under the minimum group size privacy threshold of ", mingroup, ".")
if(groups_under > 1){
MinGroupFlagMessage <- MinGroupFlagMessage_Warning
} else if(groups_under == 1 ){
MinGroupFlagMessage <- MinGroupFlagMessage_Low
} else if(groups_under ==0){
MinGroupFlagMessage <- MinGroupFlagMessage_Zero
}
if(return == "table"){
return(output)
} else if(return == "message"){
message(MinGroupFlagMessage)
} else if(return == "text"){
MinGroupFlagMessage
} else {
stop("Invalid `return` argument.")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_privacythreshold.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title
#' Identify shifts based on Outlook time settings for work day start and end
#' time
#'
#' @description
#' This function uses Outlook calendar settings for start and end time of work
#' day to identify work shifts. The relevant variables are
#' `WorkingStartTimeSetInOutlook` and `WorkingEndTimeSetInOutlook`.
#'
#'
#' @param data A data frame containing data from the Hourly Collaboration query.
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"plot"`
#' - `"table"`
#' - `"data"`
#'
#' See `Value` for more information.
#'
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"plot"`: ggplot object. A bar plot for the weekly count of shifts.
#' - `"table"`: data frame. A summary table for the count of shifts.
#' - `"data`: data frame. Input data appended with the `Shifts` columns.
#'
#' @importFrom data.table ":=" "%like%" "%between%"
#'
#' @family Data Validation
#' @family Working Patterns
#'
#' @examples
#' # Demo with `pq_data` example where Outlook Start and End times are imputed
#' spq_df <- pq_data
#'
#' spq_df$WorkingStartTimeSetInOutlook <- "6:30"
#'
#' spq_df$WorkingEndTimeSetInOutlook <- "23:30"
#'
#' # Return plot
#' spq_df %>% identify_shifts()
#'
#' # Return summary table
#' spq_df %>% identify_shifts(return = "table")
#'
#' @export
identify_shifts <- function(data, return = "plot"){
clean_times <- function(x){
out <- gsub(pattern = ":00", replacement = "", x = x)
as.numeric(out)
}
  data <- data.table::as.data.table(data)
  # Make sure data.table knows we know we're using it
  .datatable.aware = TRUE
  data[, Shifts := paste(WorkingStartTimeSetInOutlook, WorkingEndTimeSetInOutlook, sep = "-")]
outputTable <- data[, list(WeekCount = .N,
PersonCount = dplyr::n_distinct(PersonId)), by = Shifts]
outputTable <- data.table::setorder(outputTable, -PersonCount)
if(return == "table"){
dplyr::as_tibble(outputTable)
} else if(return == "plot"){
outputTable %>%
utils::head(10) %>%
create_bar_asis(group_var = "Shifts",
bar_var = "WeekCount",
title = "Most frequent outlook shifts",
subtitle = "Showing top 10 only",
caption = extract_date_range(data, return = "text"),
ylab = "Shifts",
xlab = "Frequency")
} else if(return == "data"){
output_data <- data
dplyr::as_tibble(output_data)
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_shifts.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Tenure calculation based on different input dates, returns data
#' summary table or histogram
#'
#' @description
#' This function calculates employee tenure based on different input dates.
#' `identify_tenure` uses the latest date available if the user selects
#' `"MetricDate"`, but also has the flexibility to accept a specific date,
#' e.g. `"1/1/2020"`.
#'
#' @family Data Validation
#'
#' @param data A Standard Person Query dataset in the form of a data frame.
#' @param end_date A string specifying the name of the date variable
#' representing the latest date. Defaults to "MetricDate".
#' @param beg_date A string specifying the name of the date variable
#' representing the hire date. Defaults to "HireDate".
#' @param maxten A numeric value representing the maximum tenure.
#' If the tenure exceeds this threshold, it would be accounted for in the flag message.
#'
#' @param return String specifying what to return. This must be one of the
#' following strings:
#' - `"message"`
#' - `"text"`
#' - `"plot"`
#' - `"data_cleaned"`
#' - `"data_dirty"`
#' - `"data"`
#'
#' See `Value` for more information.
#' @return
#' A different output is returned depending on the value passed to the `return`
#' argument:
#' - `"message"`: message on console with a diagnostic message.
#' - `"text"`: string containing a diagnostic message.
#' - `"plot"`: 'ggplot' object. A line plot showing tenure.
#' - `"data_cleaned"`: data frame filtered only by rows with tenure values
#' lying within the threshold.
#' - `"data_dirty"`: data frame filtered only by rows with tenure values
#' lying outside the threshold.
#' - `"data"`: data frame with the `PersonId` and a calculated variable called
#' `TenureYear` is returned.
#'
#'
#' @examples
#' library(dplyr)
#' # Add HireDate to `pq_data`
#' pq_data2 <-
#' pq_data %>%
#' mutate(HireDate = as.Date("1/1/2015", format = "%m/%d/%Y"))
#'
#' identify_tenure(pq_data2)
#'
#' @export
identify_tenure <- function(data,
end_date = "MetricDate",
beg_date = "HireDate",
maxten = 40,
return = "message"){
required_variables <- c("HireDate")
## Error message if variables are not present
## Nothing happens if all present
data %>%
check_inputs(requirements = required_variables)
data_prep <-
data %>%
mutate(MetricDate = as.Date(MetricDate, format = "%m/%d/%Y"), # Re-format `MetricDate`
end_date = as.Date(!!sym(end_date), format= "%m/%d/%Y"), # Access a symbol, not a string
beg_date = as.Date(!!sym(beg_date), format= "%m/%d/%Y")) %>% # Access a symbol, not a string
arrange(end_date) %>%
mutate(End = last(end_date))
last_date <- data_prep$End
# graphing data
tenure_summary <-
data_prep %>%
filter(MetricDate == last_date) %>%
mutate(tenure_years = (MetricDate - beg_date)/365) %>%
group_by(tenure_years)%>%
summarise(n = n(),.groups = 'drop')
  # Person IDs with tenure exceeding the threshold
oddpeople <-
data_prep %>%
filter(MetricDate == last_date) %>%
mutate(tenure_years = (MetricDate - beg_date)/365) %>%
filter(tenure_years >= maxten) %>%
select(PersonId)
# message
Message <- paste0("The mean tenure is ",round(mean(tenure_summary$tenure_years,na.rm = TRUE),1)," years.\nThe max tenure is ",
round(max(tenure_summary$tenure_years,na.rm = TRUE),1),".\nThere are ",
length(tenure_summary$tenure_years[tenure_summary$tenure_years>=maxten])," employees with a tenure greater than ",maxten," years.")
if(return == "text"){
return(Message)
} else if(return == "message"){
message(Message)
} else if(return == "plot"){
suppressWarnings(
ggplot(data = tenure_summary,aes(x = as.numeric(tenure_years))) +
geom_density() +
labs(title = "Tenure - Density",
subtitle = "Calculated with `HireDate`") +
xlab("Tenure in Years") +
ylab("Density - number of employees") +
theme_wpa_basic()
)
} else if(return == "data_cleaned"){
return(data %>% filter(!(PersonId %in% oddpeople$PersonId)) %>% data.frame())
} else if(return == "data_dirty"){
return(data %>% filter((PersonId %in% oddpeople$PersonId)) %>% data.frame())
} else if(return == "data"){
data_prep %>%
filter(MetricDate == last_date) %>%
mutate(TenureYear = as.numeric((MetricDate - beg_date)/365)) %>%
select(PersonId, TenureYear)
} else {
stop("Error: please check inputs for `return`")
}
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/identify_tenure.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Import a query from Viva Insights Analyst Experience
#'
#' @description Import a Viva Insights Query from a .csv file, with variable
#' classifications optimised for other functions in the package.
#'
#' @details `import_query()` uses `data.table::fread()` to import .csv files for
#' speed, and by default `stringsAsFactors` is set to FALSE. A data frame is
#' returned by the function (not a `data.table`). Column names are automatically
#' cleaned, replacing spaces and special characters with underscores.
#'
#' @param x String containing the path to the Viva Insights query to be
#' imported. The input file must be a .csv file, and the file extension must
#' be explicitly entered, e.g. `"/files/standard query.csv"`
#'
#' @param pid String specifying the unique person or individual identifier
#' variable. `import_query` renames this to `PersonId` so that this is
#' compatible with other functions in the package. Defaults to `NULL`, where
#' no action is taken.
#' @param dateid String specifying the date variable. `import_query` renames
#' this to `MetricDate` so that this is compatible with other functions in the
#' package. Defaults to `NULL`, where no action is taken.
#' @param date_format String specifying the date format for converting any
#' variable that may be a date to a Date variable. Defaults to `"%m/%d/%Y"`.
#' @param convert_date Logical. Defaults to `TRUE`. When set to `TRUE`, any
#' variable that matches true with `is_date_format()` gets converted to a Date
#' variable. When set to `FALSE`, this step is skipped.
#'
#' @param encoding String to specify encoding to be used within
#' `data.table::fread()`. See `data.table::fread()` documentation for more
#' information. Defaults to `'UTF-8'`.
#'
#' @return A `tibble` is returned.
#'
#' @family Import and Export
#'
#' @export
import_query <- function(x,
pid = NULL,
dateid = NULL,
date_format = "%m/%d/%Y",
convert_date = TRUE,
encoding = 'UTF-8') {
# import data
return_data <-
data.table::fread(x,
stringsAsFactors = FALSE,
encoding = encoding) %>%
as.data.frame()
if(convert_date == TRUE){
# Columns which are Dates
dateCols <- sapply(return_data, function(x) all(is_date_format(x) | any_idate(x)))
dateCols <- names(return_data)[dateCols == TRUE]
# Format any date columns
return_data <-
return_data %>%
dplyr::mutate_at(dplyr::vars(dateCols), ~as.Date(., format = date_format))
if(length(dateCols) >= 1){
message("Converted the following Date variables:\n",
paste(dateCols, collapse = ", "))
}
}
# rename specified names
if(!is.null(pid)){
names(return_data)[names(return_data) == pid] <- "PersonId"
}
if(!is.null(dateid)){
names(return_data)[names(return_data) == dateid] <- "MetricDate"
}
# clean names
names(return_data) <-
names(return_data) %>%
gsub(pattern = " ", replacement = "_", x = .) %>% # replace spaces
gsub(pattern = "-", replacement = "_", x = .) %>% # replace hyphens
gsub(pattern = "[:|,]", replacement = "_", x = .) # replace : and ,
message("Query has been imported successfully!")
dplyr::as_tibble(return_data)
}
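# Usage sketch (hypothetical file path and column names): import a query and
# standardise the person and date identifiers for use with other functions:
#
#   pq <- import_query(
#     x = "data/person_query.csv",  # hypothetical path to a .csv export
#     pid = "EmployeeId",           # hypothetical person identifier column
#     dateid = "Date"               # hypothetical date column
#   )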
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/import_query.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
.onAttach <- function(libname, pkgname) {
message <- c("\n Thank you for using the {vivainsights} R package!",
"\n \n Our analysts have taken every care to ensure that this package runs smoothly and bug-free.",
"\n \n However, if you do happen to encounter any, please report any issues at",
"\n https://github.com/microsoft/vivainsights/issues/",
"\n \n Happy coding!")
packageStartupMessage(message)
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/init.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Identify whether string is a date format
#'
#' @description
#' This function uses regular expression to determine whether a string is of the
#' format `"mdy"`, separated by `"-"`, `"/"`, or `"."`, returning a logical
#' vector.
#'
#' @param string Character string to test whether is a date format.
#'
#' @return logical value indicating whether the string is a date format.
#'
#' @examples
#' is_date_format("1/5/2020")
#'
#' @family Support
#'
#' @export
is_date_format <- function(string){
grepl("^\\d{1,2}[- /.]\\d{1,2}[- /.]\\d{1,4}$", string)
}
#' @title Identify whether variable is an IDate class.
#'
#' @description
#' This function checks whether the variable is an IDate class.
#'
#' @param x Variable to test whether an IDate class.
#'
#' @return logical value indicating whether the string is of an IDate class.
#'
#' @examples
#' any_idate("2023-12-15")
#'
#' @family Support
#'
#' @export
any_idate <- function(x){
any(class(x) %in% "IDate")
}
# Source file: /scratch/gouwar.j/cran-all/cranData/vivainsights/R/is_date_format.R
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Run a summary of Key Metrics from the Standard Person Query data
#'
#' @description
#' Returns a heatmap plot by default, with an option to return the underlying
#' summary table as a data frame.
#'
#' @template spq-params
#' @param metrics A character vector containing the variable names to calculate
#' averages of.
#' @param return Character vector specifying what to return, defaults to "plot".
#' Valid inputs are "plot" and "table".
#' @param low String specifying colour code to use for low-value metrics.
#' Arguments are passed directly to `ggplot2::scale_fill_gradient2()`.
#' @param mid String specifying colour code to use for mid-value metrics.
#' Arguments are passed directly to `ggplot2::scale_fill_gradient2()`.
#' @param high String specifying colour code to use for high-value metrics.
#' Arguments are passed directly to `ggplot2::scale_fill_gradient2()`.
#' @param textsize A numeric value specifying the text size to show in the plot.
#'
#' @import dplyr
#' @import ggplot2
#' @import reshape2
#' @importFrom stats reorder
#'
#' @return
#' Returns a ggplot object by default, when `'plot'` is passed in `return`.
#' When `'table'` is passed, a summary table is returned as a data frame.
#'
#' @family Visualization
#'
#' @examples
#' # Heatmap plot is returned by default
#' keymetrics_scan(pq_data)
#'
#' # Heatmap plot with custom colours
#' keymetrics_scan(pq_data, low = "purple", high = "yellow")
#'
#' # Return summary table
#' keymetrics_scan(pq_data, hrvar = "LevelDesignation", return = "table")
#'
#' @export
keymetrics_scan <- function(data,
hrvar = "Organization",
mingroup = 5,
metrics = c("Workweek_span",
"Collaboration_hours",
"After_hours_collaboration_hours",
"Meetings",
"Meeting_hours",
"After_hours_meeting_hours",
"Low_quality_meeting_hours",
"Meeting_hours_with_manager_1_on_1",
"Meeting_hours_with_manager",
"Emails_sent",
"Email_hours",
"After_hours_email_hours",
"Generated_workload_email_hours",
"Total_focus_hours",
"Internal_network_size",
"Networking_outside_organization",
"External_network_size",
"Networking_outside_company"),
return = "plot",
low = rgb2hex(7, 111, 161),
mid = rgb2hex(241, 204, 158),
high = rgb2hex(216, 24, 42),
textsize = 2){
## Handling NULL values passed to hrvar
if(is.null(hrvar)){
data <- totals_col(data)
hrvar <- "Total"
}
## Omit metrics that do not exist in the dataset
metrics <- dplyr::intersect(metrics, names(data))
## Summary table
myTable <-
data %>%
rename(group = !!sym(hrvar)) %>% # Rename HRvar to `group`
group_by(group, PersonId) %>%
summarise_at(vars(metrics), ~mean(., na.rm = TRUE)) %>%
group_by(group) %>%
summarise_at(vars(metrics), ~mean(., na.rm = TRUE)) %>%
left_join(hrvar_count(data, hrvar = hrvar, return = "table") %>%
rename(Employee_Count = "n"),
by = c("group" = hrvar)) %>%
filter(Employee_Count >= mingroup) # Keep only groups above privacy threshold
myTable_wide <-
myTable %>%
reshape2::melt(id.vars = "group") %>%
reshape2::dcast(variable ~ group)
myTable_long <-
reshape2::melt(myTable, id.vars = "group") %>%
mutate(variable = factor(variable)) %>%
group_by(variable) %>%
# Heatmap by row
mutate(value_rescaled = maxmin(value)) %>%
ungroup()
plot_object <-
myTable_long %>%
filter(variable != "Employee_Count") %>%
ggplot(aes(x = group,
y = stats::reorder(variable, desc(variable)))) +
geom_tile(aes(fill = value_rescaled),
colour = "#FFFFFF",
size = 2) +
geom_text(aes(label=round(value, 1)), size = textsize) +
# Fill is contingent on max-min scaling
scale_fill_gradient2(low = low,
mid = mid,
high = high,
midpoint = 0.5,
breaks = c(0, 0.5, 1),
labels = c("Minimum", "", "Maximum"),
limits = c(0, 1)) +
scale_x_discrete(position = "top") +
scale_y_discrete(labels = us_to_space) +
theme_wpa_basic() +
theme(axis.line = element_line(color = "#FFFFFF")) +
labs(title = "Key metrics",
subtitle = paste("Weekly average by", camel_clean(hrvar)),
y = " ",
x = " ",
fill = " ",
caption = extract_date_range(data, return = "text")) +
theme(axis.text.x = element_text(angle = 90, hjust = 0),
plot.title = element_text(color = "grey40", face = "bold", size = 20))
if(return == "table"){
myTable_wide %>%
as_tibble() %>%
return()
} else if(return == "plot"){
return(plot_object)
} else {
stop("Please enter a valid input for `return`.")
}
}
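# Illustrative usage (mirrors the roxygen examples above): restrict the scan
# to a subset of metrics and return the summary table instead of the plot.
keymetrics_scan(
  pq_data,
  hrvar = "Organization",
  metrics = c("Email_hours", "Meeting_hours", "Collaboration_hours"),
  return = "table"
)
# Note (assumption): `maxmin()` is a package-internal rescaler not shown in
# this file. Given the 0-1 fill scale above, it presumably maps each row of
# metrics to [0, 1], along the lines of:
# maxmin <- function(x) {
#   (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
# }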
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/keymetrics_scan.R
|
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
#' @title Run a summary of Key Metrics without aggregation
#'
#' @description
#' Returns a heatmapped table directly from aggregated / summarised data.
#' Unlike `keymetrics_scan()`, which performs a person-level aggregation, no
#' calculation is performed by `keymetrics_scan_asis()`: the values are
#' rendered exactly as they are passed into the function.
#'
#' @param data Data frame containing data to plot. It is recommended to provide
#' data in a 'long' table format where one grouping column forms the rows, a
#' second column forms the columns, and a third numeric column forms the values
#' of the heatmapped table.
#' @param row_var String containing name of the grouping variable that will form
#' the rows of the heatmapped table.
#' @param col_var String containing name of the grouping variable that will form
#' the columns of the heatmapped table.
#' @param group_var String containing name of the grouping variable by which
#' heatmapping would apply. Defaults to `col_var`.
#' @param value_var String containing name of the value variable that will form
#' the values of the heatmapped table. Defaults to `"value"`.
#' @param title Title of the plot.
#' @param subtitle Subtitle of the plot.
#' @param caption Caption of the plot.
#' @param ylab Y-axis label for the plot.
#' @param xlab X-axis label for the plot.
#' @param rounding Numeric value specifying the number of digits to show in
#' data labels.
#' @param low String specifying colour code to use for low-value metrics.
#' Arguments are passed directly to `ggplot2::scale_fill_gradient2()`.
#' @param mid String specifying colour code to use for mid-value metrics.
#' Arguments are passed directly to `ggplot2::scale_fill_gradient2()`.
#' @param high String specifying colour code to use for high-value metrics.
#' Arguments are passed directly to `ggplot2::scale_fill_gradient2()`.
#' @param textsize A numeric value specifying the text size to show in the plot.
#'
#' @return
#' ggplot object for a heatmap table.
#'
#' @examples
#'
#' library(dplyr)
#'
#' # Compute summary table
#' out_df <-
#' pq_data %>%
#' group_by(Organization) %>%
#' summarise(
#' across(
#' .cols = c(
#' Email_hours,
#' Collaboration_hours
#' ),
#' .fns = ~median(., na.rm = TRUE)
#' ),
#' .groups = "drop"
#' ) %>%
#' tidyr::pivot_longer(
#' cols = c("Email_hours", "Collaboration_hours"),
#' names_to = "metrics"
#' )
#'
#' keymetrics_scan_asis(
#' data = out_df,
#' col_var = "metrics",
#' row_var = "Organization"
#' )
#'
#' # Show data the other way round
#' keymetrics_scan_asis(
#' data = out_df,
#' col_var = "Organization",
#' row_var = "metrics",
#' group_var = "metrics"
#' )
#'
#' @export
#'
keymetrics_scan_asis <- function(
data,
row_var,
col_var,
group_var = col_var,
value_var = "value",
title = NULL,
subtitle = NULL,
caption = NULL,
ylab = row_var,
xlab = "Metrics",
rounding = 1,
low = rgb2hex(7, 111, 161),
mid = rgb2hex(241, 204, 158),
high = rgb2hex(216, 24, 42),
textsize = 2
){
# Rescale values within each group for heatmapping (data assumed to be long)
myTable_long <-
data %>%
mutate(!!sym(group_var) := factor(!!sym(group_var))) %>%
group_by(!!sym(group_var)) %>%
# Heatmap by row
mutate(value_rescaled = maxmin(!!sym(value_var))) %>%
ungroup()
plot_object <-
myTable_long %>%
ggplot(aes(x = !!sym(row_var),
y = stats::reorder(!!sym(col_var), desc(!!sym(col_var))))) +
geom_tile(aes(fill = value_rescaled),
colour = "#FFFFFF",
size = 2) +
geom_text(aes(label = round(!!sym(value_var), rounding)), size = textsize) +
# Fill is contingent on max-min scaling
scale_fill_gradient2(
low = low,
mid = mid,
high = high,
midpoint = 0.5,
breaks = c(0, 0.5, 1),
labels = c("Minimum", "", "Maximum"),
limits = c(0, 1)
) +
scale_x_discrete(position = "top") +
scale_y_discrete(labels = us_to_space) +
theme_wpa_basic() +
theme(axis.line = element_line(color = "#FFFFFF")) +
labs(title = title,
subtitle = subtitle,
y = " ",
x = " ",
fill = " ",
caption = caption) +
theme(
axis.text.x = element_text(angle = 90, hjust = 0),
plot.title = element_text(color = "grey40", face = "bold", size = 20)
)
plot_object
}
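# Illustrative usage (assumes `out_df` is built as in the roxygen examples
# above): titles and captions are not auto-generated by this function, so
# supply them explicitly. `extract_date_range()` is the same helper used by
# `keymetrics_scan()` for its caption.
keymetrics_scan_asis(
  data = out_df,
  col_var = "metrics",
  row_var = "Organization",
  title = "Key metrics",
  subtitle = "Median by organization",
  caption = extract_date_range(pq_data, return = "text")
)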
|
/scratch/gouwar.j/cran-all/cranData/vivainsights/R/keymetrics_scan_asis.R
|