#' Turn into a wide matrix, perform SVD, return to tidy form
#'
#' This is useful for dimensionality reduction of items, especially when setting a
#' lower nv.
#'
#' @name widely_svd
#'
#' @param tbl Table
#' @param item Item to perform dimensionality reduction on; will end up in `item` column
#' @param feature Column describing the feature that links one item to others.
#' @param value Value
#' @param nv Optional; the number of principal components to estimate. Recommended for matrices
#' with many features.
#' @param weight_d Whether to weight each dimension's values by the corresponding singular value in `d`.
#' @param ... Extra arguments passed to `svd` (if `nv` is `NULL`)
#' or `irlba` (if `nv` is given)
#'
#' @return A tbl_df with three columns. The first is retained from the `item` input,
#' then `dimension` and `value`. Each row represents one principal component
#' value.
#'
#' @examples
#'
#' library(dplyr)
#' library(gapminder)
#'
#' # principal components driving change
#' gapminder_svd <- gapminder %>%
#'   widely_svd(country, year, lifeExp)
#'
#' gapminder_svd
#'
#' # compare SVDs, join with other data
#' library(ggplot2)
#' library(tidyr)
#'
#' gapminder_svd %>%
#'   spread(dimension, value) %>%
#'   inner_join(distinct(gapminder, country, continent), by = "country") %>%
#'   ggplot(aes(`1`, `2`, label = country)) +
#'   geom_point(aes(color = continent)) +
#'   geom_text(vjust = 1, hjust = 1)
#'
#' @export
widely_svd <- function(tbl, item, feature, value, nv = NULL, weight_d = FALSE, ...) {
  widely_svd_(tbl,
              col_name(substitute(item)),
              col_name(substitute(feature)),
              col_name(substitute(value)),
              nv = nv,
              weight_d = weight_d,
              ...)
}
#' @rdname widely_svd
#' @export
widely_svd_ <- function(tbl, item, feature, value, nv = NULL, weight_d = FALSE, ...) {
  if (is.null(nv)) {
    perform_svd <- function(m) {
      s <- svd(m, ...)
      if (weight_d) {
        ret <- t(s$d * t(s$u))
      } else {
        ret <- s$u
      }
      rownames(ret) <- rownames(m)
      ret
    }
    sparse <- FALSE
  } else {
    if (!requireNamespace("irlba", quietly = TRUE)) {
      stop("Requires the irlba package")
    }
    perform_svd <- function(m) {
      s <- irlba::irlba(m, nv = nv, ...)
      if (weight_d) {
        ret <- t(s$d * t(s$u))
      } else {
        ret <- s$u
      }
      rownames(ret) <- rownames(m)
      ret
    }
    sparse <- TRUE
  }

  item_vals <- tbl[[item]]
  item_u <- unique(item_vals)
  tbl[[item]] <- match(item_vals, item_u)

  ret <- widely_(perform_svd, sparse = sparse)(tbl, item, feature, value)

  ret <- ret %>%
    transmute(item = item_u[as.integer(item1)],
              dimension = item2,
              value)

  colnames(ret)[1] <- item
  ret
}
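# Note (illustrative, not part of the package): with weight_d = TRUE each column
# j of u is scaled by the corresponding singular value d[j]; the expression
# t(s$d * t(s$u)) used above is equivalent to s$u %*% diag(s$d).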
|
/scratch/gouwar.j/cran-all/cranData/widyr/R/widely_svd.R
|
## -----------------------------------------------------------------------------
library(dplyr)
library(gapminder)
gapminder
## -----------------------------------------------------------------------------
library(widyr)
gapminder %>%
pairwise_dist(country, year, lifeExp)
## -----------------------------------------------------------------------------
gapminder %>%
pairwise_dist(country, year, lifeExp) %>%
arrange(distance)
## -----------------------------------------------------------------------------
gapminder %>%
pairwise_dist(country, year, lifeExp, upper = FALSE) %>%
arrange(distance)
## -----------------------------------------------------------------------------
gapminder %>%
pairwise_cor(country, year, lifeExp, upper = FALSE, sort = TRUE)
|
/scratch/gouwar.j/cran-all/cranData/widyr/inst/doc/intro.R
|
---
title: "widyr: Widen, process, and re-tidy a dataset"
author: "David Robinson"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{widyr: Widen, process, and re-tidy a dataset}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
This package wraps the pattern of un-tidying data into a wide matrix, performing some processing, then turning it back into a tidy form. This is useful for several mathematical operations such as co-occurrence counts, correlations, or clustering that are best done on a wide matrix.
## Towards a precise definition of "wide" data
The term "wide data" has gone out of fashion as being "imprecise" [(Wickham 2014)](http://vita.had.co.nz/papers/tidy-data.pdf)), but I think with a proper definition the term could be entirely meaningful and useful.
A **wide** dataset is one or more matrices where:
* Each row is one **item**
* Each column is one **feature**
* Each value is one **observation**
* Each matrix is one **variable**
When would you want data to be wide rather than tidy? Notable examples include classification, clustering, correlation, factorization, or other operations that can take advantage of a matrix structure. In general, when you want to **compare between items** rather than compare between variables, this is a useful structure.
The widyr package is based on the observation that during a tidy data analysis, you often want data to be wide only *temporarily*, before returning to a tidy structure for visualization and further analysis. widyr makes this easy through a set of `pairwise_` functions.
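As a sketch of the pattern widyr wraps (with `tbl`, `item`, `feature`, and `value` standing in for a generic tidy table and its columns, and assuming the tidyr package is available), the same result could be produced by hand:
```{r, eval = FALSE}
library(dplyr)
library(tidyr)

# 1. un-tidy: one row per item, one column per feature
wide <- tbl %>%
  pivot_wider(names_from = feature, values_from = value)

# 2. process: e.g. a correlation matrix between items
mat <- cor(t(as.matrix(wide[, -1])))
dimnames(mat) <- list(wide[[1]], wide[[1]])

# 3. re-tidy: back to one row per pair of items
as.data.frame(as.table(mat))
```
The `pairwise_` functions perform these three steps in a single call.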
## Example: gapminder
Consider the gapminder dataset in the [gapminder package](https://CRAN.R-project.org/package=gapminder).
```{r}
library(dplyr)
library(gapminder)
gapminder
```
This tidy format (one-row-per-country-per-year) is very useful for grouping, summarizing, and filtering operations. But if we want to *compare pairs of countries* (for example, to find countries that are similar to each other), we would have to reshape this dataset. Note that here, country is the **item**, while year is the **feature** column.
#### Pairwise operations
The widyr package offers `pairwise_` functions that operate on pairs of items within data. An example is `pairwise_dist`:
```{r}
library(widyr)
gapminder %>%
pairwise_dist(country, year, lifeExp)
```
In a single step, this finds the Euclidean distance between the `lifeExp` value in each pair of countries, matching pairs based on year. We could find the closest pairs of countries overall with `arrange()`:
```{r}
gapminder %>%
pairwise_dist(country, year, lifeExp) %>%
arrange(distance)
```
Notice that this includes duplicates (Germany/Belgium and Belgium/Germany). To avoid those (the upper triangle of the distance matrix), use `upper = FALSE`:
```{r}
gapminder %>%
pairwise_dist(country, year, lifeExp, upper = FALSE) %>%
arrange(distance)
```
In some analyses, we may be interested in correlation rather than distance of pairs. For this we would use `pairwise_cor`:
```{r}
gapminder %>%
pairwise_cor(country, year, lifeExp, upper = FALSE, sort = TRUE)
```
|
/scratch/gouwar.j/cran-all/cranData/widyr/inst/doc/intro.Rmd
|
## ----setup, echo = FALSE----------------------------------------------------------------------------
library(knitr)
options(width = 102)
knitr::opts_chunk$set(message = FALSE, warning = FALSE)
library(ggplot2)
theme_set(theme_bw())
## ----echo = FALSE-----------------------------------------------------------------------------------
if (!requireNamespace("unvotes", quietly = TRUE)) {
print("This vignette requires the unvotes package to be installed. Exiting...")
knitr::knit_exit()
}
## ---------------------------------------------------------------------------------------------------
library(dplyr)
library(unvotes)
un_votes
## ---------------------------------------------------------------------------------------------------
levels(un_votes$vote)
## ----cors-------------------------------------------------------------------------------------------
library(widyr)
cors <- un_votes %>%
mutate(vote = as.numeric(vote)) %>%
pairwise_cor(country, rcid, vote, use = "pairwise.complete.obs", sort = TRUE)
cors
## ----US_cors----------------------------------------------------------------------------------------
US_cors <- cors %>%
filter(item1 == "United States")
# Most in agreement
US_cors
# Least in agreement
US_cors %>%
arrange(correlation)
## ----US_cors_map, fig.width = 10, fig.height = 6----------------------------------------------------
if (require("maps", quietly = TRUE) &&
require("fuzzyjoin", quietly = TRUE) &&
require("countrycode", quietly = TRUE) &&
require("ggplot2", quietly = TRUE)) {
world_data <- map_data("world") %>%
regex_full_join(iso3166, by = c("region" = "mapname")) %>%
filter(region != "Antarctica")
US_cors %>%
mutate(a2 = countrycode(item2, "country.name", "iso2c")) %>%
full_join(world_data, by = "a2") %>%
ggplot(aes(long, lat, group = group, fill = correlation)) +
geom_polygon(color = "gray", size = .1) +
scale_fill_gradient2() +
coord_quickmap() +
theme_void() +
labs(title = "Correlation of each country's UN votes with the United States",
subtitle = "Blue indicates agreement, red indicates disagreement",
fill = "Correlation w/ US")
}
## ----country_network, fig.width = 10, fig.height = 6------------------------------------------------
if (require("ggraph", quietly = TRUE) &&
require("igraph", quietly = TRUE) &&
require("countrycode", quietly = TRUE)) {
cors_filtered <- cors %>%
filter(correlation > .6)
continents <- tibble(country = unique(un_votes$country)) %>%
filter(country %in% cors_filtered$item1 |
country %in% cors_filtered$item2) %>%
mutate(continent = countrycode(country, "country.name", "continent"))
set.seed(2017)
cors_filtered %>%
graph_from_data_frame(vertices = continents) %>%
ggraph() +
geom_edge_link(aes(edge_alpha = correlation)) +
geom_node_point(aes(color = continent), size = 3) +
geom_node_text(aes(label = name), check_overlap = TRUE, vjust = 1, hjust = 1) +
theme_void() +
labs(title = "Network of countries with correlated United Nations votes")
}
|
/scratch/gouwar.j/cran-all/cranData/widyr/inst/doc/united_nations.R
|
---
title: "United Nations Voting Correlations"
author: "David Robinson"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{United Nations Voting Correlations}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, echo = FALSE}
library(knitr)
options(width = 102)
knitr::opts_chunk$set(message = FALSE, warning = FALSE)
library(ggplot2)
theme_set(theme_bw())
```
Here we'll examine an example application of the widyr package, particularly the `pairwise_cor` and `pairwise_dist` functions. We'll use the data on United Nations General Assembly voting from the `unvotes` package:
```{r echo = FALSE}
if (!requireNamespace("unvotes", quietly = TRUE)) {
print("This vignette requires the unvotes package to be installed. Exiting...")
knitr::knit_exit()
}
```
```{r}
library(dplyr)
library(unvotes)
un_votes
```
This dataset has one row for each country for each roll call vote. We're interested in finding pairs of countries that tended to vote similarly.
### Pairwise correlations
Notice that the `vote` column is a factor, with levels (in order) "yes", "abstain", and "no":
```{r}
levels(un_votes$vote)
```
Converting the factor with `as.numeric()` maps these levels to 1, 2, and 3, so countries that vote alike end up with similar numeric values. We can then obtain a measure of country-to-country agreement across votes with the `pairwise_cor` function.
```{r cors}
library(widyr)
cors <- un_votes %>%
mutate(vote = as.numeric(vote)) %>%
pairwise_cor(country, rcid, vote, use = "pairwise.complete.obs", sort = TRUE)
cors
```
We could, for example, find the countries that the US is most and least in agreement with:
```{r US_cors}
US_cors <- cors %>%
filter(item1 == "United States")
# Most in agreement
US_cors
# Least in agreement
US_cors %>%
arrange(correlation)
```
This can be particularly useful when visualized on a map.
```{r US_cors_map, fig.width = 10, fig.height = 6}
if (require("maps", quietly = TRUE) &&
require("fuzzyjoin", quietly = TRUE) &&
require("countrycode", quietly = TRUE) &&
require("ggplot2", quietly = TRUE)) {
world_data <- map_data("world") %>%
regex_full_join(iso3166, by = c("region" = "mapname")) %>%
filter(region != "Antarctica")
US_cors %>%
mutate(a2 = countrycode(item2, "country.name", "iso2c")) %>%
full_join(world_data, by = "a2") %>%
ggplot(aes(long, lat, group = group, fill = correlation)) +
geom_polygon(color = "gray", size = .1) +
scale_fill_gradient2() +
coord_quickmap() +
theme_void() +
labs(title = "Correlation of each country's UN votes with the United States",
subtitle = "Blue indicates agreement, red indicates disagreement",
fill = "Correlation w/ US")
}
```
### Visualizing clusters in a network
Another useful kind of visualization is a network plot, which can be created with Thomas Lin Pedersen's [ggraph package](https://github.com/thomasp85/ggraph). We can filter for pairs of countries with correlations above a particular threshold.
```{r country_network, fig.width = 10, fig.height = 6}
if (require("ggraph", quietly = TRUE) &&
require("igraph", quietly = TRUE) &&
require("countrycode", quietly = TRUE)) {
cors_filtered <- cors %>%
filter(correlation > .6)
continents <- tibble(country = unique(un_votes$country)) %>%
filter(country %in% cors_filtered$item1 |
country %in% cors_filtered$item2) %>%
mutate(continent = countrycode(country, "country.name", "continent"))
set.seed(2017)
cors_filtered %>%
graph_from_data_frame(vertices = continents) %>%
ggraph() +
geom_edge_link(aes(edge_alpha = correlation)) +
geom_node_point(aes(color = continent), size = 3) +
geom_node_text(aes(label = name), check_overlap = TRUE, vjust = 1, hjust = 1) +
theme_void() +
labs(title = "Network of countries with correlated United Nations votes")
}
```
Choosing the threshold for filtering correlations (or other measures of similarity) typically requires some trial and error. Setting too high a threshold will make a graph too sparse, while too low a threshold will make a graph too crowded.
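A quick way to calibrate the threshold (a sketch, using the `cors` object created above) is to look at the distribution of the correlations and at how many country pairs each candidate cutoff would keep:
```{r, eval = FALSE}
quantile(cors$correlation, probs = c(0.5, 0.9, 0.95, 0.99))

# number of edges retained at each candidate threshold
sapply(c(0.4, 0.5, 0.6, 0.7), function(x) sum(cors$correlation > x))
```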
|
/scratch/gouwar.j/cran-all/cranData/widyr/inst/doc/united_nations.Rmd
|
#' Download the csv-file of a table
#'
#' \code{download_csv()} downloads the csv for a table
#'
#' @param tablename name of the table to retrieve.
#' @param startyear only retrieve values for years greater than or equal to \code{startyear}. Default: "".
#' @param endyear only retrieve values for years less than or equal to \code{endyear}. Default: "".
#' @param ... further parameters supplied as URL parameter in the GENESIS database call
#' @param genesis_db name of the database (default: 'de').
#' @param save write string to a text file (default: TRUE)
#'
#' @details
#' Downloads the csv file to the working directory (\code{getwd()}) or, if \code{save=FALSE}, returns it as a string.
#' This is an alternative to the retrieve_*() functions: it is designed for \url{https://www-genesis.destatis.de/genesis/online} because it does not require a login, and it might not work as expected for the other databases.
#'
#'
#' @seealso \code{\link{read_header_genesis}}.
#'
#'
#' @examples
#' \dontrun{
#'
#' download_csv("12411-0004.csv")
#'
#' }
#'
#'
#'
#'
#' @export
download_csv <- function(tablename, startyear="", endyear="", ..., genesis_db="de", save=TRUE){
  argg <- eval(substitute(alist(...)))
  baseurl <- set_db2(db=genesis_db)

  param <- list(
    sequenz = 'tabelleDownload',
    selectionname = tablename,
    startjahr = startyear,
    endjahr = endyear,
    format = 'csv')

  param <- c(param, argg)

  httrdata <- GET(baseurl, query = param)
  str <- content(httrdata, encoding="windows-1252", as = "text")

  if ( save ){
    # writeLines() accepts a file name and closes the connection itself
    writeLines(str, paste0(tablename, ".csv"))
  } else {
    return(str)
  }
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/download_csv.R
|
make_genesis <- function(genesis){
  if ( is.null(genesis['db']) ) {
    stop("genesis['db'] missing/unrecognized.")
  }
  if ( !(genesis['db'] %in% c("regio", "nrw", "bm", "de", "by", "st")) ){
    stop("genesis['db'] missing/unrecognized.")
  }
  if ( is.na(genesis['user']) | is.na(genesis['password']) ){
    if (genesis['db']=='regio'){
      genesis <- key_user_pw(genesis, "regionalstatistik")
    } else if (genesis['db']=='nrw'){
      genesis <- key_user_pw(genesis, "landesdatenbank-nrw")
    } else if (genesis['db']=='bm'){
      genesis <- key_user_pw(genesis, "bildungsmonitoring")
    } else if (genesis['db']=='st'){
      genesis <- key_user_pw(genesis, "landesdatenbank-st")
    } else if (genesis['db']=='by'){
      genesis <- key_user_pw(genesis, "landesdatenbank-by")
    } else if (genesis['db']=='de'){
      genesis <- key_user_pw(genesis, "destatis")
    } else {
      stop("genesis['user']/genesis['password'] is missing.")
    }
  }
  return(genesis)
}

key_user_pw <- function(genesis, service){
  genesis["user"] <- as.character(key_list(service=service)['username'])
  genesis["password"] <- as.character(key_get(service=service,
                                              username=genesis["user"]))
  return(genesis)
}
# genesis_error_check <- function(xml){
#
# if ( length(xml)==0 ) {
# error <- xml_find_all(xml, './/faultstring/text()')
# if ( length(error) !=0 ) stop(as.character(error))
# }
#
# if ( length(xml)==1 ){
# if ( xml_has_attr(xml, 'nil')==TRUE ) {
# stop("No results found.") }
# }
#
# }
readstr_csv <- function(string, skip=0){
  con <- textConnection(string)
  on.exit(close(con))
  tab <- read.csv2(con, header=FALSE, stringsAsFactors=FALSE, skip=skip)
  return(tab)
}
set_db <- function(db){
  if (db=="nrw") return("https://www.landesdatenbank.nrw.de/ldbnrwws/services/")
  if (db=="regio") return("https://www.regionalstatistik.de/genesisws/services/")
  if (db=="de") return("https://www-genesis.destatis.de/genesisWS/web/")
  if (db=="bm") return("https://www.bildungsmonitoring.de/bildungws/services/")
  if (db=="st") return("https://genesis.sachsen-anhalt.de/webservice/services/")
  if (db=="by") return("https://www.statistikdaten.bayern.de/genesisWS/services/")
  stop("DB: Currently not implemented.")
}

set_db2 <- function(db){
  if (db=="de") return("https://www-genesis.destatis.de/genesis/online")
  if (db=="by") return("https://www.statistikdaten.bayern.de/genesis/online")
  if (db=="regio") return("https://www.regionalstatistik.de/genesis/online/")
  stop("DB: Currently not implemented.")
}

get_character_vec <- function(x){
  x <- paste(unlist(na.omit(x), use.names=FALSE), collapse="_")
  x <- stri_trans_general(x, "Latin-ASCII")
  x <- str_replace_all(x, " *", "")
  x <- str_replace_all(x, "[^a-zA-Z0-9_]", "")
  return(x)
}
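# Illustrative example (not part of the package): applied to a two-line header
# such as c("Bevölkerung", "männlich"), get_character_vec() joins the lines with
# "_", transliterates to ASCII and drops spaces/special characters, returning
# "Bevolkerung_mannlich".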
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/helper.R
|
#' Reads the DESTATIS GV100 Format
#'
#' The GV100 format is used by DESTATIS to publish the German municipality register
#'
#'
#' @param file path to file
#' @param stzrt integer to select the administrative level (see details)
#' @param version which GV100 version. If NULL the version is guessed based on the file name.
#' @param encoding encoding of the file
#' @param ... other parameters passed to \code{read_fwf}
#'
#'
#' @details
#' The Gemeindeverzeichnis (municipality register) is published
#' in a fixed-width file referred to as the "GV100 ASCII Format" by
#' DESTATIS. The register lists municipalities and
#' higher-order administrative units. The function is a wrapper
#' around [readr::read_fwf()].
#'
#' There are two types of files: one with the administrative
#' information (\code{version="AD"}) and one with the non-administrative
#' information (\code{version="NAD"}). If \code{version=NULL}, read_gv100() guesses the
#' type based on the file name.
#'
#' To select a particular administrative
#' unit use the stzrt argument (Satzart). For the
#' AD version, the following choices are possible:
#'
#' 10 - Länder (states)
#' 20 - Regierungsbezirke
#' 30 - Regionsdaten (only Baden-Württemberg)
#' 40 - Kreise (counties)
#' 50 - Gemeindeverbandsdaten
#' 60 - Gemeinden (municipalities)
#'
#' For the NAD version only:
#'
#' 41 - Kreise (counties)
#' 61 - Gemeinden (municipalities)
#'
#' Since about 2019, the Gemeindeverzeichnis has been encoded in UTF-8 rather
#' than ISO-8859-1.
#'
#' @return a \code{data.frame}.
#'
#'
#' @seealso
#' \url{https://www.destatis.de/DE/Themen/Laender-Regionen/Regionales/Gemeindeverzeichnis/_inhalt.html}
#' [readr::read_fwf()]
#'
#'
#'
#' @examples
#' \dontrun{
#'
#' d <- read_gv100("GV100NAD31122016.asc", stzrt=60)
#'
#' }
#'
#'
#'
#' @export
read_gv100 <- function(file, stzrt,
                       version=NULL,
                       encoding="iso-8859-1",
                       ...){

  if ( is.null(version) ) {
    version <- ifelse(str_detect(file, "NAD"), "NAD", "AD")
  }

  if (version=="AD"){
    spec <- gv100$ad
    spec_fwf <- spec$fwf[spec$fwf$satzart==stzrt,]
  } else {
    spec <- gv100$nad
    spec_fwf <- spec$fwf[spec$fwf$satzart==stzrt,]
  }

  if(str_to_lower(encoding)=="utf-8"){
    # Workaround: https://github.com/sumtxt/wiesbaden/issues/13
    # The inclusion of the Sorbian spelling in the official municipality
    # names made it necessary to encode the data as UTF-8.
    # Latin-2 (ISO8859-2) can accommodate Sorbian (Latin-1 can not).
    x <- read_lines(file=file,
                    locale = locale(encoding = "UTF-8"), ...)
    x <- iconv(x, from = "UTF-8", to = "ISO8859-2")

    d <- withCallingHandlers(
      read_fwf(
        file=I(x),
        col_positions=spec_fwf,
        col_types=spec$col,
        locale = locale(encoding = "iso-8859-2"),
        ...),
      warning = h)
  } else {
    d <- withCallingHandlers(
      read_fwf(
        file=file,
        col_positions=spec_fwf,
        col_types=spec$col,
        locale = locale(encoding = encoding),
        ...),
      warning = h)
  }

  if (stzrt %in% c(40,50,60) & version=="AD"){
    d <- merge(d, spec$key, by="schluessel", all.y=FALSE, all.x=TRUE)
    d$schluessel <- d$typ
    d$typ <- NULL
  }

  d <- d[d$satzart==stzrt,]
  return(as.data.frame(d))
}
# Suppress expected specific warning
h <- function(w) if( any( grepl( "The following named parsers don't match the column names", w) ) ) invokeRestart( "muffleWarning" )
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/read_gv100.R
|
#' Read Header of a GENESIS csv
#'
#' \code{read_header_genesis} reads the header of a GENESIS csv.
#'
#' @param ... arguments to \code{read_csv2}
#' @param start number of the first line of the header
#' @param lines number of header lines
#' @param replacer a vector that is used as the first K column names
#' @param clean_letters make proper variable names? (default: TRUE)
#' @param readr_locale definition of locale() passed to read_csv2(); the default encoding is 'windows-1252'
#'
#' @details
#' To generate valid column names, the function replaces all special characters (e.g. German öüä) with ASCII letters
#' and removes whitespaces. Multi-line headers are joined but separated with a '_'.
#'
#'
#' @return a \code{vector} of column names.
#'
#' @seealso \code{\link{read_csv2}}
#'
#' @examples
#' \dontrun{
#'
#' library(readr)
#'
#' download_csv(tablename="12411-0004")
#'
#' d <- read_header_genesis('12411-0004.csv', start=6, replacer=c("STAG"))
#' data <- read_csv2('12411-0004.csv', skip=6, n_max=30-6+1,
#' na="-", locale=locale(encoding="windows-1252") )
#' colnames(data) <- d
#' }
#'
#'
#'
#'
#' @export
read_header_genesis <- function(..., start, lines=2, readr_locale=locale(encoding="windows-1252"), replacer=NULL, clean_letters=TRUE){
  h <- read_csv2(..., col_names=FALSE, skip=start-1, n_max=lines,
                 col_types=cols( .default = col_character() ), locale=readr_locale )

  if(clean_letters==TRUE){
    h <- apply(h, 2, function(x) get_character_vec(x) )
  } else {
    h <- apply(h, 2, function(x) paste(unlist(na.omit(x), use.names=FALSE), collapse=" "))
  }

  if( !is.null(replacer) ) h[1:length(replacer)] <- replacer

  return(h)
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/read_header_genesis.R
|
#' Retrieves Data from GENESIS Databases
#'
#' \code{retrieve_data} retrieves a single data table.
#'
#'
#' @param tablename name of the table to retrieve.
#' @param startyear only retrieve values for years greater than or equal to \code{startyear}. Default: "".
#' @param endyear only retrieve values for years less than or equal to \code{endyear}. Default: "".
#' @param regionalschluessel only retrieve values for particular regional units. See details for more information. Default: "".
#' @param regionalmerkmal key for Regionalklassifikation. See details for more information. Default: "".
#' @param sachmerkmal,sachmerkmal2,sachmerkmal3 key for Sachklassifikation. Default: "".
#' @param sachschluessel,sachschluessel2,sachschluessel3 value for Sachklassifikation. Default: "".
#' @param inhalte retrieve only selected variables. Default is to retrieve all.
#' @param genesis to authenticate a user and set the database (see below).
#' @param language retrieve information in German "de" (default) or in English "en" if available.
#' @param ... other arguments send to the httr::GET request.
#'
#'
#'
#' @details
#' Use \code{\link{retrieve_datalist}} to find the \code{tablename} based on the table series you are interested in. See the
#' package description (\code{\link{wiesbaden}}) for details about setting the login and database.
#'
#' The parameter \code{regionalschluessel} can either be a single value (a single Amtlicher Gemeindeschlüssel) or a
#' comma-separated list of values supplied as a string (no whitespace). The wildcard character "*" is allowed.
#' If \code{regionalschluessel} is set, the parameter \code{regionalmerkmal} must also be set to GEMEIN, KREISE,
#' REGBEZ, or DLAND. The same logic applies to the parameter combination \code{sachmerkmal} and \code{sachschluessel*}.
#' The parameter \code{inhalte} takes a 1-6 character long name of a variable in the table. If choosing multiple variables,
#' delimit by ",", e.g. "STNW01,STNW02" (no whitespaces).
#'
#' Limiting the data request to particular years (via the \code{*year} parameters), geographical units (via the \code{regional*} parameters),
#' attributes (via the \code{sach*} parameters), or selected variables (via the \code{inhalte} parameter) is necessary if the API request
#' fails to return any data. If you are not able to download the table because of size, inspect the metadata first
#' (using \link{retrieve_metadata} or \link{retrieve_valuelabel}) and then limit the data request accordingly. See also examples below.
#'
#' @return a \code{data.frame}. Value variables (_val) come with three additional variables (_qual, _lock, _err). The exact nature
#' of these variables is unknown, but _qual appears to indicate whether _val is a valid value: if _qual=="e" the value in _val is
#' valid, while if _qual!="e" (then _qual is one of "-", "/", ".", "x", ...) the value is typically zero and should be set to NA.
#'
#'
#'
#'
#' @seealso \code{\link{retrieve_datalist}} \code{\link{wiesbaden}}
#'
#' @examples
#'
#' \dontrun{
#' # Retrieve values for the table 14111KJ002 which contains the
#' # federal election results on the county level.
#' # Assumes that user/password are stored via save_credentials()
#'
#' data <- retrieve_data(tablename="14111KJ002", genesis=c(db="regio") )
#'
#' # ... only the values for the AfD.
#'
#' data <- retrieve_data(tablename="14111KJ002", sachmerkmal="PART04",
#' sachschluessel="AFD", genesis=c(db="regio") )
#'
#'
#' # ... or only values from Saxony
#'
#' data <- retrieve_data(tablename="14111KJ002", regionalmerkmal="KREISE",
#' regionalschluessel="14*", genesis=c(db="regio") )
#'
#' # Limiting the number of data points is in particular important for
#' # large tables. For example, this data request fails:
#'
#' data <- retrieve_data(tablename="33111GJ005", genesis=c(db='regio'))
#'
#' # But after limiting the request to one year, the data is returned:
#'
#' data <- retrieve_data(tablename="33111GJ005", genesis=c(db='regio'), startyear=2019, endyear=2019)
#'
#' # An alternative strategy is to only request a subset of the variables.
#' # For example, this data request fails:
#'
#' data <- retrieve_data("12711GJ002", genesis=c(db="regio"))
#'
#' # But when requesting only one instead of all variables, the data is returned:
#'
#' data <- retrieve_data("12711GJ002", inhalte="BEV081", genesis=c(db="regio"))
#'
#'
#'
#' }
#'
#' @export
retrieve_data <- function(
tablename,
startyear = "",
endyear = "",
regionalmerkmal = "",
regionalschluessel = "",
sachmerkmal = "",
sachschluessel = "",
sachmerkmal2 = "",
sachschluessel2 = "",
sachmerkmal3 = "",
sachschluessel3 = "",
inhalte = "",
genesis=NULL, language='de', ... ) {
  genesis <- make_genesis(genesis)
  baseurl <- paste(set_db(db=genesis['db']), "ExportService_2010", sep="")

  param <- list(
    method = 'DatenExport',
    kennung = genesis['user'],
    passwort = genesis['password'],
    namen = tablename,
    bereich = 'Alle',
    format = 'csv',
    werte = 'true',
    metadaten = 'false',
    zusatz = 'false',
    startjahr = as.character(startyear),
    endjahr = as.character(endyear),
    zeitscheiben = '',
    inhalte = inhalte,
    regionalmerkmal = regionalmerkmal,
    regionalschluessel = regionalschluessel,
    sachmerkmal = sachmerkmal,
    sachschluessel = sachschluessel,
    sachmerkmal2 = sachmerkmal2,
    sachschluessel2 = sachschluessel2,
    sachmerkmal3 = sachmerkmal3,
    sachschluessel3 = sachschluessel3,
    stand = '',
    sprache = language)

  httrdata <- GET(baseurl, query = param, progress(), ... )
  xmldata <- content(httrdata, type='text/xml', options="HUGE", encoding="UTF-8")

  entries <- xml_find_all(xmldata, './/quaderDaten')
  if ( length(entries)==0 ) return( xml_text(xmldata) )

  sstr <- str_split(xml_text(entries), '\nK')
  if ( sstr[[1]][1] == "" ) return("No results found.")

  tabs <- lapply(sstr[[1]], readstr_csv)

  # Construct header
  DQERH <- paste("id", tabs[[3]]$V2[2], sep="")
  DQA <- tabs[[4]]$V2[2:nrow(tabs[[4]])]
  DQZ <- tabs[[5]]$V2[2:nrow(tabs[[5]])]
  DQI <- tabs[[6]]$V2[2:nrow(tabs[[6]])]

  DQIexpd <- c("val", "qual", "lock", "err")
  DQIcom <- unlist(lapply(DQI, function(x) paste(x, DQIexpd, sep="_")))

  header <- c(DQERH, DQA, DQZ, DQIcom)

  if ( is.na(sstr[[1]][7]) ) stop(paste(
    "The API has returned a response without data.",
    "This might indicate that you requested too much data.",
    "Consider only requesting a subset of the data.",
    "See package documentation for guidance."))

  data <- read_delim(sstr[[1]][7], skip = 1, col_names = header, delim = ';')

  return(as.data.frame(data))
}
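# Illustrative post-processing (not part of the function): as described in the
# documentation above, values whose _qual flag is not "e" are typically zero
# placeholders and can be set to NA, e.g. for the election example:
#   data <- retrieve_data(tablename="14111KJ002", genesis=c(db="regio"))
#   data$WAHL09_val[data$WAHL09_qual != "e"] <- NA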
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/retrieve_data.R
|
#' Retrieves List of Tables from GENESIS Databases
#'
#' \code{retrieve_datalist} retrieves a list of available data tables in a series.
#'
#' @param tableseries name of series for which tables should be retrieved.
#' @param genesis to authenticate a user and set the database (see below).
#' @param language retrieve information in German "de" (default) or in English "en" if available.
#' @param ... other arguments send to the httr::GET request.
#'
#'
#' @details
#' See the package description (\code{\link{wiesbaden}}) for details about setting the login and database.
#' To retrieve a list of all available data use tableseries="*" or combine the wildcard character * with a prefix (see below for an example).
#'
#' @return a \code{data.frame}
#'
#' @seealso \code{\link{retrieve_data}} \code{\link{wiesbaden}}
#'
#' @examples
#'
#' \dontrun{
#' # Retrieves list of available tables for the table series 14111
#' # which contains the federal election results.
#' # Assumes that user/password are stored via save_credentials()
#'
#' d <- retrieve_datalist(tableseries="14111*", genesis=c(db="regio") )
#' }
#'
#'
#'
#'
#' @export
retrieve_datalist <- function(tableseries,
genesis=NULL, language='de', ... ) {
genesis <- make_genesis(genesis)
baseurl <- paste(set_db(db=genesis['db']), "RechercheService_2010", sep="")
param <- list(
method = 'DatenKatalog',
kennung = genesis['user'],
passwort = genesis['password'],
bereich = 'Alle',
filter = tableseries,
listenLaenge = '500',
sprache = language)
httrdata <- GET(baseurl, query = param, ... )
xmldata <- content(httrdata, type='text/xml', encoding="UTF-8")
entries <- xml_find_all(xmldata, '//datenKatalogEintraege')
if ( length(entries)==0 ) return( xml_text(xmldata) )
entries <- lapply(entries, function(x) rev(xml_text(xml_find_all(x, './code|./beschriftungstext'))) )
d <- as.data.frame(do.call(rbind, entries))
if ( ncol(d)==0 ) return("No results found.")
# Cleanup
colnames(d) <- c("tablename", "description")
d$description <- unlist(lapply(str_split(d$description, pattern=",", n=2), function(x) x[2] ))
d$description <- str_trim(str_replace_all(d$description, "\n", " "))
if ( nrow(d) == 500 ) warning("The selected series might contain more data. The maximum number of results was retrieved (N=500).\n")
return(d)
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/retrieve_datalist.R
|
#' Retrieves Meta Data from GENESIS Databases
#'
#' \code{retrieve_metadata} retrieves meta data.
#'
#' @param tablename name of the table to retrieve.
#' @param genesis to authenticate a user and set the database (see below).
#' @param language retrieve information in German "de" (default) or in English "en" if available.
#' @param ... other arguments send to the httr::GET request.
#'
#'
#' @details
#' See the package description (\code{\link{wiesbaden}}) for details about setting the login and database.
#'
#' @return a \code{data.frame}.
#'
#' @seealso \code{\link{wiesbaden}}
#'
#' @examples
#'
#' \dontrun{
#' # Metadata contain the descriptions of the variable names for the table with the
#' # federal election results on the county level.
#' # Assumes that user/password are stored via save_credentials()
#'
#' metadata <- retrieve_metadata(tablename="14111KJ002", genesis=c(db="regio") )
#' }
#'
#'
#'
#'
#' @export
retrieve_metadata <- function(
tablename, language='de',
genesis=NULL, ... ) {
genesis <- make_genesis(genesis)
baseurl <- paste(set_db(db=genesis['db']), "ExportService_2010", sep="")
param <- list(
method = 'DatenAufbau',
kennung = genesis['user'],
passwort = genesis['password'],
namen = tablename,
bereich = 'Alle',
sprache = language)
datenaufbau <- GET(baseurl, query = param, ... )
datenaufbau <- content(datenaufbau, type='text/xml', encoding="UTF-8")
entries <- xml_find_all(datenaufbau, '//merkmale')
if ( length(entries)==0 ) return( xml_text(datenaufbau) )
entries <- lapply(entries, function(x) xml_text(xml_find_all(x, './code|./inhalt|./masseinheit')) )
d <- as.data.frame(do.call(rbind, entries))
if ( ncol(d)==0 ) return("No results found.")
colnames(d) <- c("name", "description", "unit")
return(d)
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/retrieve_metadata.R
|
#' Retrieves Value Labels from GENESIS Databases
#'
#' \code{retrieve_valuelabel} retrieves value labels for variable
#'
#' @param variablename name of the variable
#' @param valuelabel "*" (default) retrieves all value labels.
#' @param genesis to authenticate a user and set the database (see below).
#' @param language retrieve information in German "de" (default) or in English "en" if available.
#' @param ... other arguments send to the httr::GET request.
#'
#' @details
#' See the package description (\code{\link{wiesbaden}}) for details about setting the login and database.
#'
#' @return a \code{data.frame}.
#'
#' @seealso \code{\link{retrieve_datalist}} \code{\link{wiesbaden}}
#'
#' @examples
#'
#' \dontrun{
#' # Value labels for the variable 'PART04', which appears in the table with the
#' # federal election results on the county level.
#' # Assumes that user/password are stored via save_credentials()
#'
#' metadata <- retrieve_valuelabel(variablename="PART04", genesis=c(db="regio") )
#' }
#'
#'
#'
#'
#' @export
retrieve_valuelabel <- function(
variablename,
valuelabel="*",
genesis=NULL, language='de', ... ) {
genesis <- make_genesis(genesis)
baseurl <- paste(set_db(db=genesis['db']), "RechercheService_2010", sep="")
# listenLaenge: 2500 is the max for this API
param <- list(
method = 'MerkmalAuspraegungenKatalog',
kennung = genesis['user'],
passwort = genesis['password'],
namen = variablename,
auswahl = valuelabel,
kriterium = '',
bereich = 'Alle',
listenLaenge = 2500,
sprache = language)
datenaufbau <- GET(baseurl, query = param, ... )
datenaufbau <- content(datenaufbau, type='text/xml', encoding="UTF-8")
entries <- xml_find_all(datenaufbau, '//merkmalAuspraegungenKatalogEintraege')
if ( length(entries)==0 ) return( xml_text(datenaufbau) )
entries <- lapply(entries, function(x) xml_text(xml_find_all(x, './code|./inhalt')) )
d <- as.data.frame(do.call(rbind, entries))
if ( ncol(d)==0 ) return("No results found.")
colnames(d) <- c(variablename, "description")
return(d)
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/retrieve_valuelabel.R
|
#' Retrieves further information on a variable from GENESIS Databases
#'
#' \code{retrieve_varinfo} retrieves further information.
#'
#' @param variablename name of the variable
#' @param genesis to authenticate a user and set the database (see below).
#' @param language retrieve information in German "de" (default) or in English "en" if available.
#' @param ... other arguments send to the httr::GET request.
#'
#' @details
#' See the package description (\code{\link{wiesbaden}}) for details about setting the login and database.
#'
#' @return a \code{data.frame}.
#'
#' @seealso \code{\link{retrieve_datalist}} \code{\link{wiesbaden}}
#'
#' @examples
#'
#' \dontrun{
#' # Retrieve information on the variable 'AI2105' (Anteil der Empfänger von
#' # Arbeitslosengeld II im Alter von 15 bis 24 Jahren an der Bevölkerung gleichen
#' # Alters, i.e. the share of recipients of unemployment benefit II aged 15 to 24
#' # among the population of the same age).
#' # Assumes that user/password are stored via save_credentials()
#'
#' metadata <- retrieve_varinfo(variablename="AI2105", genesis=c(db="regio") )
#' }
#'
#'
#'
#'
#' @export
retrieve_varinfo <- function(
variablename,
genesis=NULL, language='de', ... ) {
genesis <- make_genesis(genesis)
baseurl <- paste(set_db(db=genesis['db']), "ExportService_2010", sep="")
param <- list(
method = 'MerkmalInformation',
kennung = genesis['user'],
passwort = genesis['password'],
name = variablename,
bereich = 'Alle',
sprache = language)
datenaufbau <- GET(baseurl, query = param, ... )
datenaufbau <- content(datenaufbau, type='text/xml', encoding="UTF-8")
entries <- xml_find_all(datenaufbau, '//MerkmalInformationReturn')
if ( length(entries)==0 ) return( xml_text(datenaufbau) )
entries <- lapply(entries, function(x) xml_text(xml_find_all(x, './code|./information')) )
d <- as.data.frame(do.call(rbind, entries))
if ( ncol(d)==0 ) return("No results found.")
colnames(d) <- c(variablename, "description")
return(d)
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/retrieve_varinfo.R
|
#' Saves database credentials
#'
#' \code{save_credentials} saves a set of database credentials using the \code{keyring} package.
#'
#' @param db database name, either 'nrw', 'regio', 'de', 'bm', 'by' or 'st'.
#' @param user your user name.
#' @param password your password.
#'
#' @details
#' User/password are stored in Keychain on macOS, Credential Store on Windows or Secret Service API on Linux.
#' If a user/password pair for a database already exists, it is silently replaced with the new pair.
#' This function relies on the \code{\link{keyring}} package.
#'
#' @seealso \code{\link{wiesbaden}}, \code{\link{keyring}}
#'
#'
#'
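#' @examples
#'
#' \dontrun{
#'
#' # Illustrative: store credentials for regionalstatistik.de
#' # (replace with your own user name and password)
#' save_credentials("regio", user = "your-username", password = "your-password")
#'
#' }
#'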
#' @export
save_credentials <- function(db, user, password){
  if ( !(db %in% c("nrw", "regio", "de", "bm", "by", "st")) ) stop(paste("Database '", db, "' unknown.", sep=""))

  if (db=='regio'){
    key_set_with_value("regionalstatistik", username=user, password=password)
    message("Successfully added credentials.")
  } else if (db=='nrw'){
    key_set_with_value("landesdatenbank-nrw", username=user, password=password)
    message("Successfully added credentials.")
  } else if (db=='bm'){
    key_set_with_value("bildungsmonitoring", username=user, password=password)
    message("Successfully added credentials.")
  } else if (db=='de'){
    key_set_with_value("destatis", username=user, password=password)
    message("Successfully saved credentials.")
  } else if (db=='by'){
    key_set_with_value("landesdatenbank-by", username=user, password=password)
    message("Successfully saved credentials.")
  } else if (db=='st'){
    key_set_with_value("landesdatenbank-st", username=user, password=password)
    message("Successfully saved credentials.")
  }
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/save_credentials.R
|
#' Tests Login in GENESIS Databases
#'
#' \code{test_login} tests if the login works.
#'
#'
#' @param genesis to authenticate a user and set the database (see below).
#' @param ... other arguments send to the httr::GET request.
#'
#'
#' @return a \code{string} with the server return message.
#'
#'
#'
#' @examples
#'
#' \dontrun{
#'
#' test_login(genesis=c(db="regio") )
#'
#' }
#'
#'
#'
#'
#' @export
test_login <- function(genesis=NULL, ... ) {
genesis <- make_genesis(genesis)
baseurl <- paste(set_db(db=genesis['db']), "TestService_2010", sep="")
param <- list(
method = 'logonoff',
kennung = genesis['user'],
passwort = genesis['password'])
httrdata <- GET(baseurl, query = param, ... )
xmldata <- content(httrdata, type='text/xml', encoding="UTF-8")
return(xml_text(xmldata))
}
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/test_login.R
|
#'
#' Data retrieval client for Federal Statistical Office of Germany
#'
#'
#'
#' To authenticate, supply a vector with your user name, password, and database
#' shortcut ("regio", "de", "nrw", "bm") as an argument for the \code{genesis}
#' parameter whenever you call a \code{retrieve_*} function:
#' \code{c(user="your-username", password="your-password", db="database-shortname")}
#'
#' Alternatively, store the credentials on your computer using the \code{\link{save_credentials}} function. This function
#' relies on the \code{\link{keyring}} package.
#'
#' Available databases are regionalstatistik.de (shortname: "regio"), landesdatenbank.nrw.de ("nrw"),
#' www-genesis.destatis.de ("de"), bildungsmonitoring.de ("bm"), statistikdaten.bayern.de ("by")
#' and genesis.sachsen-anhalt.de ("st").
#'
#'
#'
#' @name wiesbaden-package
#'
#' @docType package
#' @aliases wiesbaden
#' @title Client to access the data from the Federal Statistical Office, Germany
#' @author Moritz Marbach \email{[email protected]}
#'
#' @keywords internal
#'
#' @import httr
#' @import xml2
#' @importFrom keyring key_set_with_value key_list key_get
#' @importFrom stringr str_detect str_split str_replace_all str_trim str_to_lower
#' @importFrom readr read_csv read_csv2 read_fwf read_delim read_file read_lines locale cols col_character
#' @importFrom stringi stri_trans_general
#' @importFrom stats na.omit
#' @importFrom utils read.csv2
#' @importFrom jsonlite fromJSON toJSON
NULL
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/R/wiesbaden-package.R
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----eval = FALSE-------------------------------------------------------------
# library(wiesbaden)
#
# # Assuming credentials are stored via save_credentials()
# test_login(genesis=c(db='regio'))
# #> [1] "Sie wurden erfolgreich an- und abgemeldet."
#
# # ... or supply password/username
# test_login(genesis=c(db='regio', user="your-username", password="your-password"))
# #> [1] "Sie wurden erfolgreich an- und abgemeldet."
## ----eval = FALSE-------------------------------------------------------------
# d <- retrieve_datalist(tableseries="141*", genesis=c(db='regio'))
## ----eval = FALSE-------------------------------------------------------------
# subset(d, grepl("Kreise", description))
# #> tablename
# #> 1 14111KJ001
# #> 2 14111KJ002
# #> description
# #> 1 Wahlberechtigte, Wahlbeteiligung, Gültige Zweitstimmen, Kreise und kreisfreie Städte, Stichtag
# #> 2 Gültige Zweitstimmen, Kreise und kreisfreie Städte, Parteien, Stichtag
## ----eval = FALSE-------------------------------------------------------------
# data <- retrieve_data(tablename="14111KJ002", genesis=c(db='regio'))
## ----eval = FALSE-------------------------------------------------------------
# head(data)
# #> id14111 KREISE PART04 STAG WAHL09_val WAHL09_qual WAHL09_lock
# #> 1 D 01001 AFD 22.09.2013 1855 e NA
# #> 2 D 01001 AFD 24.09.2017 3702 e NA
# #> 3 D 01001 B90-GRUENE 16.10.1994 4651 e NA
# #> 4 D 01001 B90-GRUENE 27.09.1998 3815 e NA
# #> 5 D 01001 B90-GRUENE 22.09.2002 5556 e NA
# #> 6 D 01001 B90-GRUENE 18.09.2005 5028 e NA
# #> WAHL09_err
# #> 1 0
# #> 2 0
# #> 3 0
# #> 4 0
# #> 5 0
# #> 6 0
## ----eval = FALSE-------------------------------------------------------------
# retrieve_metadata(tablename="14111KJ002", genesis=c(db='regio'))
# #> name description unit
# #> 1 WAHL09 Gültige Zweitstimmen Anzahl
# #> 2 STAG Stichtag
# #> 3 PART04 Parteien
# #> 4 KREISE Kreise und kreisfreie Städte
## ----eval = FALSE-------------------------------------------------------------
# retrieve_valuelabel("PART04", genesis=c(db='regio'))
# #> PART04 description
# #> 1 AFD AfD
# #> 2 B90-GRUENE GRÜNE
# #> 3 CDU CDU/CSU
# #> 4 DIELINKE DIE LINKE
# #> 5 FDP FDP
# #> 6 SONSTIGE Sonstige Parteien
# #> 7 SPD SPD
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/inst/doc/wiesbaden.R
|
---
title: "Getting Data from DESTATIS via R"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Getting Data from DESTATIS via R}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The R package `wiesbaden` provides functions to directly retrieve data from databases maintained by the Federal Statistical Office of Germany (DESTATIS) in Wiesbaden.
Access to the following databases is implemented:
* [regionalstatistik.de](https://www.regionalstatistik.de/genesis/online)
* [genesis.destatis.de](https://www-genesis.destatis.de/genesis/online)
* [landesdatenbank.nrw.de](https://www.landesdatenbank.nrw.de)
* [bildungsmonitoring.de](https://www.bildungsmonitoring.de/bildung/online/logon)
To access any of the databases using this package, you need to register on the respective website to get a personal login name and password. The registration is free.
To authenticate, supply a vector with your user name, password, and database shortcut ("regio", "de", "nrw", "bm") as an argument for the `genesis` parameter whenever you call a `retrieve_*` function:
c(user="your-username", password="your-password", db="database-shortname")
Alternatively, you can use `save_credentials()` to store the credentials on your computer. This function relies on the [keyring package](https://github.com/r-lib/keyring). For more details about how credentials are stored by this package, see the keyring package documentation.
Use the function `test_login()` to check if your login/password combination allows you to access the respective database (and if the server is functioning properly).
```{r,eval = FALSE}
library(wiesbaden)
# Assuming credentials are stored via save_credentials()
test_login(genesis=c(db='regio'))
#> [1] "Sie wurden erfolgreich an- und abgemeldet."
# ... or supply password/username
test_login(genesis=c(db='regio', user="your-username", password="your-password"))
#> [1] "Sie wurden erfolgreich an- und abgemeldet."
```
The available data are organized by themes ("Themen") and subthemes. To get a list of all available themes go to the respective database website (links above) and click on "Themen". Each theme typically comes with multiple subthemes.
Suppose we want to download the federal election results on the county level from [regionalstatistik.de](https://www.regionalstatistik.de/genesis/online). This data is available in the theme "Wahlen" which has the code `14`. The federal election results are available in subtheme `141`.
Using `retrieve_datalist()`, download a `data.frame` of all available data cubes in theme `141`:
```{r,eval = FALSE}
d <- retrieve_datalist(tableseries="141*", genesis=c(db='regio'))
```
Note, we are assuming that credentials are stored via `save_credentials()`.
Use `grepl` (or `str_detect()` from the `stringr` package) to filter cubes with a description that contains the word "Kreise" (county):
```{r,eval = FALSE}
subset(d, grepl("Kreise", description))
#> tablename
#> 1 14111KJ001
#> 2 14111KJ002
#> description
#> 1 Wahlberechtigte, Wahlbeteiligung, Gültige Zweitstimmen, Kreise und kreisfreie Städte, Stichtag
#> 2 Gültige Zweitstimmen, Kreise und kreisfreie Städte, Parteien, Stichtag
```
Having identified the correct data cube, call `retrieve_data()` to download the data:
```{r,eval = FALSE}
data <- retrieve_data(tablename="14111KJ002", genesis=c(db='regio'))
```
```{r,eval = FALSE}
head(data)
#> id14111 KREISE PART04 STAG WAHL09_val WAHL09_qual WAHL09_lock
#> 1 D 01001 AFD 22.09.2013 1855 e NA
#> 2 D 01001 AFD 24.09.2017 3702 e NA
#> 3 D 01001 B90-GRUENE 16.10.1994 4651 e NA
#> 4 D 01001 B90-GRUENE 27.09.1998 3815 e NA
#> 5 D 01001 B90-GRUENE 22.09.2002 5556 e NA
#> 6 D 01001 B90-GRUENE 18.09.2005 5028 e NA
#> WAHL09_err
#> 1 0
#> 2 0
#> 3 0
#> 4 0
#> 5 0
#> 6 0
```
The data are organized in long format: for each combination of `KREISE` (county), `PART04` (political party) and `STAG` (election date) there is a vote count (`WAHL09_val`). Please see the help file of `retrieve_data()` for information on the additional variables (\*\_qual, \*\_lock, \*\_err).
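Based on the description of the `_qual` flag in the `retrieve_data()` help file, a rough post-processing sketch is to treat values not flagged `"e"` as missing:
```{r,eval = FALSE}
data$WAHL09_val[data$WAHL09_qual != "e"] <- NA
```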
To get the metadata for each variable, call `retrieve_metadata()`:
```{r,eval = FALSE}
retrieve_metadata(tablename="14111KJ002", genesis=c(db='regio'))
#> name description unit
#> 1 WAHL09 Gültige Zweitstimmen Anzahl
#> 2 STAG Stichtag
#> 3 PART04 Parteien
#> 4 KREISE Kreise und kreisfreie Städte
```
To get the value labels for the variable `PART04`, call `retrieve_valuelabel()`:
```{r,eval = FALSE}
retrieve_valuelabel("PART04", genesis=c(db='regio'))
#> PART04 description
#> 1 AFD AfD
#> 2 B90-GRUENE GRÜNE
#> 3 CDU CDU/CSU
#> 4 DIELINKE DIE LINKE
#> 5 FDP FDP
#> 6 SONSTIGE Sonstige Parteien
#> 7 SPD SPD
```
This function also works with the other variables (e.g., `KREIS`).
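The value labels can then be merged into the data, for example (a sketch using base `merge()`; the column names are those returned above):
```{r,eval = FALSE}
labels <- retrieve_valuelabel("PART04", genesis=c(db='regio'))
data <- merge(data, labels, by="PART04", all.x=TRUE)
```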
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/inst/doc/wiesbaden.Rmd
|
---
title: "Getting Data from DESTATIS via R"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Getting Data from DESTATIS via R}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The R package `wiesbaden` provides functions to directly retrieve data from databases maintained by the Federal Statistical Office of Germany (DESTATIS) in Wiesbaden.
Access to the following databases is implemented:
* [regionalstatistik.de](https://www.regionalstatistik.de/genesis/online)
* [genesis.destatis.de](https://www-genesis.destatis.de/genesis/online)
* [landesdatenbank.nrw.de](https://www.landesdatenbank.nrw.de)
* [bildungsmonitoring.de](https://www.bildungsmonitoring.de/bildung/online/logon)
To access any of the databases using this package, you need to register on the respective website to get a personal login name and password. The registration is free.
To authenticate, supply a vector with your user name, password, and database shortcut ("regio", "de", "nrw", "bm") as an argument for the `genesis` parameter whenever you call a `retrieve_*` function:
```r
c(user="your-username", password="your-password", db="database-shortname")
```
Alternatively, you can use `save_credentials()` to store the credentials on your computer. This function relies on the [keyring package](https://github.com/r-lib/keyring). For more details about how credentials are stored by this package, see the keyring package documentation.
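For instance, storing the credentials for regionalstatistik.de could look like the following sketch (see `?save_credentials` for the exact argument names; the call below is an assumption for illustration):
```{r,eval = FALSE}
save_credentials(db="regio", user="your-username", password="your-password")
```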
Use the function `test_login()` to check if your login/password combination allows you to access the respective database (and if the server is functioning properly).
```{r,eval = FALSE}
library(wiesbaden)
# Assuming credentials are stored via save_credentials()
test_login(genesis=c(db='regio'))
#> [1] "Sie wurden erfolgreich an- und abgemeldet."
# ... or supply password/username
test_login(genesis=c(db='regio', user="your-username", password="your-password"))
#> [1] "Sie wurden erfolgreich an- und abgemeldet."
```
The available data are organized by themes ("Themen") and subthemes. To get a list of all available themes go to the respective database website (links above) and click on "Themen". Each theme typically comes with multiple subthemes.
Suppose we want to download the federal election results on the county level from [regionalstatistik.de](https://www.regionalstatistik.de/genesis/online). This data is available in the theme "Wahlen" which has the code `14`. The federal election results are available in subtheme `141`.
Using `retrieve_datalist()`, download a `data.frame` of all available data cubes in theme `141`:
```{r,eval = FALSE}
d <- retrieve_datalist(tableseries="141*", genesis=c(db='regio'))
```
Note that we are assuming that credentials are stored via `save_credentials()`.
Use `grepl` (or `str_detect()` from the `stringr` package) to filter cubes with a description that contains the word "Kreise" (county):
```{r,eval = FALSE}
subset(d, grepl("Kreise", description))
#> tablename
#> 1 14111KJ001
#> 2 14111KJ002
#> description
#> 1 Wahlberechtigte, Wahlbeteiligung, Gültige Zweitstimmen, Kreise und kreisfreie Städte, Stichtag
#> 2 Gültige Zweitstimmen, Kreise und kreisfreie Städte, Parteien, Stichtag
```
Having identified the correct data cube, call `retrieve_data()` to download the data:
```{r,eval = FALSE}
data <- retrieve_data(tablename="14111KJ002", genesis=c(db='regio'))
```
```{r,eval = FALSE}
head(data)
#> id14111 KREISE PART04 STAG WAHL09_val WAHL09_qual WAHL09_lock
#> 1 D 01001 AFD 22.09.2013 1855 e NA
#> 2 D 01001 AFD 24.09.2017 3702 e NA
#> 3 D 01001 B90-GRUENE 16.10.1994 4651 e NA
#> 4 D 01001 B90-GRUENE 27.09.1998 3815 e NA
#> 5 D 01001 B90-GRUENE 22.09.2002 5556 e NA
#> 6 D 01001 B90-GRUENE 18.09.2005 5028 e NA
#> WAHL09_err
#> 1 0
#> 2 0
#> 3 0
#> 4 0
#> 5 0
#> 6 0
```
The data are organized in long format: For each combination of `KREISE` (county), `PART04` (political party) and `STAG` (election date) there is a vote count (`WAHL09_val`). Please see the help file for information on the additional variables (\*\_qual, \*\_lock, \*\_err).
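Since the data come in long format, a typical next step is to reshape them, e.g. one column of vote counts per party. A minimal sketch using `dplyr` and `tidyr` (the column names are taken from the output shown above):
```{r,eval = FALSE}
library(dplyr)
library(tidyr)

# One row per county and election date, one column of vote counts per party
data %>%
  select(KREISE, STAG, PART04, WAHL09_val) %>%
  pivot_wider(names_from = PART04, values_from = WAHL09_val)
```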
To get the metadata for each variable, call `retrieve_metadata()`:
```{r,eval = FALSE}
retrieve_metadata(tablename="14111KJ002", genesis=c(db='regio'))
#> name description unit
#> 1 WAHL09 Gültige Zweitstimmen Anzahl
#> 2 STAG Stichtag
#> 3 PART04 Parteien
#> 4 KREISE Kreise und kreisfreie Städte
```
To get the value labels for the variable `PART04`, call `retrieve_valuelabel()`:
```{r,eval = FALSE}
retrieve_valuelabel("PART04", genesis=c(db='regio'))
#> PART04 description
#> 1 AFD AfD
#> 2 B90-GRUENE GRÜNE
#> 3 CDU CDU/CSU
#> 4 DIELINKE DIE LINKE
#> 5 FDP FDP
#> 6 SONSTIGE Sonstige Parteien
#> 7 SPD SPD
```
This function also works with the other variables (e.g., `KREISE`).
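For example, the county labels can be retrieved in the same way and merged into the vote counts (a minimal sketch using base R's `merge()`):
```{r,eval = FALSE}
kreise <- retrieve_valuelabel("KREISE", genesis=c(db='regio'))
merge(data, kreise, by="KREISE")
```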
|
/scratch/gouwar.j/cran-all/cranData/wiesbaden/vignettes/wiesbaden.Rmd
|
#' @keywords internal
expand_block_variable_step <- function(chr_pos, value, chrom, span = 1) {
n <- length(chr_pos)
chr_pos2 <- sequence(nvec = rep(span, n), from = chr_pos)
value2 <- rep(value, each = span)
tibble::tibble(chr = chrom, pos = chr_pos2, val = value2)
}
#' @keywords internal
expand_block_fixed_step <- function(value, chrom, start, step, span = 1) {
n <- length(value)
chr_pos <- seq(start, by = step, length.out = n)
expand_block_variable_step(chr_pos = chr_pos, value = value, chrom = chrom, span = span)
}
|
/scratch/gouwar.j/cran-all/cranData/wig/R/expand_block.R
|
#' Imports a WIG file
#'
#' `import_wig` reads a [WIG
#' (wiggle)](https://m.ensembl.org/info/website/upload/wig.html) file and
#' expands the data into long format, i.e., each observation in the returned
#' tibble pertains to the position of a single base.
#'
#' @param file_path A path to a WIG file.
#' @param n The (maximal) number of lines to read. Negative values indicate that
#' one should read up to the end of input on the connection.
#'
#' @return A tibble of three variables: `chr`, chromosome; `pos`, genomic
#' position; and `val`, value. Chromosome positions are 1-relative, i.e. the
#' first base is 1, as specified in WIG files.
#'
#' @md
#'
#' @examples
#' # Import a WIG file
#' wig_file <- system.file(
#' "extdata",
#' file = 'hg19-pik3ca.wig',
#' package = "wig",
#' mustWork = TRUE)
#'
#' import_wig(wig_file)
#'
#' @export
import_wig <- function(file_path, n = -1L) {
lines <- readLines(file_path, n = n)
is_declaration <- grepl('Step', lines)
blocks <- cumsum(is_declaration)
lines_lst <- split(lines, blocks)
lst <- vector(mode = 'list', length = length(lines_lst))
for(i in seq_along(lst)) {
declaration <- parse_declaration(lines_lst[[i]][1])
if(identical(declaration$format, 'fixedStep')) {
lst[[i]] <-
expand_block_fixed_step(
value = as.double(lines_lst[[i]][-1]),
chrom = declaration$chrom,
start = declaration$start,
step = declaration$step,
span = ifelse(is.na(declaration$span), 1, declaration$span)
)
} else { # 'variableStep'
m <- stringr::str_split(lines_lst[[i]][-1], pattern = '\\s+', simplify = TRUE)
lst[[i]] <-
expand_block_variable_step(
chr_pos = as.integer(m[, 1]),
value = as.double(m[, 2]),
chrom = declaration$chrom,
span = ifelse(is.na(declaration$span), 1, declaration$span)
)
}
}
dplyr::bind_rows(lst)
}
|
/scratch/gouwar.j/cran-all/cranData/wig/R/import_wig.R
|
#' @keywords internal
parse_declaration <- function(x) {
format <- stringr::str_match(x, pattern = '^(variableStep|fixedStep)')[, 2]
chrom <- stringr::str_match(x, pattern = 'chrom=(\\w+)')[, 2]
start <- stringr::str_match(x, pattern = 'start=(\\w+)')[, 2]
step <- stringr::str_match(x, pattern = 'step=(\\w+)')[, 2]
span <- stringr::str_match(x, pattern = 'span=(\\w+)')[, 2]
list(format = format, chrom = chrom, start = as.integer(start), step = as.integer(step), span = as.integer(span))
}
|
/scratch/gouwar.j/cran-all/cranData/wig/R/parse_declaration.R
|
#' Pipe operator
#'
#' See \code{magrittr::\link[magrittr:pipe]{\%>\%}} for details.
#'
#' @name %>%
#' @rdname pipe
#' @keywords internal
#' @export
#' @importFrom magrittr %>%
#' @usage lhs \%>\% rhs
#' @param lhs A value or the magrittr placeholder.
#' @param rhs A function call using the magrittr semantics.
#' @return The result of calling `rhs(lhs)`.
NULL
|
/scratch/gouwar.j/cran-all/cranData/wig/R/utils-pipe.R
|
#### Modesto Escobar
# Sat Feb 27 23:56:26 2021 ------------------------------
#validUrl -----
#' Find if an URL link is valid.
#'
#' @param url A vector of URLs.
#' @param time The timeout (in seconds) to be used for each connection. Default = 2.
#' @details This function checks if a URL exists on the Internet.
#' @return A boolean value of TRUE or FALSE.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' validUrl(url="https://es.wikipedia.org/wiki/Weber,_Max", time=2)
#' @export
validUrl <- function(url, time=2){
con <- url(url)
check <- suppressWarnings(try(open.connection(con,open="rt",timeout=time),silent=T)[1])
suppressWarnings(try(close.connection(con),silent=T))
ifelse(is.null(check),TRUE,FALSE)
}
# urltoHtml ----
#' Convert a Wikipedia URL to an HTML link
#' @param url Character vector of URLs.
#' @param text A vector with name of the correspondent title of the url (See details).
#' @details This function converts an available URL into the corresponding HTML link, i.e., "https://es.wikipedia.org/wiki/Socrates" changes into "<a href='https://es.wikipedia.org/wiki/Socrates', target='_blank'>Socrates</a>".
#' It is possible to change the displayed name of the link directly using the argument text. When not specified, it is extracted directly from the URL.
#' @return A character vector of HTML links for the given urls.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## When you have a single URL:
#'
#' urltoHtml("https://es.wikipedia.org/wiki/Socrates", text = "Socrates")
#'
#' ## It is possible to work with several items:
#'
#' A <- c("https://es.wikipedia.org/wiki/Socrates",
#' "https://es.wikipedia.org/wiki/Plato",
#' "https://es.wikipedia.org/wiki/Aristotle")
#' urltoHtml (A, text = c("Socrates", "Plato", "Aristotle"))
#'
#' ## And you can also directly extract the info from nametoWikiURL():
#'
#' urltoHtml(nametoWikiURL("Plato", "en"), "Plato" )
#' urltoHtml(nametoWikiURL(c("Plato", "Socrates", "Aristotle"), language="en"),
#' c("Plato", "Socrates", "Aristotle"))
#' @export
urltoHtml <- function(url, text=NULL) {
if (is.null(text)) text <- sub("https?:/{0,2}","", url)
paste0("<a href=\'",url, "\', target= \'_blank\'>", text, "</a>")
}
# urltoFrame----
#' Convert an URL link to an HTML iframe.
#' @param url Character vector of URLs.
#' @details This function converts an available URL into the corresponding HTML iframe, i.e., "https://es.wikipedia.org/wiki/Socrates" changes into "<iframe src=\"https://es.wikipedia.org/wiki/Socrates\" width=\"100...".
#' @return A character vector of HTML iframe for the given urls.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## When you have a single URL:
#'
#' urltoFrame("https://es.wikipedia.org/wiki/Socrates")
#'
#' ## It is possible to work with a vector of URLs to obtain another vector of html frames:
#'
#' A <- c("https://es.wikipedia.org/wiki/Socrates",
#' "https://es.wikipedia.org/wiki/Plato",
#' "https://es.wikipedia.org/wiki/Aristotle")
#' urltoFrame(A)
#' @export
urltoFrame <- function(url){
paste0('<iframe src="',url, '" width="100%" height="100%" frameborder="0" marginwidth="0" marginheight="0"></iframe>')
}
#cc ----
#' Converts a text separated by commas into a character vector.
#' @param text Text to be separated.
#' @param sep The separator character (default is a comma). Unless it is a blank, blanks surrounding the separator are suppressed.
#' @details Line breaks inside the text are omitted.
#' @return A vector of the split segments of the text.
#' @examples
#' ## A text with three names separated with commas is converted into a vector of length 3.
#' cc("Pablo Picasso, Diego Velazquez, Salvador Dali")
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @export
cc <- function(text, sep=",") {
if(!sep==" ") {
text <- gsub(paste0("[ ]*", sep,"[ ]*"), sep, text)
text <- gsub("\\n[ ]*", "", text)
}
else text <- gsub("\\n","", text)
strsplit(text,sep)[[1]]
}
#preName ----
#' Reverse the order of the first and last names of every element of a vector.
#' @param X A vector of names with format "name, prename".
#' @details This function reverses the order of the first and last names of the items: i.e., "Weber, Max" turns into "Max Weber".
#' @return Another vector with its elements changed.
#' @examples
#' ## To reconvert a single name:
#' preName("Weber, Max")
#' ## It is possible to work with several items, as in here:
#' A <- c("Weber, Max", "Descartes, Rene", "Locke, John")
#' preName(A)
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @export
preName <- function(X) {sub("(^.*),\\s*(.*$)","\\2 \\1", X)}
# nametoWikiURL----
#' Create the Wikipedia URL of a name or entry.
#' @param name A vector consisting of one or more Wikipedia entries (i.e., topics or persons).
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
#' @return A character vector of names' URLs.
#' @details This function adds the Wikipedia URL to an entry or name, i.e., "Max Weber" converts into "https://es.wikipedia.org/wiki/Max_Weber". It also manages the different languages of Wikipedia through the abbreviated two-letter language parameter, i.e., "en" = "English".
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## When extracting a single item;
#' nametoWikiURL("Computer", language = "en")
#'
#' ## When extracting two objects;
#' A <- c("Computer", "Operating system")
#' nametoWikiURL(A)
#'
#' ## Same when three or more items;
#' B <- c("Socrates", "Plato" , "Aristotle")
#' nametoWikiURL(B)
#' @export
nametoWikiURL <- function (name, language="en") {
paste0("https://", language, ".wikipedia.org/wiki/", gsub(" ","_",name))
}
# nametoWikiHtml----
#' Create the Wikipedia link of a name or entry.
#' @param name A vector consisting of one or more Wikipedia entries (i.e., topics or persons).
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
#' @return A character vector of names' links.
#' @details This function adds the Wikipedia html link to an entry or name, i.e., "Max Weber" converts into "<a href='https://es.wikipedia.org/wiki/Max_Weber', target='_blank'>Max Weber</a>". It also manages the different languages of Wikipedia through the abbreviated two-letter language parameter, i.e., "en" = "English".
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## When extracting a single item;
#' nametoWikiHtml("Computer", language = "en")
#'
#' ## When extracting two objects;
#' A <- c("Computer", "Operating system")
#' nametoWikiHtml(A)
#' ## Same when three or more items;
#' B <- c("Socrates", "Plato","Aristotle" )
#' nametoWikiHtml(B)
#' @export
nametoWikiHtml <- function(name, language="en"){
paste0("<a href=\'https://", language, ".wikipedia.org/wiki/", gsub(" ","_",name), "', target=\'_blank\'>", name, "</a>")
}
# nametoWikiFrame----
#' Convert names into a Wikipedia's iframe
#' @param name A vector consisting of one or more Wikipedia entries (i.e., topics or persons).
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
#' @details This function adds the Wikipedia iframe to an entry or name, i.e., "Max Weber" converts into "<iframe src=\"https://es.m.wikipedia.org/wiki/Max_Weber\" width=\"100...". It also manages the different languages of Wikipedia through the abbreviated two-letter language parameter, i.e., "en" = "English".
#' @return A character vector of Wikipedia's iframes.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## When extracting a single item;
#' nametoWikiFrame("Computer", language = "en")
#'
#' ## When extracting two objetcs;
#' A <- c("Computer", "Operating system")
#' nametoWikiFrame(A)
#'
#' ## Same when three or more items;
#' B <- c("Socrates", "Plato", "Aristotle")
#' nametoWikiFrame(B)
#' @export
nametoWikiFrame <- function(name, language="en") {
paste0('<iframe src="https://',language,'.m.wikipedia.org/wiki/',gsub(" ","_",name),'" width="100%" height="100%" frameborder="0" marginwidth="0" marginheight="0"></iframe>')
}
# searchWiki----
#' Find if there is a Wikipedia page of a name(s) in the selected language.
#'
#' @param name A vector consisting of one or more Wikipedia entries (i.e., topics or persons).
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code.
#' @param all If TRUE, all the languages are checked. If FALSE, the search stops at the first language in which the entry is found, which is faster.
#' @param maxtime Upper bound (in seconds) of a random waiting time applied between consecutive searches.
#' @details This function checks any page or entry in order to find if it has a Wikipedia page in a given language.
#' It manages the different languages of Wikipedia through the two-letter abbreviated language parameter, i.e., "en" = "English". It is possible to check multiple languages in order of preference; in this case, only the first available language will appear as TRUE.
#' @return A Boolean data frame of TRUE or FALSE.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## When you want to check an entry in a single language:
#' searchWiki("Manuel Vilas", language = "es")
#'
#' ## When you want to check an entry in several languages:
#' \dontrun{
#' searchWiki("Manuel Vilas", language = c( "en", "es", "fr", "it", "de", "pt", "ca"), all=TRUE)
#' }
#' ## When you want to check several entries and languages:
#' \dontrun{
#' A<-c("Manuel Vilas", "Julia Navarro", "Rosa Montero")
#' searchWiki(A, language = c("en", "es", "fr", "it", "de", "pt", "ca"), all=FALSE)
#' }
#' @export
searchWiki <- function(name, language=c("en", "es", "fr", "it", "de", "pt", "ca"), all=FALSE, maxtime=0) {
errores <- data.frame(es=logical(), en=logical(), fr=logical(), it=logical(),
de=logical(), pt=logical(), ca=logical())[,language, drop=FALSE]
for (I in name){
errores[I,language] <- rep(FALSE, length(language))
for (L in language){
person <- gsub(" ", "_", I)
url <-URLencode(paste("https://",L,".wikipedia.org/wiki/",person,sep=""))
if (validUrl(url)) {
errores[I,L] <- TRUE
if (!all) break
}
Sys.sleep(runif(1, min=0, max=maxtime))
}
}
return(errores)
}
# getWikiInf ----
#' Create a data.frame with Q's and descriptions of a vector of names.
#' @param names A vector consisting of one or more Wikidata entries (i.e., topics or persons).
#' @param number Take the nth occurrence in case there are several items with the same name in Wikidata.
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
#' @return A data frame with name, Q, label and description of the names.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## Obtaining information in English Wikidata
#' names <- c("William Shakespeare", "Pablo Picasso")
#' information <- getWikiInf(names)
#'
#' ## Obtaining information in Spanish Wikidata
#' \dontrun{
#' informacion <- getWikiInf(names, language="es")
#' }
#' @export
#' @importFrom WikidataR find_item
getWikiInf <- function(names, number=1, language="en"){
get <-function(name, number=1, language="en"){
i <- find_item(name, language=language)
if(length(i)>=number) {
X <- c(name=name, Q=i[[number]]$id,
label=ifelse(is.null(i[[number]]$label),NaN, i[[number]]$label),
description=ifelse(is.null(i[[number]]$description),NaN,i[[number]]$description))
}
else X <- c(name=name, Q=NaN, label=NaN, description=NaN)
return(X)
}
D <- as.data.frame(t(sapply(names, get, number, language)))
return(D)
}
# getWikiData ----
#' Create a data.frame with Wikidata of a vector of names.
#' @param names A vector consisting of one or more Wikidata entries (i.e., topics or persons).
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
#' @param csv A file name to save the results, in which case the only return is a message with the name of the saved file.
#' @return A data frame with personal information of the names or a csv file with the information separated by semicolons.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## Obtaining information in English Wikidata
#' \dontrun{
#' names <- c("William Shakespeare", "Pablo Picasso")
#' info <- getWikiData(names)
#' ## Obtaining information in Spanish Wikidata
#' d <- getWikiData(names, language="es")
#' }
#' @export
#' @importFrom WikidataQueryServiceR query_wikidata
#' @importFrom utils write.csv2
getWikiData <- function(names, language="en", csv=NULL) {
petition <-function(q){
chaine <- paste0('SELECT ?entityLabel ?entityDescription ?sexLabel ?birthdate ?birthplaceLabel ?birthcountryLabel ?deathdate ?deathplaceLabel ?deathcountryLabel
(GROUP_CONCAT(DISTINCT ?pic;separator="|") as ?pics)
(GROUP_CONCAT(DISTINCT ?ocLabel;separator="|") as ?occupation)
(GROUP_CONCAT(DISTINCT ?moLabel;separator="|") as ?movement)
(GROUP_CONCAT(DISTINCT ?geLabel;separator="|") as ?genres)
(GROUP_CONCAT(DISTINCT ?inLabel;separator="|") as ?influencedby)
(GROUP_CONCAT(DISTINCT ?in;separator="|") as ?influencedbyQ)
(GROUP_CONCAT(DISTINCT ?noLabel;separator="|") as ?notablework)
(GROUP_CONCAT(DISTINCT ?no;separator="|") as ?notableworkQ)
WHERE {
BIND(wd:',q,' AS ?entity)
SERVICE wikibase:label {bd:serviceParam wikibase:language "en"}
{
SELECT ?birthdate (COUNT(?refP569) AS ?cP569)
WHERE {
OPTIONAL {wd:',q,' wdt:P569 ?birthdate.}
OPTIONAL {wd:',q,' p:P569 [ps:P569 ?birthdate; prov:wasDerivedFrom [(pr:P248|pr:P854|pr:P143) ?refP569]].}
} GROUP BY ?birthdate ORDER BY DESC(?cP569) LIMIT 1
}
{
SELECT ?birthplace ?birthcountry ?starttime1 (COUNT(?refP19) AS ?cP19)
WHERE {
OPTIONAL {wd:',q,' wdt:P19 ?birthplace.
?birthplace p:P17 [ps:P17 ?birthcountry; pq:P580* ?starttime1; ].}
OPTIONAL {wd:',q,' p:P19 [ps:P19 ?birthplace; prov:wasDerivedFrom [(pr:P248|pr:P854|pr:P143) ?refP19]].}
} GROUP BY ?birthplace ?birthcountry ?starttime1 ORDER BY DESC(?cP19) DESC(?starttime1) LIMIT 1
}
{
SELECT ?deathdate (COUNT(?refP570) AS ?cP570)
WHERE {
OPTIONAL {wd:',q,' wdt:P570 ?deathdate.}
OPTIONAL {wd:',q,' p:P570 [ps:P570 ?deathdate; prov:wasDerivedFrom [(pr:P248|pr:P854|pr:P143) ?refP570]].}
} GROUP BY ?deathdate ORDER BY DESC(?cP570) LIMIT 1
}
{
SELECT ?deathplace ?deathcountry ?starttime2 (COUNT(?refP20) AS ?cP20)
WHERE {
OPTIONAL {wd:',q,' wdt:P20 ?deathplace.
?deathplace p:P17 [ps:P17 ?deathcountry; pq:P580* ?starttime2; ].}
OPTIONAL {wd:',q,' p:P20 [ps:P20 ?deathplace; prov:wasDerivedFrom [(pr:P248|pr:P854|pr:P143) ?refP20]].}
} GROUP BY ?deathplace ?deathcountry ?starttime2 ORDER BY DESC(?cP20) DESC(?starttime2) LIMIT 1
}
OPTIONAL {?entity wdt:P21 ?sex.}
OPTIONAL {?entity wdt:P18 ?pic.}
OPTIONAL {?entity wdt:P106 ?oc. ?oc rdfs:label ?ocLabel. FILTER(LANG(?ocLabel) = "en")}
OPTIONAL {?entity wdt:P135 ?mo. ?mo rdfs:label ?moLabel. FILTER(LANG(?moLabel) = "en")}
OPTIONAL {?entity wdt:P136 ?ge. ?ge rdfs:label ?geLabel. FILTER(LANG(?geLabel) = "en")}
OPTIONAL {?entity wdt:P737 ?in. ?in rdfs:label ?inLabel. FILTER(LANG(?inLabel) = "en")}
OPTIONAL {?entity wdt:P800 ?no. ?no rdfs:label ?noLabel. FILTER(LANG(?noLabel) = "en")}
}
GROUP BY ?entityLabel ?entityDescription ?sexLabel ?birthdate ?birthplaceLabel ?birthcountryLabel ?deathdate ?deathplaceLabel ?deathcountryLabel
')
return(chaine)
}
getWiki <-function(nombre){
i <- find_item(nombre, language=language, limit=1)
if(length(i)>0) {
Q <- i[[1]]$id
X <- suppressMessages(query_wikidata(petition(Q)))
X <- cbind(Q, X)
bcb <- !is.na(X$birthdate) && substring(X$birthdate,1,1)=="-"
bcd <- !is.na(X$deathdate) && substring(X$deathdate,1,1)=="-"
X$birthdate <- sub("^-","",X$birthdate)
X$deathdate <- sub("^-","",X$deathdate)
X$birthdate <- as.numeric(format(as.POSIXct(X$birthdate, origin="1960-01-01", optional=TRUE), "%Y"))
X$deathdate <- as.numeric(format(as.POSIXct(X$deathdate, origin="1960-01-01", optional=TRUE), "%Y"))
if(bcb) X$birthdate <- -X$birthdate
if(bcd) X$deathdate <- -X$deathdate
}
else X <- data.frame(Q=NA, entityLabel=nombre, entityDescription =NA, sexLabel=NA,
birthdate=NA, birthplaceLabel=NA, birthcountryLabel=NA,
deathdate=NA, deathplaceLabel=NA, deathcountryLabel=NA,
pics=NA, occupation=NA, movement=NA, genres=NA,
influencedby=NA, influencedbyQ=NA, notablework=NA, notableworkQ=NA,
stringsAsFactors = FALSE)
return(X)
}
transM <- function(X) {
dimensions <- dim(X)
x <- unlist(X)
m <- as.data.frame(matrix(x, nrow=dimensions[2], ncol=dimensions[1], byrow=TRUE), stringsAsFactors=FALSE)
colnames(m) <- rownames(X)
return(m)
}
X <- sapply(names,getWiki)
if(is.null(csv)) return(transM(X))
else {
if(filext(csv)=="") csv <- paste0(csv,".csv")
write.csv2(transM(X), file=csv, row.names=FALSE)
print(paste0("The file ", csv, " has been saved."))
}
}
#filext ----
#' Extract the extension of a file
#'
#' @param fn Character vector with the files whose extensions are to be extracted.
#' @details This function extracts the extension of a vector of file names.
#' @return A character vector of extension names.
#' @examples
#' ## For a single item:
#' filext("Albert Einstein.jpg")
#' ## You can do the same for a vector:
#' filext(c("Hillary Duff.png", "Britney Spears.jpg", "Avril Lavigne.tiff"))
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @export
filext <- function (fn) {
extract <-function(X){
splitted <- strsplit(x=X, split='/')[[1]]
fn <- splitted [length(splitted)]
ext <- ''
splitted <- strsplit(x=fn, split='\\.')[[1]]
l <-length (splitted)
if (l > 1 && sum(splitted[1:(l-1)] != '')) ext <-splitted [l]
ext
}
sapply(fn, extract)
}
# getFiles ----
#' Downloads a list of files into a specified path of the computer, and returns a vector of the names that could not be found (if any).
#' @param lista A list or data frame of files' URLs to be download (See details).
#' @param path Directory where to export the files.
#' @param ext Select desired extension of the files. Default= NULL.
#' @details This function allows downloading a set of files directly into a directory.
#' It needs a preexistent data frame of names and URLs. It must be a list (or data.frame) with two values: "name" (specifying the names of the files) and "url" (containing the URLs of the files to download).
#' All the errors are reported as outcomes (NULL = no errors). The files are downloaded into the chosen directory.
#' @return It returns a vector of errors, if any (NULL = no errors). All files are downloaded into the selected directory.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## Not run:
#'
#' ## In case you want to download a file directly from an URL:
#'
#' # dta <- data.frame(name = "Data", url = "https://sociocav.usal.es/me/Stata/example.dta")
#' # getFiles(dta, path = "./")
#'
#' ## You can can also combine this function with getWikiData (among others).
#' ## In case you want to download a picture of a person:
#'
#' # A <- data.frame(name= getWikiData("Rembrandt")$label, url=getWikiData("Rembrandt")$pics)
#' # getFiles(A, path = "./", ext = "png")
#'
#' ## Or the pics of multiple authors:
#'
#' # B <- getWikiData(c("Monet", "Renoir", "Caillebotte"))
#' # data <- data.frame(name = B$label, url = B$pics)
#' # getFiles(data, path = "./", ext = NULL)
#'
#' ## End(Not run)
#' @export
getFiles <- function(lista, path="./", ext=NULL) {
errores <- NULL
path <- ifelse(substr(path,nchar(path),nchar(path))!="/",paste0(path,"/"),path)
lista <- as.data.frame(lista)
for (case in 1:nrow(lista)) {
name <- lista[case,1]; url <- lista[case,2]
if(is.null(ext)) ext <- filext(url)
file=paste0(path,sub("/","-",name),".",ext)
if(!is.na(url) & !file.exists(file)) {
E <- suppressWarnings(tryCatch(download.file(url, destfile=file, quiet=TRUE, mode="wb"),error = function(e) name))
if (E!=0) errores <- c(errores, E)
}
}
return(errores)
}
# getWikiFiles----
#' Downloads a list of Wikipedia pages into a specified path of the computer, and returns a vector of the names that could not be found (if any).
#' @param X A vector of Wikipedia entries.
#' @param language The language of the Wikipedia page version. This should consist of an ISO language code (default = "en").
#' @param directory Directory where to export the files to.
#' @param maxtime Upper bound (in seconds) of a random waiting time applied between consecutive searches.
#' @details This function allows downloading a set of Wikipedia pages into a directory of the local computer.
#' All the errors (pages not found) are reported as outcomes (NULL = no errors). The files are downloaded into the chosen directory.
#' @return It returns a vector of errors, if any. All pages are downloaded into the selected directory (NULL = no errors).
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @examples
#' ## Not run:
#'
#' ## In case you want to download the Wikipage of a person:
#'
#' # getWikiFiles("Rembrandt", dir = "./")
#'
#' ## Or the pics of multiple authors:
#'
#' # B <- c("Monet", "Renoir", "Caillebotte")
#' # getWikiFiles(B, dir = "./", language="fr")
#'
#' ## End(Not run)
#' @export
#' @importFrom utils download.file URLencode
#' @importFrom stats runif
getWikiFiles <- function(X, language=c("es", "en", "fr"), directory="./", maxtime=0) {
if(substring(directory,nchar(directory))!="/" & substring(directory,nchar(directory))!="\\") directory=paste0(directory,"/")
errores <- NULL
for (I in X){
person <- gsub(" ", "_", I)
url <-paste("https://",language[1],".wikipedia.org/wiki/",person,sep="")
file <- paste0(directory, person,".html")
E <- suppressWarnings(tryCatch(download.file(url,destfile=file, quiet=TRUE),error = function(e) person))
if (E!=0) errores <- c(errores, E)
Sys.sleep(runif(1, min=0, max=maxtime))
}
return(errores)
}
#extractWiki----
#' Extract the first paragraph of a Wikipedia article with a maximum of characters.
#' @param names A vector of names, whose entries have to be extracted.
#' @param language A vector of Wikipedia languages to look for. If the article is not found in the language of the first element, the following ones are searched.
#' @param plain If TRUE, the results are delivered in plain format.
#' @param maximum Maximum number of characters to be included when the paragraph is too long.
#' @examples
#' ## Obtaining information in English Wikidata
#' names <- c("William Shakespeare", "Pablo Picasso")
#' info <- getWikiInf(names)
#' info$text <- extractWiki(info$label)
#' @return a character vector with html formatted (or plain text) Wikipedia paragraphs.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @importFrom jsonlite fromJSON
#' @export
extractWiki <- function(names, language=c("en", "es", "fr", "de", "it"), plain=FALSE, maximum=1000) {
extract <- function(name, language=c("en", "es"), plain=FALSE, maximum=1000) {
name <- URLencode(name)
json <- list(query=list(pages=-1))
explain <- ifelse(plain, "&explaintext", "")
for(I in 1:length(language)) {
json <- jsonlite::fromJSON(paste0("https://", language[I], ".wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro", explain,"&redirects=1&titles=",name))
if(names(json$query$pages)!="-1") break
}
ascii <- json[["query"]][["pages"]][[1]][["extract"]]
ascii <- gsub("\\[.*?\\]","", ascii)
if(!plain) ascii <- gsub("(.{150}</p>).*","\\1", ascii)
else if(nchar(ascii)>maximum) ascii <- paste0(substr(ascii, 1, maximum), "...")
return(ascii)
}
return(sapply(names, extract, language=language, plain=plain, maximum=maximum))
}
# get_template ----
#' Create a drop-down vignette for nodes from different items (for galleries).
#' @param data data frame which contains the data.
#' @param title column name which contains the first title of the vignette.
#' @param title2 column name which contains the secondary title of the vignette.
#' @param text column name which contains the main text of the vignette.
#' @param img column name which contains the names of the image files.
#' @param wiki column name which contains the wiki URL for the vignette.
#' @param width length of the vignette's width.
#' @param color color of the vignette's strip (It also could be a column name which contains colors).
#' @param cex number indicating the amount by which plotting text should be scaled relative to the default.
#' @examples
#' ## Obtaining information in English Wikidata
#' \dontrun{
#' names <- c("William Shakespeare", "Pablo Picasso")
#' information <- getWikiData(names)
#' information$html <- get_template(information, title="entityLabel", text="entityDescription")
#' }
#' @return a character vector of html formatted vignettes.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @export
get_template <- function(data, title=NULL, title2=NULL, text=NULL, img=NULL, wiki=NULL, width=300, color="#135dcd", cex=1){
return(get_template_(data, title, title2, text, img, wiki, width, c(6,12), color, cex))
}
#get_template_for_maps----
#' Create a drop-down vignette for nodes from different items (for maps).
#' @param data data frame which contains the data.
#' @param title column name which contains the first title of the vignette.
#' @param title2 column name which contains the secondary title of the vignette.
#' @param text column name which contains the main text of the vignette.
#' @param img column name which contains the names of the image files.
#' @param wiki column name which contains the wiki URL for the vignette.
#' @param color color of the vignette's strip (It also could be a column name which contains colors).
#' @param cex number indicating the amount by which plotting text should be scaled relative to the default.
#' @examples
#' ## Obtaining information in English Wikidata
#' \dontrun{
#' names <- c("William Shakespeare", "Pablo Picasso")
#' info <- getWikiData(names)
#' info$html <- get_template_for_maps(info, title="entityLabel", text="entityDescription")
#' }
#' @return a character vector of html formatted vignettes.
#' @author Modesto Escobar, Department of Sociology and Communication, University of Salamanca. See <https://sociocav.usal.es/blog/modesto-escobar/>
#' @export
get_template_for_maps <- function(data, title=NULL, title2=NULL, text=NULL, img=NULL, wiki=NULL, color="#cbdefb", cex=1){
return(get_template_(data, title, title2, text, img, wiki, NULL, c(13,19), color, cex))
}
get_template_ <- function(data, title=NULL, title2=NULL, text=NULL, img=NULL, wiki=NULL, width=NULL, padding=NULL, color=NULL, cex=1) {
if(length(color)){
if(length(data[[color]])){
color <- data[[color]]
}
color <- paste0('background-color:',color,';')
}else{
color <- ""
}
if(length(width)){
width <- paste0(" width: ",width,"px;")
}else{
width <- ""
}
if(length(padding)){
margin <- paste0("margin:",paste0(-padding,"px",collapse=" "),";")
padding <- paste0("padding:",paste0(padding,"px",collapse=" "),";")
}else{
padding <- ""
margin <- ""
}
data[["template"]] <- paste0('<div style="font-size:',cex,'em;',margin,width,'">')
borderRadius <- 'border-radius:12px 12px 0 0;'
if(is.character(img) && length(data[[img]])){
for(i in (1:nrow(data))){
if(file.exists(data[[i,img]])){
data[i,img] <- paste0("data:",mime(data[[i,img]]),";base64,",base64encode(data[[i,img]]))
}
}
data[["template"]] <- paste0(data[["template"]],'<img style="width:100%;',borderRadius,'" src="',data[[img]],'"/>')
borderRadius <- ''
}
if(is.character(title) && length(data[[title]])){
data[["template"]] <- paste0(data[["template"]],'<h2 style="font-size:2em;',color,padding,'margin-top:-3px;',borderRadius,'">',data[[title]],'</h2>')
}
data[["template"]] <- paste0(data[["template"]],'<div style="',padding,'">')
if(is.character(title2) && length(data[[title2]])){
data[["template"]] <- paste0(data[["template"]],'<h3>', data[[title2]],'</h3>')
}
if(is.character(text) && length(data[[text]])){
data[["template"]] <- paste0(data[["template"]],'<p>',data[[text]],'</p>')
}else{
data[["template"]] <- paste0(data[["template"]],'<p></p>')
}
if(is.character(wiki) && length(data[[wiki]])){
data[["template"]] <- paste0(data[["template"]],'<h3><img style="width:20px;vertical-align:bottom;margin-right:10px;" src="https://www.wikipedia.org/portal/wikipedia.org/assets/img/Wikipedia-logo-v2.png"/>Wikipedia: <a target="_blank" href="',data[[wiki]],'">',wiki,'</a></h3>')
}
data[["template"]] <- paste0(data[["template"]],'</div></div>')
return(data[["template"]])
}
base64encode <- function(filename) {
to.read = file(filename, "rb")
fsize <- file.size(filename)
sbit <- readBin(to.read, raw(), n = fsize, endian = "little")
close(to.read)
b64c <- "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
shfts <- c(18,12,6,0)
sand <- function(n,s) bitwAnd(bitwShiftR(n,s),63)+1
slft <- function(p,n) bitwShiftL(as.integer(p),n)
subs <- function(s,n) substring(s,n,n)
npad <- ( 3 - length(sbit) %% 3) %% 3
sbit <- c(sbit,as.raw(rep(0,npad)))
pces <- lapply(seq(1,length(sbit),by=3),function(ii) sbit[ii:(ii+2)])
encv <- paste0(sapply(pces,function(p) paste0(sapply(shfts,function(s)(subs(b64c,sand(slft(p[1],16)+slft(p[2],8)+slft(p[3],0),s)))))),collapse="")
if (npad > 0) substr(encv,nchar(encv)-npad+1,nchar(encv)) <- paste0(rep("=",npad),collapse="")
return(encv)
}
mime <- function(name) {
mimemap <- c(jpeg = "image/jpeg", jpg = "image/jpeg", png = "image/png", svg = "image/svg+xml", gif = "image/gif")
ext <- sub("^.*\\.","",name)
mime <- unname(mimemap[ext])
return(mime)
}
|
/scratch/gouwar.j/cran-all/cranData/wikiTools/R/wikiTools.R
|
#### wiki_utils.R
#### Angel F. Zazo <[email protected]>
#### 2021-11-09
# General headers
my_user_agent <- paste('netCoincidenceAnalysis BOT <[email protected]>.', R.version.string)
my_headers <- c(accept = 'application/json',
'user-agent' = my_user_agent)
#' limit_requester
#' Limit the rate at which a function will execute
#' @param f The original function
#' @param n Number of allowed events within a period
#' @param period Length (in seconds) of measurement period
#' @return If 'f' is a single function, then a new function with the same signature and (eventual) behavior as the original function, but rate limited. If 'f' is a named list of functions, then a new list of functions with the same names and signatures, but collectively bound by a shared rate limit.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @seealso ratelimitr
#' @importFrom ratelimitr limit_rate rate
#' @export
limit_requester <- function(f, n, period) {
return(ratelimitr::limit_rate(f, ratelimitr::rate(n = n, period = period)))
}
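# Usage sketch (not run): wrap a request function so that it executes at most
# 10 times per second; the wrapped function keeps the same signature.
# limited_get <- limit_requester(httr::GET, n = 10, period = 1)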
#' Wikimedia_query
#' Use httr package to retrieve responses in JSON format about an article from Wikimedia API.
#' @param query A list with the (key, value) pairs of the search.
#' @param project The Wikimedia project to search.
#' @param headers A vector with additional query headers for the request.
#' @param attempts On 'ratelimited' errors, the maximum number of times the query is retried (default 2).
#' @return The response in JSON format or NULL on errors.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @importFrom httr GET content add_headers stop_for_status
#' @importFrom jsonlite fromJSON
#' @export
Wikimedia_query <- function(query, project='en.wikipedia.org', headers = my_headers, attempts = 2) {
url = paste('https://', project, "/w/api.php", sep = '')
nt <- 1
tryCatch( {
while(TRUE) {
nt <- nt + 1
r <- httr::GET(url, query=query, httr::add_headers(.headers = headers))
# Converts http errors to R errors or warnings
httr::stop_for_status(r)
content <- httr::content(r, as = "text", encoding = "UTF-8")
j <- jsonlite::fromJSON(content, simplifyVector = FALSE)
#
# See https://www.mediawiki.org/wiki/API:Etiquette#Request_limit
if ( !is.null(j$error) && j$error$code == 'ratelimited') {
if (nt > attempts)
stop(paste(as.character(nt)," ratelimited achieved, aborting query", sep=''))
else {
t = 60*nt
print(paste("ratelimited error. Sleeping", as.character(t), "seconds"))
Sys.sleep(t)
}
}
else {
return(j)
}
}
}, error = function(e){
print(e)
return(NULL)
}
)
}
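# Usage sketch (not run). The query list below is an illustrative MediaWiki Action API
# request (an assumption for demonstration, not taken from this file):
# q <- list(format = 'json', action = 'query', prop = 'info', titles = 'Max Planck')
# Wikimedia_query(q, project = 'en.wikipedia.org')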
#' req_WDQS
#' Retrieve responses in JSON format from Wikidata Query Service (WDQS)
#' @param sparql_query A string with the query in SPARQL language.
#' @return A JSON response. Please check httr::stop_for_status(response)
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @importFrom httr GET user_agent add_headers
#' @note For short queries the GET method is better, POST for long ones. Only GET queries are cached.
#' @export
req_WDQS <- function(sparql_query) {
httr::GET( #
url = 'https://query.wikidata.org/sparql',
query = list(query = sparql_query),
httr::user_agent(my_user_agent),
httr::add_headers(Accept = "application/sparql-results+json")
)
}
#' Wikidata_sparql_query
#' Retrieve responses in JSON format from Wikidata Query Service (WDQS)
#' Internally uses a rate-limited (ratelimitr) version of req_WDQS.
#' See https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#SPARQL_endpoint
#' @param sparql_query A string with the query in SPARQL language.
#' @return A JSON response or NULL on errors.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @importFrom httr stop_for_status content
#' @importFrom jsonlite fromJSON
#' @export
Wikidata_sparql_query <- function(sparql_query) {
req_WDQS_rated <- limit_requester(req_WDQS, n=30, period=60)
tryCatch(
{
r <- req_WDQS_rated(sparql_query)
httr::stop_for_status(r)
content <- httr::content(r, as = "text", encoding = "UTF-8")
temp <- jsonlite::fromJSON(content, simplifyVector = FALSE)
return(temp)
}, error = function(e){
print(e)
return(NULL)
}
)
}
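# Usage sketch (not run): count the Wikidata items that are instances of human (Q5).
# j <- Wikidata_sparql_query('SELECT (COUNT(*) AS ?count) WHERE { ?h wdt:P31 wd:Q5 . }')
# j$results$bindings[[1]]$count$value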
#' GetWikidataitem
#' Use Wikimedia_query to obtain the Wikidata entity of a article from a Wikimedia project.
#' Automatically resolves redirects.
#' @param article Article to search
#' @param project Wikimedia project, defaults "en.wikipedia.org"
#' @return A vector whose first element is 1 if the Wikidata item exists and the page is not a
#' disambiguation page; the second element is the normalized form of the article title, and the third is the Wikidata item. On errors, the first element is set to 0 and the third contains the explanation of the error.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @examples
#' GetWikidataitem('Max Planck', project='es.wikipedia.org')
#'
#' GetWikidataitem('Max')
#'
#' GetWikidataitem('Cervante')
#' @export
GetWikidataitem <- function(article = '', project = 'en.wikipedia.org') {
if ((article == '') | (project == '')) {
return(c(0, '', "No article or project present"))
}
query = list(format = 'json',
formatversion = '2',
action = 'query',
redirects = '1',
prop = 'pageprops',
ppprop = 'wikibase_item|disambiguation',
titles = article)
j <- Wikimedia_query(query, project)
if (is.null(j))
return(c(0, article, "Error response from Wikimedia_query"))
#
if (is.null(j$query))
return(c(0, article, "No query response"))
#
# If there are redirects, take the redirect target (it is already normalized, so "normalized" is not consulted)
if (!is.null(j$query$redirects))
article <- j$query$redirects[[1]]$to
# If there are no redirects, take the normalized title, if present.
# Note: there may be several; see https://phabricator.wikimedia.org/T29849#2594624
else if (!is.null(j$query$normalized))
for (nn in j$query$normalized)
article <- nn$to
#
# With formatversion=2, j$query$pages is a vector. If only one title was
# requested, it is the only element, i.e. element 1
page <- j$query$pages[[1]]
#
#
if (!is.null(page$missing)) # The requested title was not found
return (c(0, article, "missing"))
#
if (is.null(page$pageprops)) # No pageprops (hence no wikibase_item either)
return(c(0, article, "no_pageprops"))
#
if (!is.null(page$pageprops$disambiguation))
return(c(0,article, "disambiguation"))
if (is.null(page$pageprops$wikibase_item)) # It has no wikibase_item
return(c(0, article, "no_wikibase_item"))
#
# Finally, return the wikibase_item
return(c(1, article, page$pageprops$wikibase_item))
}
# -----------------------
#' Wikimedia_get_redirects
#' Obtains redirection pages (from namespace 0) to the article page in the Wikimedia project
#' @param article Article target
#' @param project Wikimedia project, defaults "en.wikipedia.org"
#' @return A character vector whose first element is the target of all the redirections, or NULL on error.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @export
Wikimedia_get_redirects <- function(article, project = "en.wikipedia.org") {
if ((article == '') | (project == '')) {
return(NULL)
}
query = list(format = 'json',
formatversion = '2',
action = 'query',
redirects = '1',
prop = 'redirects',
rdnamespace = '0', # Only from namespace = 0
rdlimit = 'max', # Change to 5 for testing
rdprop = 'title',
titles = article)
titles <- character() # An empty character vector
repeat {
# print(cbind(query)) # checking
j <- Wikimedia_query(query, project = project)
#
if (is.null(j) | is.null(j$query)) # Error response from Wikimedia_query/No query response
return(NULL)
#
if (is.null(query$continue)) { # On the first iteration there is no 'continue' response yet
#
# If there are redirects, take the redirect target (it is already normalized, so "normalized" is not consulted)
if (!is.null(j$query$redirects))
titles <- append(titles, j$query$redirects[[1]]$to)
# If there are no redirects, take the normalized title, if present.
# Note: there may be several; see https://phabricator.wikimedia.org/T29849#2594624
else if (!is.null(j$query$normalized)) {
for (nn in j$query$normalized)
article <- nn$to
titles <- append(titles, article)
}
else
titles <- append(titles, article)
}
#
# With formatversion=2, j$query$pages is a vector. If only one title was
# requested, it is the only element, i.e. element 1
page <- j$query$pages[[1]]
#
if (!is.null(page$missing)) # The requested title was not found (missing)
return(NULL)
#
titles <- append(titles, sapply(page$redirects,function(x){ return(x[["title"]]) }))
#
if (!is.null(j$continue)) {
query$continue <- j$continue$continue
query$rdcontinue <- j$continue$rdcontinue
}
else
return(titles)
}
}
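# Usage sketch (not run): titles of all pages redirecting to the "Max Planck" article.
# Wikimedia_get_redirects("Max Planck", project = "en.wikipedia.org")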
#' Wikimedia_person_exists
#' Use Wikimedia_query and Wikidata_sparql_query to check if an article about a person
#' exists in the Wikimedia project. If it exists, also returns the Wikipedia pages of
#' that person in the languages indicated in the langs parameter.
#' @param article Article to search
#' @param project Wikimedia project, defaults "en.wikipedia.org"
#' @param langs Wikipedia languages to search if the person has a page, use "|" to split languages
#' @return If the article of the person exists, a list with two elements: `qid`, a vector whose first element is 1, the second the normalized article label and the third the Wikidata id; and `wiki`, a data frame with the URLs of the Wikipedias (url, lang, name).
#' If the article of the person does not exist or is not a human, a vector whose first element is set to 0 and whose last element is the explanation of the error.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @export
Wikimedia_person_exists <- function(article, project="en.wikipedia.org",
langs="en|es|fr|de|it|pt|ca") {
ret <- GetWikidataitem(article, project)
if (ret[1] == 0)
return(ret)
#
art <- ret[2] # Normalized article label
qid <- ret[3]
langs <- paste0("'",
paste0(strsplit(langs, "|", fixed=T)[[1]], collapse="', '"),
"'")
query = paste0('SELECT (STR(?article) as ?url) ?lang ?name
WHERE {
wd:',qid,' wdt:P31 ?instance .
FILTER (?instance = wd:Q5)
?article schema:about wd:',qid,';
schema:inLanguage ?lang;
schema:name ?name;
schema:isPartOf [ wikibase:wikiGroup "wikipedia" ] .
FILTER(?lang in (', langs, ')) .
}')
j <- Wikidata_sparql_query(query)
if (is.null(j))
return(c(0, art, qid, 'Error response from Wikidata_sparql_query'))
#
bindings <- j$results$bindings
if (length(bindings) == 0)
return(c(0, art, qid, 'Not human'))
#
columns <- unlist(j$head$vars)
df <- as.data.frame(sapply(columns,function(y){
return(sapply(bindings,function(x){
return(x[[y]][["value"]])
}))
}))
return(list(qid = c(1, art, qid),
wiki = df))
}
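# Usage sketch (not run):
# Wikimedia_person_exists("Max Planck", project = "es.wikipedia.org", langs = "en|es")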
#' Wikidata_occupationCount
#' Search Wikidata Query Service (WDQS) to know the number of Wikidata entities with P106 property (occupation) set to Qoc.
#' @param Qoc The Wikidata entity of the occupation
#' @return The number of entities with that occupation (integer)
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @examples
#' Wikidata_occupationCount('Q2526255') # Film director
#' @export
Wikidata_occupationCount <- function(Qoc='') {
if (Qoc == '')
return(NULL)
#
query = paste0('SELECT (COUNT(*) AS ?count) WHERE {?human wdt:P106 wd:',Qoc," .}")
j <- Wikidata_sparql_query(query)
return(as.integer(j$results$bindings[[1]]$count$value))
}
#' req_wikimedia_metrics
#' Retrieve responses in JSON format from Wikimedia metrics API
#' @param url The URL with the query
#' @return A JSON response. Please check httr::stop_for_status(response)
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @importFrom httr GET user_agent add_headers
#' @note Used in Wikimedia_page_views
#' @export
req_wikimedia_metrics <- function(url) {
httr::GET(
url = url,
httr::user_agent(my_user_agent),
httr::add_headers(Accept = "application/json")
)
}
#' Wikimedia_page_views
#' Return the number of views one article has in a Wikimedia project in a date interval (see granularity). Optionally include redirections to the article page.
#' Internally uses a rate-limited (ratelimitr) version of req_wikimedia_metrics:
#' the API limit is 100 req/s, here it is limited to 50 req/s.
#' @param article The article to search
#' @param project The Wikimedia project, defaults en.wikipedia.org
#' @param start,end First and last day to include (format YYYYMMDD or YYYYMMDDHH)
#' @param access Filter by access method: all-access (default), desktop, mobile-app, mobile-web
#' @param agent Filter by agent type: all-agents, user (default), spider, automated
#' @param granularity Time unit for the response data: daily, monthly (default)
#' @param include_redirects Boolean to include redirection to the article page (defaults: False)
#' @return A named list with the number of views per timestamp.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @importFrom httr stop_for_status content
#' @importFrom jsonlite fromJSON
#' @export
Wikimedia_page_views <- function(article, project = "en.wikipedia.org",
start, end, access = "all-access",
agent = "user", granularity = "monthly",
include_redirects = FALSE) {
req_wikimedia_metrics_rated <- limit_requester(req_wikimedia_metrics, n=50, period=1)
article <- gsub(" ", "_", article)
url <- "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"
url <- paste(url, project, access, agent, article, granularity, start, end, sep="/", collapse = "")
a <- list()
if (include_redirects == TRUE) {
for (art in Wikimedia_get_redirects(article, project)) {
b <- Wikimedia_page_views(art, project, start, end, access, agent,
granularity, include_redirects=FALSE)
for (n in union(names(a), names(b))) {
a[n] <- ifelse(n %in% names(a), a[n][[1]], 0) + ifelse(n %in% names(b), b[n][[1]], 0)
}
}
}
else {
r <- req_wikimedia_metrics_rated(url)
httr::stop_for_status(r)
content <- httr::content(r, as = "text", encoding = "UTF-8")
j <- jsonlite::fromJSON(content, simplifyVector = FALSE)
for (v in j$items)
a[v$timestamp] <- v$views
}
return(a)
}
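# Usage sketch (not run): monthly views of an article during 2020, including redirects.
# Wikimedia_page_views("Max Planck", project = "en.wikipedia.org",
#                      start = "20200101", end = "20201231",
#                      granularity = "monthly", include_redirects = TRUE)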
# ------
#' Wikidata_Wikipedias
#' For an occupation, obtains all the Wikidata entities of the people with that occupation, the
#' number of Wikipedias in which they have an article, and the URLs of those articles (tab-separated).
#' Queries are run in chunks, using offset and chunk.
#' @param Qoc The Wikidata entity of the occupation
#' @param chunk The chunk size used to split intermediate queries, with the aim of staying under the 60-second processing limit.
#' @return A list with, for each Wikidata entity, the entity id, the number of Wikipedias and the URLs
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @importFrom utils tail
#' @export
Wikidata_Wikipedias <- function(Qoc, chunk=10000) {
if (Qoc == '') return(NULL)
nq <- Wikidata_occupationCount(Qoc)
print(paste("Number of entities",as.character(nq), sep=" "))
d = list()
for (k in 0:as.integer(nq/chunk + 1)) {
offset <- chunk * k
print(paste("offset", as.character(offset), "chunk", as.character(k), sep=" "))
query = paste0(
'SELECT ?qid (COUNT(?page) AS ?count) (GROUP_CONCAT(?page;separator="\t") as ?pages)
WITH {
SELECT ?qid
WHERE {?qid wdt:P106 wd:' , Qoc, ' .}
ORDER BY ?qid
LIMIT ', chunk, ' OFFSET ', offset ,'
} AS %results
WHERE {
INCLUDE %results.
OPTIONAL {?page schema:about ?qid;
schema:isPartOf [ wikibase:wikiGroup "wikipedia" ] .}
} GROUP BY ?qid')
j <- Wikidata_sparql_query(query)
bindings <- j$results$bindings
for (b in bindings) {
q <- tail(strsplit(b$qid$value, '/')[[1]], 1)
c <- b$count$value
w <- b$pages$value
w <- paste(sort(strsplit(w,"\t", fixed=T)[[1]]), collapse = "\t")
d[q] <- list(q = c(q,c,w))
}
}
return(d)
}
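# Usage sketch (not run; Q2526255 is the occupation "film director", as in the example above):
# d <- Wikidata_Wikipedias("Q2526255", chunk = 10000)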
#' wmflabs_get_allinfo
#' Obtains information about an article in the Wikimedia project in JSON format, or NULL on error.
#' @param article The article to search
#' @param project The Wikimedia project, defaults en.wikipedia.org
#' @param links_in_count_redirects For the links information, whether redirects are included in links_in_count or not.
#' @return A list with the article information.
#' @author Angel F. Zazo, Departament of Computer Science and Automatics, University of Salamanca
#' @note It is important that the article is not a redirection: with the "prose" infotype the function gets information about the target article, but with "articleinfo" and "links" the information is about the redirection itself.
#' @importFrom httr stop_for_status content
#' @importFrom jsonlite fromJSON
#' @export
wmflabs_get_allinfo <- function(article, project = "en.wikipedia.org",
links_in_count_redirects = FALSE) {
req_wikimedia_metrics_rated <- limit_requester(req_wikimedia_metrics, n=50, period=1)
wmflabs_get_info <- function(article, infotype = "articleinfo", project = "en.wikipedia.org",
links_in_count_redirects = FALSE) {
d <- list()
if (infotype == "links" & links_in_count_redirects) {
arts <- Wikimedia_get_redirects(article, project)
for(art in arts) {
b <- wmflabs_get_info(article = art, infotype = "links", project = project,
links_in_count_redirects = FALSE)
if ("links_in_count" %in% names(d))
d["links_in_count"] <- d["links_in_count"][[1]] + b["links_in_count"][[1]]
else
d <- b
}
}
else {
url <- 'https://xtools.wmflabs.org/api/page/'
url <- paste(url, infotype, project, article, sep="/", collapse = "")
r <- req_wikimedia_metrics_rated(url)
httr::stop_for_status(r)
content <- httr::content(r, as = "text", encoding = "UTF-8")
d <- jsonlite::fromJSON(content, simplifyVector = FALSE)
}
return(d)
}
# first: articleinfo
r <- wmflabs_get_info(article, infotype = 'articleinfo', project = project)
# Error in response
if (!is.null(r$error)) {
print("Error in response from wmflabs_get_info:")
print(r$error)
return(NULL)
}
# Second: prose
b <- wmflabs_get_info(article, infotype = "prose", project = project)
for (n in names(b))
r[n] <- b[n]
# Third: links
c <- wmflabs_get_info(article, infotype = "links", project = project,
links_in_count_redirects = links_in_count_redirects)
for (n in names(c))
r[n] <- c[n]
return(r)
}
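# Usage sketch (not run):
# info <- wmflabs_get_allinfo("Max Planck", project = "en.wikipedia.org",
#                             links_in_count_redirects = FALSE)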
|
/scratch/gouwar.j/cran-all/cranData/wikiTools/R/wiki_utils.R
|
`Abschlussnote` <-
structure(function(x,y,z){
x.note <- (x/100)*30
y.note <- (y/100)*30
z.note <- (z/100)*40
abschluss <- x.note + y.note + z.note
cat (c("Abschlussnote: "), abschluss)
}
, comment = "Funktion zur Errechnung einer gedachten Abschlussnote")
|
/scratch/gouwar.j/cran-all/cranData/wikibooks/R/Abschlussnote.R
|
`Bundesliga.Mannschaft` <-
function(Mannschaft, Saison="all"){
Bundesliga <- wikibooks::Bundesliga
if(Saison=="all"){
saison<- as.character(c("1963/1964", "1964/1965", "1965/1966", "1966/1967", "1967/1968", "1968/1969", "1969/1970", "1970/1971", "1971/1972", "1972/1973", "1973/1974", "1974/1975", "1975/1976", "1976/1977", "1977/1978", "1978/1979", "1979/1980", "1980/1981", "1981/1982", "1982/1983", "1983/1984", "1984/1985", "1985/1986", "1986/1987", "1987/1988", "1988/1989", "1989/1990", "1990/1991", "1991/1992", "1992/1993", "1993/1994", "1994/1995", "1995/1996", "1996/1997", "1997/1998", "1998/1999", "1999/2000", "2000/2001", "2001/2002", "2002/2003", "2003/2004", "2004/2005", "2005/2006", "2006/2007"))
}else{
saison <- as.character(Saison)
}
start <- 1
ende <- length(saison)+1
team <- as.character(Mannschaft)
Uebersicht <- data.frame("saison", "datum", "spieltag","heim", "gast", "ergebnis1", "ergebnis1", "Halbzeit","Halbzeit")
Uebersicht <- Uebersicht[-1,]
while(start<ende){
jahr <- as.character(saison[start])
season <- subset(Bundesliga, Bundesliga$Saison==jahr&(Bundesliga$Heim==Mannschaft|Bundesliga$Gast==Mannschaft))
dummy <- data.frame(season$Saison, season$Datum, season$Spieltag, season$Heim, season$Gast, season$Tore.Heim, season$Tore.Gast,season$Tore.Heim.Halbzeit, season$Tore.Gast.Halbzeit)
Uebersicht <- rbind(Uebersicht, dummy)
start<-start+1
}
colnames(Uebersicht) <- c("Saison", "Datum", "Spieltag", "Heim", "Gast", "Tore.Heim", "Tore.Gast", "Tore.Heim.Halbzeit","Tore.Gast.Halbzeit")
return(Uebersicht)
}
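# Illustrative usage (the team name must match the spelling used in the Bundesliga
# dataset; "Hamburger SV" is a hypothetical example):
# Bundesliga.Mannschaft("Hamburger SV", Saison = "2005/2006")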
|
/scratch/gouwar.j/cran-all/cranData/wikibooks/R/Bundesliga.Mannschaft.R
|
`Bundesliga.Tabelle` <-
function(Saison, Spieltag=1, output="Tabelle"){
Bundesliga <- wikibooks::Bundesliga
saison <- as.character(Saison)
  ### The number of teams changed from 1965 onward
if(saison=="1963/1964" | saison=="1964/1965"){
dummy <- integer(16)
pdummy <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)
    c <- 9 # offset
    d <- 8 # number of matches per matchday
if(Spieltag > 30) Spieltag <- 30
}else{
    # in 1991/1992 there were 20 teams because of the integration of the East German clubs
if(saison=="1991/1992"){
if(Spieltag > 38) Spieltag <- 38
dummy <- integer(20)
pdummy <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)
      c <- 11 # offset
      d <- 10 # number of matches per matchday
}else{
      # otherwise there are always 18 teams
if(Spieltag > 34) Spieltag <- 34
dummy <- integer(18)
pdummy <- c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)
      c <- 10 # offset
      d <- 9 # number of matches per matchday
}
}
#---------------------------------------------------
  ### Introduction of the 3-point rule from the 1995/96 season
punkteregel <- c("1963/1964", "1964/1965", "1965/1966", "1966/1967", "1967/1968", "1968/1969", "1969/1970", "1970/1971", "1971/1972", "1972/1973", "1973/1974", "1974/1975", "1975/1976", "1976/1977", "1977/1978", "1978/1979", "1979/1980", "1980/1981", "1981/1982", "1982/1983", "1983/1984", "1984/1985", "1985/1986", "1986/1987", "1987/1988", "1988/1989", "1989/1990", "1990/1991", "1991/1992", "1992/1993", "1993/1994", "1994/1995")
regel.test <- is.element(saison, punkteregel)
if(regel.test==TRUE){
    switch <- 0 # 2-point rule
}else{
    switch <- 1 # 3-point rule
}
#--------------------------------------------------------
sz <- 1
liga <- subset(Bundesliga,Bundesliga$Saison==saison)
liga2 <- subset(liga, liga$Spieltag==1)
teams <- as.character(liga2$Heim)
teams <- c(teams, as.character(liga2$Gast))
#cat(length(teams))
Tabelle <- data.frame(dummy,teams,dummy,dummy,dummy,dummy,dummy,dummy,dummy,dummy)
colnames(Tabelle) <- c("Platz", "Mannschaft", "Spiele", "G","U","V","t","g","Diff", "Punkte")
if(Saison=="1993/1994"){
Tabelle$Punkte[Tabelle$Mannschaft=="Dynamo Dresden"] <- -4 # Abzug wegen Lizenzverstoessen
}
if(Saison=="1999/2000"){
Tabelle$Punkte[Tabelle$Mannschaft=="Eintracht Frankfurt"] <- -2 # Abzug wegen Lizenzverstoessen
}
if(Saison=="2003/2004"){
Tabelle$Punkte[Tabelle$Mannschaft=="1. FC Kaiserslautern"] <- -3 # Abzug wegen Lizenzverstoessen
}
Platzierung <- data.frame(teams,pdummy)
#cat(Platzierung)
while(sz < (Spieltag+1)){
### Spieltag-Ergebnisse
a <- 1
while(a<c){
b <- a+d
sliga <- subset(liga, liga$Spieltag==sz)
teams <- as.character(sliga$Heim)
teams <- c(teams, as.character(sliga$Gast))
heim <- teams[a]
gast <- teams[b]
heim.tore <- sliga$Tore.Heim[sliga$Heim==heim]
gast.tore <- sliga$Tore.Gast[sliga$Gast==gast]
Tabelle$Spiele[Tabelle$Mannschaft==heim] <- (Tabelle$Spiele[Tabelle$Mannschaft==heim] +1)
Tabelle$Spiele[Tabelle$Mannschaft==gast] <- (Tabelle$Spiele[Tabelle$Mannschaft==gast] +1)
Tabelle$t[Tabelle$Mannschaft==heim] <- (Tabelle$t[Tabelle$Mannschaft==heim] + heim.tore)
Tabelle$g[Tabelle$Mannschaft==heim] <- (Tabelle$g[Tabelle$Mannschaft==heim] + gast.tore)
Tabelle$t[Tabelle$Mannschaft==gast] <- (Tabelle$t[Tabelle$Mannschaft==gast] + gast.tore)
Tabelle$g[Tabelle$Mannschaft==gast] <- (Tabelle$g[Tabelle$Mannschaft==gast] + heim.tore)
heim.diff <- Tabelle$t[Tabelle$Mannschaft==heim] - Tabelle$g[Tabelle$Mannschaft==heim]
gast.diff <- Tabelle$t[Tabelle$Mannschaft==gast] - Tabelle$g[Tabelle$Mannschaft==gast]
Tabelle$Diff[Tabelle$Mannschaft==heim] <- heim.diff
Tabelle$Diff[Tabelle$Mannschaft==gast] <- gast.diff
Tabelle$Diff<-as.numeric(Tabelle$Diff)
if(gast.tore==heim.tore){
        # draw
Tabelle$U[Tabelle$Mannschaft==heim] <- (Tabelle$U[Tabelle$Mannschaft==heim] +1)
Tabelle$U[Tabelle$Mannschaft==gast] <- (Tabelle$U[Tabelle$Mannschaft==gast] +1)
Tabelle$Punkte[Tabelle$Mannschaft==heim] <- (Tabelle$Punkte[Tabelle$Mannschaft==heim] +1)
Tabelle$Punkte[Tabelle$Mannschaft==gast] <- (Tabelle$Punkte[Tabelle$Mannschaft==gast] +1)
}
if(gast.tore<heim.tore){
        # home team wins
Tabelle$G[Tabelle$Mannschaft==heim] <- (Tabelle$G[Tabelle$Mannschaft==heim] +1)
Tabelle$V[Tabelle$Mannschaft==gast] <- (Tabelle$V[Tabelle$Mannschaft==gast] +1)
if(switch==0){
Tabelle$Punkte[Tabelle$Mannschaft==heim] <- (Tabelle$Punkte[Tabelle$Mannschaft==heim] +2)
}else{
Tabelle$Punkte[Tabelle$Mannschaft==heim] <- (Tabelle$Punkte[Tabelle$Mannschaft==heim] +3)
}
}
if(gast.tore>heim.tore){
        # away team wins
Tabelle$V[Tabelle$Mannschaft==heim] <- (Tabelle$V[Tabelle$Mannschaft==heim] +1)
Tabelle$G[Tabelle$Mannschaft==gast] <- (Tabelle$G[Tabelle$Mannschaft==gast] +1)
if(switch==0){
Tabelle$Punkte[Tabelle$Mannschaft==gast] <- (Tabelle$Punkte[Tabelle$Mannschaft==gast] +2)
}else{
Tabelle$Punkte[Tabelle$Mannschaft==gast] <- (Tabelle$Punkte[Tabelle$Mannschaft==gast] +3)
}
}
a <- a+1
}
if(Saison=="1971/1972"){
Tabelle$Punkte[Tabelle$Mannschaft=="Arminia Bielefeld"] <- 0 # Bundesliga-Skandal
}
    ### Sorting is now done via indexing; why this works while the other approach does
    ### not is unclear, but it is related to the signs. Multiplying by -1 was tried
    ### first, but that did not work well either. To be checked again...
temp <- rev(order(Tabelle[,"Punkte"],Tabelle[,"Diff"]))
Tabelle <- Tabelle[temp,]
sort <- 1
rank <- 1
while(sort< 2*d+1){
sort.team <- Tabelle$Mannschaft[sort]
Tabelle$Platz[Tabelle$Mannschaft==sort.team] <- rank
pt <- sz+1
Platzierung[Platzierung$teams==sort.team, pt] <- rank
sort <- sort+1
rank <- rank+1
}
sz <- sz+1
}
if(output=="Tabelle") return(Tabelle)
if(output=="Platzierung") {
Reihen <- 1:Spieltag
Spalten <- Platzierung[,1]
Platzierung <- data.frame(matrix(unlist(unclass(Platzierung)),nrow=length(Platzierung),byrow=TRUE,dimnames=list(names(Platzierung),Platzierung[,0])))
Platzierung <- Platzierung[-1,]
colnames(Platzierung) <- Spalten
rownames(Platzierung) <- Reihen
return(Platzierung)
}
}
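# Illustrative usage: league table after matchday 17 of the 2005/2006 season, and the
# matchday-by-matchday rankings for the same period:
# Bundesliga.Tabelle("2005/2006", Spieltag = 17, output = "Tabelle")
# Bundesliga.Tabelle("2005/2006", Spieltag = 17, output = "Platzierung")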
|
/scratch/gouwar.j/cran-all/cranData/wikibooks/R/Bundesliga.Tabelle.R
|
`Bundesliga.XML` <-
function(Datei="Bundesliga.xml", Saison="all"){
Bundesliga <- wikibooks::Bundesliga
cat("Daten werden geschrieben...\r")
sink(Datei)
if(Saison=="all"){
saison <- as.character(c("1963/1964", "1964/1965", "1965/1966", "1966/1967", "1967/1968", "1968/1969", "1969/1970", "1970/1971", "1971/1972", "1972/1973", "1973/1974", "1974/1975", "1975/1976", "1976/1977", "1977/1978", "1978/1979", "1979/1980", "1980/1981", "1981/1982", "1982/1983", "1983/1984", "1984/1985", "1985/1986", "1986/1987", "1987/1988", "1988/1989", "1989/1990", "1990/1991", "1991/1992", "1992/1993", "1993/1994", "1994/1995", "1995/1996", "1996/1997", "1997/1998", "1998/1999", "1999/2000", "2000/2001", "2001/2002", "2002/2003", "2003/2004", "2004/2005", "2005/2006", "2006/2007"))
}else{
saison <- as.character(Saison)
}
saison.l <- length(saison)
spielnummer<-1
start.saison <- 1
ende.saison <- saison.l +1
cat("<?xml version='1.0' encoding='utf-8'?> \r")
cat("<Bundesliga>\r")
  while(start.saison<ende.saison){ # loop over each season
sjahr <- saison[start.saison]
cat(sep = "","\t<saison jahr=\"",sjahr , "\">\r")
saison.liga <- subset(Bundesliga, Bundesliga$Saison==sjahr)
anzahl.spieltage <- length(levels(as.factor(saison.liga$Spieltag)))
start.spieltag <- 1
ende.spieltag <- anzahl.spieltage+1
    while(start.spieltag<ende.spieltag){ # loop over the matchdays
cat(sep = "","\t\t<spieltag nummer=\"",start.spieltag,"\">\r")
spieltag.liga <- subset(saison.liga, saison.liga$Spieltag==start.spieltag)
anzahl.spiele <- length(spieltag.liga$Heim)
start.spiele <- 1
ende.spiele <- anzahl.spiele+1
      while(start.spiele<ende.spiele){ # loop over the matches
cat(sep = "","\t\t\t<spiel nummer=\"",spielnummer ,"\">\r")
Datum <- as.character(spieltag.liga$Datum[start.spiele])
Anpfiff <- as.character(spieltag.liga$Anpfiff[start.spiele])
Heim <- as.character(spieltag.liga$Heim[start.spiele])
Gast <- as.character(spieltag.liga$Gast[start.spiele])
Heimtore <- spieltag.liga$Tore.Heim[start.spiele]
Gasttore <- spieltag.liga$Tore.Gast[start.spiele]
Heimhalbzeit <- spieltag.liga$Tore.Heim.Halbzeit[start.spiele]
Gasthalbzeit <- spieltag.liga$Tore.Gast.Halbzeit[start.spiele]
cat(sep = "","\t\t\t\t<datum>",Datum,"</datum>\r")
cat(sep = "","\t\t\t\t<anpfiff>",Anpfiff,"</anpfiff>\r")
cat(sep = "","\t\t\t\t<heim>",Heim,"</heim>\r")
cat(sep = "","\t\t\t\t<gast>",Gast,"</gast>\r")
cat(sep = "","\t\t\t\t<heimtore>",Heimtore,"</heimtore>\r")
cat(sep = "","\t\t\t\t<gasttore>",Gasttore,"</gasttore>\r")
cat(sep = "","\t\t\t\t<heimhalbzeit>",Heimhalbzeit,"</heimhalbzeit>\r")
cat(sep = "","\t\t\t\t<gasthalbzeit>",Gasthalbzeit,"</gasthalbzeit>\r")
cat(sep = "","\t\t\t</spiel>\r")
start.spiele<- start.spiele+1
spielnummer<-spielnummer+1
}
cat("\t\t</spieltag>\r")
start.spieltag <- start.spieltag+1
}
cat("\t</saison>\r")
start.saison <- start.saison+1
}
cat("</Bundesliga>\r")
sink()
}
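# Illustrative usage (the file name is an arbitrary example): write one season to XML
# Bundesliga.XML(Datei = "bundesliga_2005_2006.xml", Saison = "2005/2006")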
|
/scratch/gouwar.j/cran-all/cranData/wikibooks/R/Bundesliga.XML.R
|
`sens.spec` <-
structure(function(x,y, risk=1, dir="LESS", plot=F) {
frame <- data.frame(x,y)
  var.min <- min(na.omit(x)) # which is the lowest value?
  var.max <- max(na.omit(x)) # which is the highest value?
dummy <- var.min
cat("\r")
cat(c("Minimum of value: ", var.min, "\r"))
cat(c("Maximum of value: ", var.max, "\r", "\r"))
cat(c("Risk is coded with: ", risk, "\r"))
if (dir=="GREATER"|dir=="G"|dir=="greater"|dir=="g") {
cat("greater value means higher risk", "\r", "\r")
}
if (dir=="LESS"|dir=="L"|dir=="less"|dir=="l") {
cat("lesser value means higher risk", "\r", "\r")
}
  sesp.table <- cbind(999, 999, 999, 999, 999, 999, 999) # placeholder row for indexing, removed later (see below)
while(dummy <= var.max) {
### true/false positive/negative
if (dir=="LESS"|dir=="L"|dir=="less"|dir=="l") {
tp <- length(frame$x[frame$x<=dummy & frame$y==risk]) # true positive
fp <- length(frame$x[frame$x<=dummy & frame$y!=risk]) # false positive
tn <- length(frame$x[frame$x>dummy & frame$y!=risk]) # true negative
fn <- length(frame$x[frame$x>dummy & frame$y==risk]) # false negative
}
if (dir=="GREATER"|dir=="G"|dir=="greater"|dir=="g") {
tp <- length(frame$x[frame$x>=dummy & frame$y==risk]) # true positive
fp <- length(frame$x[frame$x>=dummy & frame$y!=risk]) # false positive
tn <- length(frame$x[frame$x<dummy & frame$y!=risk]) # true negative
fn <- length(frame$x[frame$x<dummy & frame$y==risk]) # false negative
}
    sensi <- round((tp / (tp+fn)),digits=3) # sensitivity
    speci <- round((tn / (tn+fp)),digits=3) # specificity
sesp.table <- rbind(sesp.table, c(dummy, sensi, speci, tp,fp,tn,fn))
dummy <- (dummy+1)
}
colnames(sesp.table) <- c("Value", "Sensitivy", "Specificy", "tp", "fp", "tn", "fn")
  sesp.table <- sesp.table[-1,] # remove the "999" placeholder row
if (plot==T) {
plot.table <- cbind(sesp.table[,2], sesp.table[,3])
plot(plot.table)
}
if (plot==F) {
print(sesp.table)
cat("\r")
cat("Cut-Off-Points include positive cases", "\r")
cat("\r")
}
}
, comment = "Funktion zur Berechnung von Sensitivitaet und Spezifitaet")
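# Illustrative usage (hypothetical data): numeric marker x and binary outcome y,
# where 1 marks the risk group and lower marker values mean higher risk:
# x <- c(2, 3, 5, 6, 8, 9)
# y <- c(1, 1, 1, 0, 0, 0)
# sens.spec(x, y, risk = 1, dir = "LESS")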
|
/scratch/gouwar.j/cran-all/cranData/wikibooks/R/sens.spec.R
|
utils::globalVariables(c("."))
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/globals.R
|
#' Define a term from Wikipedia
#'
#' @description
#' `wiki_define()` displays plaintext extract(s) of the given term(s) from
#'' the Wikipedia article(s).
#'
#' @param term A non-empty character string or vector giving the name(s) of the term to be searched
#' @param sentences An integer (or whole number) giving the number of sentences to return
#'
#' @return An extract from the Wikipedia article
#'
#' @examples
#' wiki_define('R (programming language)')
#'
#' animals <- data.frame(name = c("dog", "cat"))
#' animals$definition <- wiki_define(animals$name, sentences = 1)
#' print(animals)
wiki_define <- function(term = NULL, sentences = 5L) {
if (!is.numeric(sentences)) {
sentences <- 10L
    warning("'sentences' in wiki_define() should be an integer; falling back to 10L")
}
sentences <- trunc(sentences)
tryCatch({
sapply(term, function(x){
response <- xml2::read_xml(paste0("https://en.wikipedia.org/w/api.php?action=query&prop=extracts&exsentences=",
sentences,
"&exlimit=1&titles=",
utils::URLencode(x),
"&explaintext=1&format=xml"))
xml2::xml_text(xml2::xml_find_first(response, "query/pages/page/extract"))
})
},
error = function (e) {"I got nothin'"},
warning = function (w) {"I got nothin'"})
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_define.R
|
#' Generate 'did you know' facts from the Wikipedia main page on a specified date.
#'
#' @description
#' `wiki_didyouknow()` generates 'did you know' facts from the Wikipedia main page on a specified date.
#'
#' @param n_facts An integer determining the number of facts that will be generated, up to a limit of the maximum facts for the date specified.
#' @param date A date string of the form YYYY-MM-DD. Default value is a random date since 1 January 2015.
#' @param bare_fact Logical. Determining whether the fact should be quoted as is or surrounded by a preamble and courtesy statement.
#'
#' @return A vector of strings with random 'did you know' facts from Wikipedia's main page if it exists for the date specified - otherwise "I got nothin'"
#'
#' @examples
#' wiki_didyouknow(n_facts = 2, date = '2020-05-02')
wiki_didyouknow <- function(n_facts = 1L, date = sample(seq(as.Date("2015-01-01"), Sys.Date() - 1, by = "day"), 1), bare_fact = FALSE) {
locale <- Sys.getlocale("LC_TIME")
if (.Platform$OS.type == "windows") {
invisible(Sys.setlocale("LC_TIME", "English"))
} else {
invisible(Sys.setlocale("LC_TIME", "en_US.UTF-8"))
}
# get url from input and read html
date <- as.Date(date)
date1 <- format(date, "%Y_%B_")
date2 <- gsub("^0+", "", format(date, "%d"))
date_str <- paste0(date1, date2)
invisible(Sys.setlocale("LC_TIME", locale))
input <- paste0("https://en.wikipedia.org/wiki/Wikipedia:Main_Page_history/", date_str)
tryCatch({
input <- url(input, "rb")
wiki_page <- xml2::read_html(input, fill = TRUE)
close(input)
# scrape list data
dyk <- wiki_page %>%
rvest::html_nodes(xpath = '//*[@id="mp-dyk"]') %>%
rvest::html_nodes("li") %>%
rvest::html_text() %>%
subset(grepl("... that", .))
n <- min(n_facts, length(dyk))
dyk <- dyk[grepl("... that", dyk)] %>%
sample(n)
if (bare_fact == TRUE) {
dyk
} else {
paste0("Did you know ", gsub("\\.\\.\\. ", "", dyk), " (Courtesy of Wikipedia)")
}
},
error = function (e) {"I got nothin'"},
warning = function (w) {"I got nothin'"})
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_didyouknow.R
|
#' Generate news items from the Wikipedia main page on a specified date.
#'
#' @description
#' `wiki_inthenews()` generates news items from the Wikipedia main page on a specified date.
#'
#' @param n_facts An integer determining the number of facts that will be generated, up to a limit of the maximum facts for the date specified.
#' @param date A date string of the form YYYY-MM-DD. Default value is a random date since 1 January 2015.
#' @param bare_fact Logical. Determining whether the fact should be quoted as is or surrounded by a preamble and courtesy statement.
#'
#' @return A vector of strings with random 'in the news' items from Wikipedia's main page, if it exists for the date specified - otherwise "I got nothin'"
#'
#' @examples
#' wiki_inthenews(n_facts = 1, date = '2020-05-02')
wiki_inthenews <- function(n_facts = 1L, date = sample(seq(as.Date("2015-01-01"), Sys.Date() - 1, by = "day"), 1), bare_fact = FALSE) {
locale <- Sys.getlocale("LC_TIME")
if (.Platform$OS.type == "windows") {
invisible(Sys.setlocale("LC_TIME", "English"))
} else {
invisible(Sys.setlocale("LC_TIME", "en_US.UTF-8"))
}
# get url from input and read html
date <- as.Date(date)
date1 <- format(date, "%Y_%B_")
date2 <- gsub("^0+", "", format(date, "%d"))
date_str <- paste0(date1, date2)
invisible(Sys.setlocale("LC_TIME", locale))
input <- paste0("https://en.wikipedia.org/wiki/Wikipedia:Main_Page_history/", date_str)
tryCatch({
input <- url(input, 'rb')
wiki_page <- xml2::read_html(input, fill = TRUE)
close(input)
# scrape list data
itn <- wiki_page %>%
rvest::html_nodes(xpath = '//*[@id="mp-itn"]') %>%
rvest::html_nodes("li") %>%
rvest::html_text() %>%
subset(nchar(.) > 40)
n <- min(n_facts, length(itn))
itn <- itn %>%
sample(n)
if (bare_fact == TRUE) {
itn
} else {
paste0("Here's some news from ", format(date, "%d %B %Y"), ". ", itn, " (Courtesy of Wikipedia)")
}
},
error = function (e) {"I got nothin'"},
warning = function (w) {"I got nothin'"})
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_inthenews.R
|
#' Generate 'on this day' facts from the Wikipedia main page on a specified date.
#'
#' @description
#' `wiki_onthisday()` generates 'on this day' facts from the Wikipedia main page on a specified date.
#'
#' @param n_facts An integer determining the number of facts that will be generated, up to a limit of the maximum facts for the date specified.
#' @param date A date string of the form YYYY-MM-DD. Default value is a random date since 1 January 2015.
#' @param bare_fact Logical. Determining whether the fact should be quoted as is or surrounded by a preamble and courtesy statement.
#'
#' @return A vector of strings with random 'on this day' facts from Wikipedia's main page if it exists for the date specified - otherwise "I got nothin'"
#'
#' @examples
#' wiki_onthisday(date = '2020-05-02')
wiki_onthisday <- function(n_facts = 1L, date = sample(seq(as.Date("2015-01-01"), Sys.Date() - 1, by = "day"), 1), bare_fact = FALSE) {
locale <- Sys.getlocale("LC_TIME")
if (.Platform$OS.type == "windows") {
invisible(Sys.setlocale("LC_TIME", "English"))
} else {
invisible(Sys.setlocale("LC_TIME", "en_US.UTF-8"))
}
# get url from input and read html
date <- as.Date(date)
date1 <- format(date, "%Y_%B_")
date2 <- gsub("^0+", "", format(date, "%d"))
date_str <- paste0(date1, date2)
invisible(Sys.setlocale("LC_TIME", locale))
input <- paste0("https://en.wikipedia.org/wiki/Wikipedia:Main_Page_history/", date_str)
tryCatch({
input <- url(input, 'rb')
wiki_page <- xml2::read_html(input, fill = TRUE)
close(input)
# scrape list data
otd <- wiki_page %>%
rvest::html_nodes(xpath = '//*[@id="mp-otd"]') %>%
rvest::html_nodes("li") %>%
rvest::html_text() %>%
subset(grepl("^\\d{3}", .))
n <- min(n_facts, length(otd))
otd <- otd %>%
sample(n)
if(bare_fact == TRUE) {
otd
} else {
paste0("Did you know that on ", format(date, "%B"), " ", date2, " in ", otd, " (Courtesy of Wikipedia)")
}
},
error = function (e) {"I got nothin'"},
warning = function (w) {"I got nothin'"})
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_onthisday.R
|
#' Send queries to Wikidata and receive results as dataframe
#'
#' @description
#' `wiki_query()` sends a SPARQL query to Wikidata and collects the results in a dataframe
#'
#' @param qry A character string representing a SPARQL query to be sent to Wikidata
#' @return A dataframe of results
#'
#' @examples
#' # List five diseases
#' query <- 'SELECT ?itemLabel WHERE {
#' ?item wdt:P31 wd:Q12136. #instance of disease
#' SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
#' }
#' LIMIT 5'
#' wiki_query(query)
wiki_query <- function(qry) {
# check the missing argument
if (missing(qry)) {
stop("No SPARQL query provided.")
}
# constants
LIMIT <- 2048
WIKIDATA <- "https://query.wikidata.org/sparql"
# remove unnecessary white characters
qry <- gsub("[[:blank:]]+", " ", qry)
# encode the SPARQL query for URL
qry <- utils::URLencode(qry)
qry <- gsub("#", "%23", qry)
# check SPARQL query length after the encoding
if (length(qry) > LIMIT) {
stop(paste("Too long SPARQL query: maximum is", LIMIT, "characters"))
}
# get URL response from Wikidata
spr <- url(
description = paste0(WIKIDATA, "?query=", qry),
headers = c("Accept" = "text/csv; charset=utf-8")
)
response <- tryCatch(
utils::read.csv(spr, na.strings = "", encoding="UTF-8"),
error = function(e) {
message(paste(e))
close(spr)
return(data.frame())
}
)
# check empty response
if (nrow(response) == 0) {
return(data.frame())
}
# return the full data frame with response
return(response)
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_query.R
|
#' Generate random facts from historic Wikipedia main pages
#'
#' @description
#' `wiki_randomfact()` generates random facts from Wikipedia main pages after 1 January 2015.
#'
#' @param n_facts An integer determining the number of facts that will be generated.
#' @param fact String to determine the type of fact to be randomly generated - "any" will generate a random selection.
#' @param bare_fact Logical. Determining whether the fact should be quoted as is or surrounded by a preamble and courtesy statement.
#' @param repeats Logical. Determining if repeat facts should be permitted. If FALSE the number of facts may be less than requested.
#' @return A vector of strings with random items from Wikipedia's main page - otherwise "I got nothin'"
#'
#' @examples
#' wiki_randomfact()
wiki_randomfact <- function(n_facts = 1L, fact = c("any", "didyouknow", "onthisday", "inthenews"),
bare_fact = FALSE, repeats = TRUE) {
fact <- match.arg(fact)
fun1 <- function() {
wiki_didyouknow(date = sample(seq(as.Date("2015-01-01"), Sys.Date() - 1, by = "day"), 1), bare_fact = bare_fact)
}
fun2 <- function() {
wiki_onthisday(date = sample(seq(as.Date("2015-01-01"), Sys.Date() - 1, by = "day"), 1), bare_fact = bare_fact)
}
fun3 <- function() {
wiki_inthenews(date = sample(seq(as.Date("2015-01-01"), Sys.Date() - 1, by = "day"), 1), bare_fact = bare_fact)
}
out <- c()
for (i in 1:n_facts) {
s <- switch(fact, "any" = sample(1:3, 1), "didyouknow" = 1, "onthisday" = 2, "inthenews" = 3)
out[i] <- eval(parse(text = paste0('fun', s, "()")))
}
  if (repeats == FALSE) {
    unique(out)
  } else {
out
}
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_randomfact.R
|
#' Display results of a Wikipedia search in the browser
#'
#' @description
#' `wiki_search()` displays the results of a Wikipedia search in the browser.
#'
#' @param term A non-empty character string giving the name of the term to be searched
#' @param browser A non-empty character string passed to [browseURL()] to determine the browser used.
#'
#' @return A display of the results of the search in the browser.
#'
#' @examples
#' wiki_search('R (programming language)')
wiki_search <- function(term = NULL, browser = getOption("browser")) {
url <- paste0("https://en.wikipedia.org/w/index.php?search=", utils::URLencode(term))
browseURL(url, browser = browser)
}
|
/scratch/gouwar.j/cran-all/cranData/wikifacts/R/wiki_search.R
|
#' lake_wiki_browser
#' @param lake_wiki_obj data.frame output of lake_wiki
#' @param lake_names fallback character vector of lake names
#' @export
#' @examples \dontrun{
#' lake_wiki_browser(lake_names = "Lake Mendota")
#' lake_wiki_browser(lake_names = c("Lake Mendota", "Lake Champlain"))
#' lake_wiki_browser(lake_wiki(c("Lake Mendota", "Lake Champlain")))
#' }
lake_wiki_browser <- function(lake_wiki_obj = NA, lake_names = NA){
stopifnot("Must provide either a name or a lake_wiki output object" =
!any(is.na(lake_names)) | inherits(lake_wiki_obj, "data.frame"))
# stopifnot("Must provide one of either a name or a lake_wiki output object" =
# !is.na(name) & inherits(lake_wiki_obj, "data.frame"))
open_page <- function(x){
page_metadata <- page_info("en","wikipedia", page = x)$query$pages
page_link <- page_metadata[[1]][["fullurl"]]
utils::browseURL(page_link)
}
is_lake_wiki_output <- as.character(inherits(lake_wiki_obj, "data.frame"))
lake_names <- switch(is_lake_wiki_output,
"FALSE" = lake_names,
"TRUE" = lake_wiki_obj[,"Name"])
invisible(sapply(lake_names, open_page))
}
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/browser.R
|
#' Clean output of lake_wiki
#'
#' Currently the only operation is to standardize the units of numeric fields.
#' See the output units with the unit_key_ function.
#'
#' @param dt output of the lake_wiki function
#'
#' @export
#' @examples \dontrun{
#' dt <- lake_wiki(c("Lake Mendota","Flagstaff Lake (Maine)"))
#' dt_clean <- lake_clean(dt)
#'
#' dt <- lake_wiki(c("Lake Mendota","Trout Lake (Wisconsin)"))
#' dt_clean <- lake_clean(dt)
#' }
lake_clean <- function(dt){
unit_key_numeric <- dplyr::filter(unit_key_(), format == "n" & !is.na(units))
for(i in seq_len(nrow(unit_key_numeric))){
var <- unit_key_numeric$Variable[i]
if(var %in% names(dt)){
# print(var)
dt[,var] <- sapply(dt[,var],
function(x) parse_unit_brackets(x, unit_key_numeric$units[i]))
}
}
dt
}
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/clean.R
|
#' lake_wiki
#' @param lake_name character
#' @param map logical produce map of lake location?
#' @param clean logical enforce standardized units following wikilake::unit_key_()?
#' @param ... arguments passed to maps::map
#' @importFrom tidyr pivot_wider
#' @importFrom dplyr mutate matches
#' @export
#' @examples \dontrun{
#' lake_wiki("Lake Peipsi")
#' lake_wiki("Flagstaff Lake (Maine)")
#' lake_wiki("Lake George (Michigan-Ontario)")
#' lake_wiki("Lake Michigan", map = TRUE, "usa")
#' lake_wiki("Lac La Belle, Michigan")
#' lake_wiki("Lake Antoine")
#' lake_wiki("Lake Baikal")
#' lake_wiki("Dockery Lake (Michigan)")
#' lake_wiki("Coldwater Lake")
#' lake_wiki("Bankson Lake")
#' lake_wiki("Fisher Lake (Michigan)")
#' lake_wiki("Beals Lake")
#' lake_wiki("Devils Lake (Michigan)")
#' lake_wiki("Lake Michigan")
#' lake_wiki("Fletcher Pond")
#' lake_wiki("Lake Bella Vista (Michigan)")
#' lake_wiki("Lake Mendota")
#' lake_wiki("Lake Mendota", map = TRUE, "usa")
#' lake_wiki("Lake Nipigon", map = TRUE, regions = "Canada")
#' lake_wiki("Trout Lake (Wisconsin)")
#'
#' # a vector of lake names
#' lake_wiki(c("Lake Mendota", "Trout Lake (Wisconsin)"))
#' lake_wiki(c("Lake Mendota", "Trout Lake (Wisconsin)"), map = TRUE)
#'
#' # throws warning on redirects
#' lake_wiki("Beals Lake")
#'
#' # ignore notability box
#' lake_wiki("Rainbow Lake (Waterford Township, Michigan)")
#' }
lake_wiki <- function(lake_name, map = FALSE, clean = TRUE, ...) {
.lake_wiki <- function(lake_name, ...) {
res <- get_lake_wiki(lake_name)
if (!is.null(res)) {
res <- tidy_lake_df(res)
}
res
}
res <- lapply(lake_name, function(x) .lake_wiki(x, map = map))
res <- res[sapply(res, function(x) !is.null(x))]
res <- data.frame(dplyr::bind_rows(
lapply(res, function(x) {
tidyr::pivot_wider(
data.frame(
field = names(x),
values = t(x)),
names_from = "field", values_from = "values")
})
), check.names = FALSE)
res <- dplyr::mutate(res, dplyr::across(dplyr::matches("Lon|Lat"), as.numeric))
if (map) {
map_lake_wiki(res, ...)
}
if (clean) {
res <- lake_clean(res)
}
res
}
#' get_lake_wiki
#' @import WikipediR
#' @import rvest
#' @importFrom xml2 read_html
#' @param lake_name character
#' @param cond character stopping condition
#' @examples \dontrun{
#' lake_name <- "Lake Nipigon"
#' get_lake_wiki(lake_name)
#' }
get_lake_wiki <- function(lake_name, cond = NA) {
# display page link
page_metadata <- page_info("en", "wikipedia", page = lake_name)$query$pages
page_link <- page_metadata[[1]][["fullurl"]]
message(paste0("Retrieving data from: ", page_link))
res <- get_content(lake_name)
if (is_redirect(res)) {
lake_name <- page_redirect(res)
message(paste0("Attempting redirect to '", lake_name, "'"))
res <- get_content(lake_name)
}
res <- tryCatch(
{
res <- rvest::html_nodes(res, "table")
meta_index <- grep("infobox vcard", rvest::html_attr(res, "class"))
if (is_not_lake_page(res, meta_index)) stop(cond)
if (length(meta_index) == 0) meta_index <- 1
res <- rvest::html_table(res[max(meta_index)])[[1]]
# create missing names
# rm rows that are just repeating the lake name
if (all(nchar(names(res)) < 3)) {
names(res) <- res[1, ]
}
res <- res[!apply(res, 1, function(x) all(x == names(res)[1])), ]
res <- suppressWarnings(apply(res, 2,
function(x) stri_encode(stri_trans_general(x,"Latin-ASCII"), "", "UTF-8")))
},
error = function(cond) {
      message("'", paste0(lake_name,
                          "' is missing a metadata table or does not have its own page"))
return(NA)
}
)
if (any(!is.na(res))) {
# format coordinates ####
has_multiple_rows <- !is.null(nrow(res))
if (has_multiple_rows) {
coords_raw <- res[which(res[, 1] == "Coordinates"), 2]
} else {
      coords_raw <- res[2]
}
is_tidy_coords <- nchar(coords_raw) < 33
if (!is_tidy_coords) {
coords <- strsplit(coords_raw, "\\/")[[1]]
coords <- sapply(coords, trimws)
coords <- coords[stringr::str_starts(coords, "\\d")]
coords <- sapply(coords, function(x) strsplit(x, "Coordinates: "))
coords <- sapply(coords, function(x) strsplit(x, " "))
coords <- paste(unlist(coords), collapse = ",")
coords <- strsplit(coords, ",")[[1]]
coords <- coords[!(seq_len(length(coords)) %in%
c(which(nchar(coords) == 0),
grep("W", coords),
grep("E", coords),
grep("S", coords),
grep("N", coords)))][1:2]
coords <- gsub("\\[.\\]", "", coords)
if (any(nchar(coords) > 5)) {
coords <- sapply(gsub(";", "", coords),
function(x) substring(x, 1, nchar(x) - 1))
coords <- suppressWarnings(paste(as.numeric(coords), collapse = ","))
} else {
coords <- paste(as.numeric(gsub(";", "", coords)), collapse = ",")
}
} else {
      is_west <- length(grep("W", coords_raw)) > 0
      coords <- strsplit(coords_raw, ", ")[[1]]
coords <- strsplit(coords, "[^0-9]+")
coords <- lapply(coords, as.numeric)
coords <- lapply(coords, function(x) x[1:3])
coords <- unlist(lapply(coords, dms2dd))
if (is_west) {
coords[2] <- coords[2] * -1
}
coords <- paste(coords, collapse = ",")
}
if (has_multiple_rows) {
res[which(res[, 1] == "Coordinates"), 2] <- coords
} else {
res[2] <- coords
}
# rm junk rows
if (has_multiple_rows) {
if (any(res[, 1] == "")) {
res <- res[-which(res[, 1] == ""), ]
}
if (any(nchar(res[, 1]) > 20)) {
res <- res[-which(nchar(res[, 1]) > 20), ]
}
if (length(grep("well-defined", res[, 1])) != 0) {
res <- res[!(1:nrow(res) %in% grep("well-defined", res[, 1])), ]
message("Shore length is not a well-defined measure.")
}
if (length(grep("Islands", res[, 1])) != 0) {
res <- res[!(1:nrow(res) %in% grep("Islands", res[, 1])), ]
}
if (length(grep("Settlements", res[, 1])) != 0) {
res <- res[!(1:nrow(res) %in% grep("Settlements", res[, 1])), ]
}
if (length(grep("Sign", res[, 1])) != 0) {
res <- res[!(1:nrow(res) %in% grep("Sign", res[, 1])), ]
}
}
res
}
}
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/get.R
|
#' map_lake_wiki
#' @param res data.frame output of get_lake_wiki
#' @param ... arguments passed to maps::map
#' @importFrom maps map
#' @importFrom sp coordinates
#' @importFrom graphics points
#' @examples \dontrun{
#' map_lake_wiki(lake_wiki("Corey Lake"), database = "usa")
#'
#' map_lake_wiki(lake_wiki("Lake Nipigon"), regions = "Canada")
#' }
map_lake_wiki <- function(res, ...){
coords <- res[,c("Lon", "Lat")]
res <- data.frame(as.matrix(coords))
sp::coordinates(res) <- ~Lon + Lat
maps::map(...)
points(res, col = "red", cex = 1.5, pch = 19)
}
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/map.R
|
# https://en.wikipedia.org/wiki/Template:Infobox_body_of_water
# specify default units
unit_key_ <- function(){
unit_key <-
"Variable, format, units\n
Name, c, NA\n
Location, c, NA\n
Group, ?, NA\n
Coordinates, ?, NA\n
Type, ?, NA\n
Etymology, ?, NA\n
Part of, ?, NA\n
Primary inflows, ?, NA\n
River sources, ?, NA\n
Primary outflows, ?, NA\n
Ocean/sea sources, ?, NA\n
Catchment area, n, km2\n
Basin countries, ?, NA\n
Managing agency, ?, NA\n
Designation, ?, NA\n
Built, ?, NA\n
Construction engineer, ?, NA\n
First flooded, ?, NA\n
Max. length, n, km\n
Max. width, n, km\n
Surface area, n, km2\n
Average depth, n, m\n
Max. depth, n, m\n
Water volume, n, m3\n
Residence time, n, years\n
Salinity, n, NA\n
Shore length1, n, km\n
Surface elevation, n, m\n
Max. temperature, n, NA\n
Min. temperature, n, NA\n
Frozen, ?, NA\n
Islands, ?, NA\n
Sections/sub-basins, ?, NA\n
Trenches, ?, NA\n
Benches, ?, NA\n
Settlements, ?, NA\n
Website, ?, NA\n
References, ?, NA"
read.csv(textConnection(unit_key), stringsAsFactors = FALSE,
strip.white = TRUE, sep = ",")
}
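# Strip the converted (parenthesised) units from numeric fields, preferring the
# default unit listed in unit_key_(), and coerce the values to 'units' quantities
# where possible.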
tidy_units <- function(res){
unit_key <- unit_key_()
known_units <- c("m", "km2", "years", "sq mi", "ha", "m3", "acres", "sq. km", "days", "acre feet")
numeric_cols <- unit_key$Variable[unit_key$format == "n"]
numeric_cols <- names(res) %in% numeric_cols
numeric_cols <- names(res)[numeric_cols]
if(length(numeric_cols) == 0){
res
}else{
specified_cols <- apply(res, 2,
function(x) any(
stringr::str_detect(x, known_units)))
specified_cols <- names(res)[specified_cols]
non_specified_cols <- numeric_cols[!(numeric_cols %in% specified_cols)]
if(length(non_specified_cols) > 0){
tryCatch({
res[,non_specified_cols] <- unit_key[unit_key$Variable %in% non_specified_cols,]
}, warning = function(w) {
res
})
}
# strip converted units
# in case of a choice prefer default
units_df <- data.frame(
zero_units = sapply(res[,numeric_cols], function(x) pull_units(x, 0)),
first_units = sapply(res[,numeric_cols], function(x) pull_units(x, 1)),
second_units = sapply(res[,numeric_cols], function(x) pull_units(x, 2)))
units_df$Variable <- row.names(units_df)
units_df$use <- NA
units_df <- merge(units_df, unit_key,
all.y = FALSE, all.x = TRUE, sort = FALSE)
units_df$use <- lapply(seq_len(nrow(units_df)), function(x) {
res <- which(units_df$units[x] ==
units_df[x, c("zero_units", "first_units", "second_units")]) - 1
if(length(res) < 1){
res <- 0
}else{
if(length(res) > 1){
res <- res[1]
}
}
res
})
res[,numeric_cols] <- sapply(seq_len(nrow(units_df)), function(x)
pull_position(res[, numeric_cols[x]], units_df$use[x]))
# assign units using the units package
# res[,numeric_cols]
quantities <- lapply(seq_len(length(numeric_cols)), function(x){
quantity <- res[,numeric_cols[x]]
quantity <- gsub(",", "", quantity)
quantity <- strsplit(quantity, " ")[[1]]
tryCatch(
units::set_units(as.numeric(quantity[1][1]),
units::as_units(quantity[2]),
mode = "standard"),
error = function(e){
trimws(paste(quantity, collapse = " "))
})
})
names(quantities) <- numeric_cols
quantities <- as.data.frame(quantities)
names(quantities) <- numeric_cols
res[,numeric_cols] <- quantities
}
res
}
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/units.R
|
#' dms2dd
#' @description Convert numeric coordinate vectors in degrees, minutes, and seconds to decimal degrees
#' @param x numeric vector of length 3 corresponding to degrees, minutes, and seconds
#' @export
#' @examples
#' dt <- rbind(c(25, 12, 53.66), c(-80, 32, 00.61))
#' apply(dt, 1, function(x) dms2dd(x))
dms2dd <- function(x) {
if (x[1] > 0) {
x[1] + x[2] / 60 + x[3] / 60 / 60
} else {
x[1] - x[2] / 60 - x[3] / 60 / 60
}
}
#' tidy_lake_df
#' @param lake data.frame output of get_lake_wiki
#' @importFrom stringr str_extract
tidy_lake_df <- function(lake) {
lake <- rbind(c("Name", colnames(lake)[1]), lake)
res <- list_to_df(lake)
res <- tidy_coordinates(res)
res <- tidy_depths(res)
res <- rm_line_breaks(res)
res <- tidy_units(res)
res
}
list_to_df <- function(ll) {
df_names <- ll[, 1]
df <- as.data.frame(t(ll[, -1]), stringsAsFactors = FALSE)
colnames(df) <- df_names
df
}
get_content <- function(lake_name) {
res <- WikipediR::page_content("en", "wikipedia", page_name = lake_name,
as_wikitext = FALSE)
res <- res$parse$text[[1]]
res <- xml2::read_html(res, encoding = "UTF-8")
res
}
is_redirect <- function(res) {
length(
grep("redirect",
rvest::html_attr(rvest::html_nodes(res, "div"), "class"))
) > 0
}
page_redirect <- function(res) {
rvest::html_attr(rvest::html_nodes(res, "a"), "title")[1]
}
is_not_lake_page <- function(res, meta_index) {
no_meta_index <- length(meta_index) == 0
if (no_meta_index) meta_index <- 1
res <- rvest::html_table(res[meta_index])[[1]]
has_keywords <- !any(sapply(c("lake",
"tributaries",
"outflow",
"elevation",
"coordinates"), function(x) stringr::str_detect(unlist(res), x)))
no_meta_index & has_keywords
}
tidy_coordinates <- function(res) {
lat <- as.numeric(strsplit(res$Coordinates, ",")[[1]][1])
lon <- as.numeric(strsplit(res$Coordinates, ",")[[1]][2])
res$Lat <- lat
res$Lon <- lon
res[, !(names(res) %in% c("Coordinates", "- coordinates"))]
}
tidy_depths <- function(res) {
depth_col_pos <- grep("depth", names(res))
depths <- res[, depth_col_pos]
if (length(depths) > 0) {
has_meters <- grep("m", depths)
is_meters_first <- stringr::str_locate(depths[has_meters], "m")[1] <
max(stringr::str_locate(depths[has_meters], "ft")[1],
stringr::str_locate(depths[has_meters], "feet")[1],
na.rm = TRUE)
if (is_meters_first) {
depths[has_meters] <- stringr::str_extract(depths[has_meters],
"(?<=).*\\sm")
} else {
depths[has_meters] <- stringr::str_extract(depths[has_meters],
"(?<=\\().*\\sm")
}
# depths[has_meters] <- sapply(depths[has_meters], function(x)
# substring(x, 1, nchar(x) - 2))
# missing_meters <- which(!(1:length(depths) %in% has_meters))
res[, depth_col_pos] <- depths
}
res
}
drop_trailing_line_break <- function(x) {
# x <- "asdf\nlp\noi"
first_break <- stringr::str_locate(pattern = "\n", x)[1]
if (substring(x, first_break - 1, first_break - 1) == ",") {
first_break <- first_break - 1
}
substring(x, 1, (first_break - 1))
}
rm_line_breaks <- function(res) {
bad_cols <- as.logical(apply(res, 2, function(x) length(grep("\n", x) > 0)))
res[, bad_cols] <- sapply(res[, bad_cols], drop_trailing_line_break)
res
}
# 0 = no choice, 1 = first choice, 2 = second choice
# pull_units(res$`Surface area`, 2)
# pull_units(res$`Water volume`, 0)
# pull_units(res$`Water volume`, 1)
# pull_units(res$`Water volume`, 2)
# pull_units(res$`Average depth`, 0)
# pull_units(res$`Max. length`, 0)
# pull_units(res$`Residence time`, 0)
# pull_units(res$`Residence time`, 1)
# pull_units(res$`Residence time`, 2)
pull_units <- function(x, position) {
x <- gsub("\\[\\d+\\]", "", x) # remove reference designations
if (length(grep("\\d", x)) == 0) {
position <- 3 # non-numeric result
}
if (nchar(x) > 0) {
x <- stringr::str_replace_all(x, "^[^\\d]+", "") # remove preappended qualifier text
}
if (position == 0) {
paren_pos <- stringr::str_locate_all(x, "\\(")[[1]][, 1]
if (length(paren_pos) == 0) paren_pos <- nchar(x) + 2
space_pos <- stringr::str_locate_all(x, " ")[[1]][, 1]
x <- substring(x, space_pos[1] + 1, paren_pos[1] - 2)
}
if (position == 1) {
paren_pos <- stringr::str_locate_all(x, "\\(")[[1]][, 1]
space_pos <- stringr::str_locate_all(x, " ")[[1]][, 1]
x <- substring(x, space_pos[1] + 1, paren_pos[1] - 2)
}
if (position == 2) {
space_pos <- stringr::str_locate_all(x, " ")[[1]][, 1]
x <- substring(x, space_pos[length(space_pos)] + 1, nchar(x) - 1)
}
x
}
# 0 = no choice, 1 = first choice, 2 = second choice
# pull_position(res$`Surface area`, 2)
# pull_position(res$`Water volume`, 0)
# pull_position(res$`Water volume`, 1)
# pull_position(res$`Water volume`, 2)
pull_position <- function(x, position) {
x <- gsub("\\[\\d+\\]", "", x) # remove reference designations
x <- stringr::str_replace_all(x, "^[^\\d]+", "") # remove preappended qualifier text
if (position == 0) {
paren_pos <- stringr::str_locate_all(x, "\\(")[[1]][, 1]
if (length(paren_pos) == 0) paren_pos <- nchar(x) + 2
space_pos <- stringr::str_locate_all(x, " ")[[1]][, 1]
x <- substring(x, 1, paren_pos[1] - 2)
}
if (position == 1) {
paren_pos <- stringr::str_locate_all(x, "\\(")[[1]][, 1]
space_pos <- stringr::str_locate_all(x, " ")[[1]][, 1]
x <- substring(x, 1, paren_pos[1] - 2)
}
if (position == 2) {
space_pos <- stringr::str_locate_all(x, " ")[[1]][, 1]
x <- substring(x, space_pos[length(space_pos) - 1] + 2, nchar(x) - 1)
}
x
}
#' Parse string representation of units package quantities
#'
#' @param x character string with unit in brackets
#' @param target_unit target unit to convert to. optional
#'
#' @export
#' @importFrom units as_units
#' @examples
#' x <- "1 [m]"
#' x <- "8.5 [m]"
#' parse_unit_brackets(x, "feet")
parse_unit_brackets <- function(x, target_unit = NA) {
if (is.na(x)) {
return(NA)
}
num_string <- strsplit(x, " ")[[1]][1]
units_string <- strsplit(x, " ")[[1]][2:length(strsplit(x, " ")[[1]])]
units_string <- gsub("\\[", "", units_string)
units_string <- gsub("\\]", "", units_string)
res <- tryCatch(units::as_units(as.numeric(num_string), units_string),
warning = function(w) return(NA),
error = function(e) return(NA))
if (!is.na(target_unit)) {
res <- tryCatch(units::set_units(res, target_unit, mode = "standard"),
error = function(e) NA)
}
res
}
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/utils.R
|
#' Scrape Wikipedia lakes metadata
#' @name wikilake-package
#' @aliases wikilake
#' @importFrom stringi stri_encode stri_trans_general
#' @importFrom utils read.csv
#' @importFrom units set_units as_units
#' @import selectr
#' @docType package
#' @title Scrape Wikipedia lakes metadata
#' @author \email{[email protected]}
NULL
#' Michigan Lakes
#'
#' Metadata of Michigan lakes scraped from Wikipedia.
#'
#' @format A data frame with 48 columns and 177 rows:
#' \itemize{
#' \item{Name: }{lake name}
#' \item{Location: }{location description}
#' \item{Primary inflows: }{rivers and streams}
#' \item{Basin countries: }{countries}
#' \item{Surface area: }{hectares}
#' \item{Max. depth: }{meters}
#' \item{Surface elevation: }{meters}
#' \item{Lat: }{decimal degrees}
#' \item{Lon: }{decimal degrees}
#' \item{Primary outflows: }{rivers and streams}
#' \item{Average depth: }{meters}
#' \item{Max. length: }{meters}
#' \item{Max. width: }{meters}
#' }
#' @docType data
#' @keywords datasets
#' @name milakes
NULL
|
/scratch/gouwar.j/cran-all/cranData/wikilake/R/wikilake-package.R
|
## -----------------------------------------------------------------------------
library(wikilake)
## ----category url, eval = FALSE-----------------------------------------------
# res <- WikipediR::page_info("en", "wikipedia",
# page = "Category:Lakes of Michigan")
## ----scrape names, eval = FALSE-----------------------------------------------
# res <- xml2::read_html(res$query$pages[[1]]$canonicalurl)
# res <- rvest::html_nodes(res, "#mw-pages .mw-category")
# res <- rvest::html_nodes(res, "li")
# res <- rvest::html_nodes(res, "a")
# res <- rvest::html_attr(res, "title")
## ----remove junk names, eval = FALSE------------------------------------------
# res <- res[!(seq_len(length(res)) %in% grep("List", res))]
# res <- res[!(seq_len(length(res)) %in% grep("Watershed", res))]
# res <- res[!(seq_len(length(res)) %in% grep("lakes", res))]
# res <- res[!(seq_len(length(res)) %in% grep("Mud Lake", res))]
## ----scrape tables, eval = FALSE----------------------------------------------
# res <- lapply(res, lake_wiki)
#
# # remove missing coordinates
# res <- res[unlist(lapply(res, function(x) !is.na(x[, "Lat"])))]
## ----collapse list to data.frame, eval = FALSE--------------------------------
# res_df_names <- unique(unlist(lapply(res, names)))
# res_df <- data.frame(matrix(NA, nrow = length(res),
# ncol = length(res_df_names)))
# names(res_df) <- res_df_names
# for (i in seq_len(length(res))) {
# dt_pad <- data.frame(matrix(NA, nrow = 1,
# ncol = length(res_df_names) - ncol(res[[i]])))
# names(dt_pad) <- res_df_names[!(res_df_names %in% names(res[[i]]))]
# dt <- cbind(res[[i]], dt_pad)
# dt <- dt[, res_df_names]
# res_df[i, ] <- dt
# }
## ----echo=FALSE, eval=FALSE---------------------------------------------------
# good_cols <- data.frame(as.numeric(as.character(apply(res_df,
# 2, function(x) sum(!is.na(x))))))
# good_cols <- cbind(good_cols, names(res_df))
# good_cols <- good_cols[good_cols[, 1] > 20, 2]
# good_cols <- as.character(good_cols)
#
# res_df <- res_df[, good_cols]
## ----echo = FALSE-------------------------------------------------------------
data(milakes)
res_df <- milakes
## ----map lakes, fig.height=6,fig.align="center"-------------------------------
library(sp)
library(maps)
coordinates(res_df) <- ~ Lon + Lat
map("state", region = "michigan", mar = c(0, 0, 0, 0))
points(res_df, col = "red", pch = 19)
## ----lake depth distribution--------------------------------------------------
hist(log(res_df$`Max. depth`), main = "", xlab = "Max depth (log(m))")
|
/scratch/gouwar.j/cran-all/cranData/wikilake/inst/doc/scrape_michigan_lakes.R
|
---
title: "Scrape Michigan Lakes"
author: "Jemma Stachelek"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Scrape Michigan Lakes}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r }
library(wikilake)
```
## Generate list of Michigan Lakes
### Get Wikipedia URL of Category
```{r category url, eval = FALSE}
res <- WikipediR::page_info("en", "wikipedia",
page = "Category:Lakes of Michigan")
```
### Scrape lake names
```{r scrape names, eval = FALSE}
res <- xml2::read_html(res$query$pages[[1]]$canonicalurl)
res <- rvest::html_nodes(res, "#mw-pages .mw-category")
res <- rvest::html_nodes(res, "li")
res <- rvest::html_nodes(res, "a")
res <- rvest::html_attr(res, "title")
```
### Remove junk names
```{r remove junk names, eval = FALSE}
res <- res[!(seq_len(length(res)) %in% grep("List", res))]
res <- res[!(seq_len(length(res)) %in% grep("Watershed", res))]
res <- res[!(seq_len(length(res)) %in% grep("lakes", res))]
res <- res[!(seq_len(length(res)) %in% grep("Mud Lake", res))]
```
### Scrape tables
```{r scrape tables, eval = FALSE}
res <- lapply(res, lake_wiki)
# remove missing coordinates
res <- res[unlist(lapply(res, function(x) !is.na(x[, "Lat"])))]
```
### Collapse list to `data.frame`
```{r collapse list to data.frame, eval = FALSE}
res_df_names <- unique(unlist(lapply(res, names)))
res_df <- data.frame(matrix(NA, nrow = length(res),
ncol = length(res_df_names)))
names(res_df) <- res_df_names
for (i in seq_len(length(res))) {
dt_pad <- data.frame(matrix(NA, nrow = 1,
ncol = length(res_df_names) - ncol(res[[i]])))
names(dt_pad) <- res_df_names[!(res_df_names %in% names(res[[i]]))]
dt <- cbind(res[[i]], dt_pad)
dt <- dt[, res_df_names]
res_df[i, ] <- dt
}
```
```{r echo=FALSE, eval=FALSE}
good_cols <- data.frame(as.numeric(as.character(apply(res_df,
2, function(x) sum(!is.na(x))))))
good_cols <- cbind(good_cols, names(res_df))
good_cols <- good_cols[good_cols[, 1] > 20, 2]
good_cols <- as.character(good_cols)
res_df <- res_df[, good_cols]
```
```{r echo = FALSE}
data(milakes)
res_df <- milakes
```
### Map lakes
```{r map lakes, fig.height=6,fig.align="center"}
library(sp)
library(maps)
coordinates(res_df) <- ~ Lon + Lat
map("state", region = "michigan", mar = c(0, 0, 0, 0))
points(res_df, col = "red", pch = 19)
```
```{r lake depth distribution }
hist(log(res_df$`Max. depth`), main = "", xlab = "Max depth (log(m))")
```
|
/scratch/gouwar.j/cran-all/cranData/wikilake/inst/doc/scrape_michigan_lakes.Rmd
|
---
title: "Scrape Michigan Lakes"
author: "Jemma Stachelek"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Scrape Michigan Lakes}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r }
library(wikilake)
```
## Generate list of Michigan Lakes
### Get Wikipedia URL of Category
```{r category url, eval = FALSE}
res <- WikipediR::page_info("en", "wikipedia",
page = "Category:Lakes of Michigan")
```
### Scrape lake names
```{r scrape names, eval = FALSE}
res <- xml2::read_html(res$query$pages[[1]]$canonicalurl)
res <- rvest::html_nodes(res, "#mw-pages .mw-category")
res <- rvest::html_nodes(res, "li")
res <- rvest::html_nodes(res, "a")
res <- rvest::html_attr(res, "title")
```
### Remove junk names
```{r remove junk names, eval = FALSE}
res <- res[!(seq_len(length(res)) %in% grep("List", res))]
res <- res[!(seq_len(length(res)) %in% grep("Watershed", res))]
res <- res[!(seq_len(length(res)) %in% grep("lakes", res))]
res <- res[!(seq_len(length(res)) %in% grep("Mud Lake", res))]
```
### Scrape tables
```{r scrape tables, eval = FALSE}
res <- lapply(res, lake_wiki)
# remove missing coordinates
res <- res[unlist(lapply(res, function(x) !is.na(x[, "Lat"])))]
```
### Collapse list to `data.frame`
```{r collapse list to data.frame, eval = FALSE}
res_df_names <- unique(unlist(lapply(res, names)))
res_df <- data.frame(matrix(NA, nrow = length(res),
ncol = length(res_df_names)))
names(res_df) <- res_df_names
for (i in seq_len(length(res))) {
dt_pad <- data.frame(matrix(NA, nrow = 1,
ncol = length(res_df_names) - ncol(res[[i]])))
names(dt_pad) <- res_df_names[!(res_df_names %in% names(res[[i]]))]
dt <- cbind(res[[i]], dt_pad)
dt <- dt[, res_df_names]
res_df[i, ] <- dt
}
```
```{r echo=FALSE, eval=FALSE}
good_cols <- data.frame(as.numeric(as.character(apply(res_df,
2, function(x) sum(!is.na(x))))))
good_cols <- cbind(good_cols, names(res_df))
good_cols <- good_cols[good_cols[, 1] > 20, 2]
good_cols <- as.character(good_cols)
res_df <- res_df[, good_cols]
```
```{r echo = FALSE}
data(milakes)
res_df <- milakes
```
### Map lakes
```{r map lakes, fig.height=6,fig.align="center"}
library(sp)
library(maps)
coordinates(res_df) <- ~ Lon + Lat
map("state", region = "michigan", mar = c(0, 0, 0, 0))
points(res_df, col = "red", pch = 19)
```
```{r lake depth distribution }
hist(log(res_df$`Max. depth`), main = "", xlab = "Max depth (log(m))")
```
|
/scratch/gouwar.j/cran-all/cranData/wikilake/vignettes/scrape_michigan_lakes.Rmd
|
##' @method print wpplot
##' @import utils
##' @export
print.wpplot <- function(x, ...) {
#if (Sys.getenv("TERM_PROGRAM") == "vscode") {
# p <- ggplotify::as.ggplot(x)
# print(p)
#} else {
# browseURL(svg2tempfile(x$svg))
#}
print(ggplotify::as.ggplot(x))
}
##' @importFrom ggplotify as.grob
##' @method as.grob wpplot
##' @importFrom grid rasterGrob
##' @export
as.grob.wpplot <- function(plot, ...) {
f <- svg2tempfile(plot$svg)
p <- rsvg::rsvg_nativeraster(f)
rasterGrob(p)
}
##' @method grid.draw wpplot
##' @importFrom grid grid.draw
##' @export
grid.draw.wpplot <- function(x, recording = TRUE) {
grid::grid.draw(as.grob.wpplot(x), recording = recording)
}
|
/scratch/gouwar.j/cran-all/cranData/wikiprofiler/R/methods.r
|
##' parse wikipathway gmt file to a gson object
##'
##'
##' @title read.wp
##' @rdname read-wp
##' @param file wikipathway gmt file
##' downloaded from 'https://wikipathways-data.wmcloud.org/current/gmt/'
##' @importFrom gson read.gmt.wp
##' @export
##' @return a 'gson' object
##' @author Guangchuang Yu
read.wp <- function(file) {
gson::read.gmt.wp(file, output = "gson")
}
|
/scratch/gouwar.j/cran-all/cranData/wikiprofiler/R/read-wp.R
|
svg2tempfile <- function(svg) {
f <- tempfile(fileext = ".svg")
cat(svg, file = f)
return(f)
}
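# colorb(): map expression values to a diverging low-white-high palette, split at the
# zero (or mid) scale line returned by find_zero_scale(); colours are returned in
# ascending order of the expression values.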
#' @import grDevices
colorb <- function(Expression, low = "blue", high = "red") {
zero_scale_line <- find_zero_scale(Expression)
textele <- pretty(Expression, 4)
textele_low <- textele[which(textele <= zero_scale_line)]
textele_high <- textele[which(textele > zero_scale_line)]
Expression_low <- Expression[which(Expression <= zero_scale_line)]
Expression_high <- Expression[which(Expression > zero_scale_line)]
scaleExpr_low <- (Expression_low - min(textele_low)) / (zero_scale_line - min(textele_low))
scaleExpr_high <- (Expression_high - zero_scale_line) / (max(textele_high) - zero_scale_line)
scaleExpr_low <- round(scaleExpr_low, 2) * 1000 + 1 # 1-1001
scaleExpr_high <- round(scaleExpr_high, 2) * 1000 + 1 # 1-1001
colorB2R_low <- colorRampPalette(colors = c(low, "white"))
colorB2R_high <- colorRampPalette(colors = c("white", high))
c(colorB2R_low(1001)[sort(scaleExpr_low)], colorB2R_high(1001)[sort(scaleExpr_high)])
}
find_zero_scale <- function(value){
zero_scale_line <- 0
if(all(pretty(value, 4) > 0) || all(pretty(value, 4) < 0)){
zero_scale_line <- pretty(value, 4)[round(length(pretty(value, 4)) / 2)]
}
return(zero_scale_line)
}
legend_generator <- function(value, low = "blue", high = "red") {
temp <- pretty(value, 4)
seq1 <- ceiling(seq(from = 1, to = 1001, length.out = length(temp[which(temp <= 0)])))
seq2 <- ceiling(seq(from = 1, to = 1001, length.out = length(temp[which(temp >= 0)])))
c(colorRampPalette(colors = c(low, "white"))(1001)[seq1], colorRampPalette(colors = c("white", high))(1001)[seq2[-1]])
}
svg_halos <- function(svg, pos, gene) {
svg[pos - 1] <- paste(
sub(
"fill:black; stroke:none;",
"\" class=\"halo",
svg[pos - 1]
),
sub(
"/>",
paste(">", gene, "</text>", sep = ""),
svg[pos - 1]
)
)
return(svg)
}
svg_halos2 <- function(svg, positions, gene) {
for (pos in positions) {
svg <- svg_halos(svg, pos, gene)
}
return(svg)
}
replace_bg <- function(svg, position, color) {
j <- rev(grep("<g", svg[1:position]))[1]
replace <- sub(
"fill:.+;.+", paste("fill:", color,
"; text-rendering:geometricPrecision; stroke:white;\"",
sep = ""
),
svg[j]
)
svg[j] <- replace
return(svg)
}
replace_bg2 <- function(svg, positions, color) {
if (is.null(positions) || all(is.na(positions)) || length(positions) == 0) {
return(svg)
}
for (position in positions) {
if (is.na(position)) next
svg <- replace_bg(svg, position, color)
}
return(svg)
}
|
/scratch/gouwar.j/cran-all/cranData/wikiprofiler/R/utilities.r
|
#' @title Input specific wikipathways ID to get an output in class of wpplot.
#' @description Use wikipathways ID to open a local svg file. Then extract related information from svg file and build a wpplot class variance.
#' @param ID ID is wikipathways' ID.
#' @return A 'wpplot' object
#' @export
#' @examples
#' wpplot('WP63_117935')
wpplot <- function(ID) {
url <- paste0('https://www.wikipathways.org//wpi/wpi.php?action=',
'downloadFile&type=svg&pwTitle=Pathway:',
ID)
svg <- yulab.utils::yread(url)
if (!any(grepl('<svg', svg[1:10]))) {
stop("fail to read online wiki pathway file")
}
structure(list(
ID = ID,
svg = svg,
geneExpr = NULL
), class = "wpplot")
}
#' @title Fill the background of gene with color according to amount of gene expression.
#' @description Generate a color array.Fill the gene then generate the legend.
#' @param p p is
#' @param value value is the amount of expression.
#' @param low The color of lowest gene.
#' @param high The color of highest gene.
#' @param legend Whether you need legend.
#' @param legend_x horizontal position of the legend
#' @param legend_y vertical position of the legend
#' @return A 'wpplot' object
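#' @examples
#' \dontrun{
#' # Illustrative sketch (requires network access); the expression values and gene
#' # symbols below are hypothetical and must match SYMBOLs present in the pathway.
#' p <- wpplot('WP63_117935')
#' expr <- c(ALDOB = 1.5, PKM = -0.7, ENO1 = 0.3)
#' wp_bgfill(p, expr, high = "red", low = "blue")
#' }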
# @import org.Hs.eg.db
# @import BiocGenerics
#' @export
wp_bgfill <- function(p, value, high="red", low="blue", legend = TRUE, legend_x = 0.001, legend_y = 0.94) {
if(legend_x < 0 || legend_x > 1 || legend_y < 0 || legend_y > 1){
message('Parameters legend_x and legend_y must be numbers between 0 to 1!')
}
SYMBOLS <- sub('\\s+', '', sub('>', '', sub('</text', '', p$svg[grep('</text', p$svg)])))
SYMBOLS <- SYMBOLS[is.na(suppressWarnings(as.numeric(SYMBOLS)))]
if(!any(names(value) %in% SYMBOLS)){
message("Please make sure the input gene ID type is 'SYMBOL'")
return(p)
}
value <- value[names(value) %in% SYMBOLS]
mini <- min(value) %/% 10 * 10
maxi <- ceiling(max(value)/10) * 10
colornum <- (maxi-mini) / 10
colorbar <- colorb(value, low, high)
color <- colorbar[order(value)]
legendcolor <- legend_generator(value, low, high)
genes <- names(value)
for (i in seq_along(genes)) {
pos <- grep(genes[i], p$svg)
p$svg <- replace_bg2(p$svg, pos, color[i])
}
svg_width <- as.numeric(strsplit(strsplit(p$svg[4], 'width=\"')[[1]][2], '\" height=\"')[[1]][1])
svg_height <- as.numeric(strsplit(strsplit(strsplit(p$svg[4], 'width=\"')[[1]][2], '\" height=\"')[[1]][2], '\"')[[1]][1])
incrementX <- svg_width * legend_x
incrementY <- svg_height * (1 - legend_y)
if(incrementX > svg_width - 48)
incrementX <- svg_width - 48
if(incrementY > svg_height - 122){
incrementY <- svg_height - 122
}else if(incrementY < 3)
incrementY <- 3
textele <- rev(pretty(value, 4))
legendX <- 0 + incrementX
legendY <- 0 + incrementY
textX <- 40 + incrementX
textY <- seq(from = 5,to = 120,length.out = length(textele)) + incrementY
scalelineX <- 27 + incrementX
scalelineY <- seq(from = 2,to = 118,length.out = length(textele)) +incrementY
if(legend){
zero_scale_line <- find_zero_scale(value)
proportion <- seq(from = 2,to = 118,length.out = length(textele)) / 120
proportion <- proportion[length(which(pretty(value, 4) >= zero_scale_line))]
if(max(pretty(value, 4)) == 0){
proportion <- '0%'
}
if(min(pretty(value, 4)) == 0){
proportion <- '100%'
}
temp<-grep("</svg",p$svg)
p$svg[temp]<-sub("</svg",paste("<defs><linearGradient id=\"grad1\" x1=\"0%\" y1=\"0%\" x2=\"0%\" y2=\"100%\"><stop offset=\"0%\" style=\"stop-color:",high,";stop-opacity:1\"></stop><stop offset=\"",proportion,"\" style=\"stop-color:","white",";stop-opacity:1\"></stop><stop offset=\"100%\" style=\"stop-color:",low,";stop-opacity:1\"></stop></linearGradient></defs><rect x=\"",legendX,"\" y=\"",legendY,"\" width =\"30\" height=\"120\" style=\"fill:url(#grad1 );stroke-width:0;stroke:black\"></rect></svg",sep = ""),p$svg[temp])
for (i in 1:length(pretty(value, 4))){
temp<-grep("</svg",p$svg)
p$svg[temp]<-sub("</svg",paste("<text x=\"",textX,"\" y=\"",textY[i],"\" style=\"font-size:10; fill:black; stroke:none\">",textele[i],"</text></svg",sep = ""),p$svg[temp])
}
for (i in 1:length(pretty(value, 4))){
temp<-grep("</svg",p$svg)
p$svg[temp]<-sub("</svg",paste("<rect width=\"3\" height=\"1\" x=\"",scalelineX,"\" y=\"",scalelineY[i],"\" style=\"fill:white; stroke:none\"></rect></svg",sep = ""),p$svg[temp])
}
}
p$geneExpr <- value
return(p)
}
#' @title Add a halo around gene names to get a clearer view.
#' @description Use the svg_halos function to add a halo behind each gene label.
#' @param p A 'wpplot' object.
#' @param bg.r The width of the halo.
#' @param bg.col The color of the halo.
#' @return A 'wpplot' object
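#' @examples
#' \dontrun{
#' ## A minimal sketch continuing from wp_bgfill(); the pathway ID and
#' ## expression values below are illustrative assumptions.
#' p <- wpplot('WP63_117935')
#' p <- wp_bgfill(p, c(ALDOA = 2.1, PKM = -1.3))
#' p <- wp_shadowtext(p, bg.r = 2, bg.col = "white")
#' }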
#' @export
wp_shadowtext <- function(p, bg.r = 2, bg.col = "white") {
if (is.null(p$geneExpr)) return(p)
genes <- names(p$geneExpr)
for (i in seq_along(genes)) {
pos <- grep(genes[i], p$svg)
p$svg <- svg_halos(p$svg, pos, genes[i])
}
i <- grep("><!--Generated by the Batik Graphics2D SVG Generator-->", p$svg)
p$svg[i] <- paste0(
"><!--Generated by the Batik Graphics2D SVG Generator-->",
"<style>.halo{fill:", bg.col,
";stroke:", bg.col, "; stroke-width:", bg.r,
";stroke-linejoin:round; vector-effect:non-scaling-stroke;",
"}</style><defs id=\"genericDefs\""
)
return(p)
}
#' @title Save the 'wpplot' object to a file.
#' @param p A 'wpplot' object
#' @param file the file to save the object
#' @param width Width of the figure
#' @param height Height of the figure
#' @param ... additional parameters passed to 'ggsave'
#' @return output the file and the input 'wpplot' object (invisible)
#' @import rsvg
#' @importFrom ggplot2 ggsave
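#' @examples
#' \dontrun{
#' ## A minimal sketch; the output file name and figure size are illustrative.
#' p <- wpplot('WP63_117935')
#' wpsave(p, "WP63.png", width = 8, height = 6)
#' }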
#' @export
wpsave <- function(p, file, width=NULL, height=NULL, ...) {
# fileext <- sub(".*(\\..+)", "\\1", file)
# f <- svg2tempfile(p$svg)
# if (fileext == '.svg') {
# rsvg::rsvg_svg(f, file = file, width = width, height = height)
# } else if (fileext == '.pdf') {
# rsvg::rsvg_pdf(f, file = file, width = width, height = height)
# } else if (fileext == '.png') {
# rsvg::rsvg_png(f, file = file, width = width, height = height)
# } else {
# stop("file type not supported")
# }
g <- ggplotify::as.ggplot(p)
ggplot2::ggsave(plot = g,
filename = file,
width = width,
height = height,
...)
invisible(p)
}
##' @importFrom ggplot2 ggsave
##' @export
ggplot2::ggsave
|
/scratch/gouwar.j/cran-all/cranData/wikiprofiler/R/wpplot.R
|
if (getRversion() >= "2.15.1") {
utils::globalVariables(c('wikipedias'))
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/globals.R
|
#' Wikidata taxonomy data
#'
#' @export
#' @param x (character) a taxonomic name
#' @param property (character) a property id, e.g., P486
#' @param ... curl options passed on to `httr::GET()`
#' @param language (character) two letter language code
#' @param limit (integer) records to return. Default: 10
#' @return `wt_data` searches Wikidata, and returns a list with elements:
#' \itemize{
#' \item labels - data.frame with columns: language, value
#' \item descriptions - data.frame with columns: language, value
#' \item aliases - data.frame with columns: language, value
#' \item sitelinks - data.frame with columns: site, title
#' \item claims - data.frame with columns: claims, property_value,
#'   property_description, value (comma separated values in string)
#' }
#'
#' `wt_data_id` gets the Wikidata ID for the searched term, and
#' returns the ID as character
#'
#' @details Note that `wt_data` can take a while to run since it has to fetch
#' claims one at a time
#'
#' You can search things other than taxonomic names with `wt_data` if you
#' like
#' @examples \dontrun{
#' # search by taxon name
#' # wt_data("Mimulus alsinoides")
#'
#' # choose which properties to return
#' wt_data(x="Mimulus foliatus", property = c("P846", "P815"))
#'
#' # get a taxonomic identifier
#' wt_data_id("Mimulus foliatus")
#' # the id can be passed directly to wt_data()
#' # wt_data(wt_data_id("Mimulus foliatus"))
#' }
wt_data <- function(x, property = NULL, ...) {
UseMethod("wt_data")
}
#' @export
wt_data.wiki_id <- function(x, property = NULL, ...) {
data_wiki(x, property = property, ...)
}
#' @export
wt_data.default <- function(x, property = NULL, ...) {
x <- WikidataR::find_item(search_term = x, ...)
if (length(x) == 0) stop("no results found", call. = FALSE)
data_wiki(x[[1]]$id, property = property, ...)
}
#' @export
#' @rdname wt_data
wt_data_id <- function(x, language = "en", limit = 10, ...) {
x <- WikidataR::find_item(search_term = x, language = language,
limit = limit, ...)
x <- if (length(x) == 0) NA else x[[1]]$id
structure(x, class = "wiki_id")
}
data_wiki <- function(x, property = NULL, ...) {
xx <- WikidataR::get_item(x, ...)
if (is.null(property)) {
claims <- create_claims(xx[[1]]$claims)
} else{
cl <- Filter(function(x) x$mainsnak$property %in% property, xx[[1]]$claims)
if (length(cl) == 0) stop("No matching properties", call. = FALSE)
claims <- create_claims(cl)
}
list(
labels = dt_df(xx[[1]]$labels),
descriptions = dt_df(xx[[1]]$descriptions),
aliases = dt_df(xx[[1]]$aliases),
sitelinks = dt_df(lapply(xx[[1]]$sitelinks, function(x)
x[names(x) %in% c('site', 'title')])),
claims = dt_df(claims)
)
}
fetch_property <- function(x) {
tmp <- WikidataR::get_property(x)
list(
property_value = tmp[[1]]$labels$en$value,
property_description = tmp[[1]]$descriptions$en$value
)
}
create_claims <- function(x) {
lapply(x, function(z) {
ff <- c(
property = paste0(unique(z$mainsnak$property), collapse = ","),
fetch_property(unique(z$mainsnak$property)),
value = {
if (inherits(z$mainsnak$datavalue$value, "data.frame")) {
paste0(z$mainsnak$datavalue$value$`numeric-id`, collapse = ",")
} else {
paste0(z$mainsnak$datavalue$value, collapse = ",")
}
}
)
ff[vapply(ff, is.null, logical(1))] <- NA
ff
})
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/wiki.R
|
#' WikiCommons
#'
#' @export
#' @template args
#' @family Wikicommons functions
#' @return `wt_wikicommons` returns a list, with slots:
#' \itemize{
#' \item langlinks - language page links
#' \item externallinks - external links
#' \item common_names - a data.frame with `name` and `language` columns
#' \item classification - a data.frame with `rank` and `name` columns
#' }
#'
#' `wt_wikicommons_parse` returns a list
#'
#' `wt_wikicommons_search` returns a list with slots for `continue` and
#' `query`, where `query` holds the results, with `query$search` slot with
#' the search results
#' @references <https://www.mediawiki.org/wiki/API:Search> for help on search
#' @examples \dontrun{
#' # high level
#' wt_wikicommons(name = "Malus domestica")
#' wt_wikicommons(name = "Pinus contorta")
#' wt_wikicommons(name = "Ursus americanus")
#' wt_wikicommons(name = "Balaenoptera musculus")
#'
#' wt_wikicommons(name = "Category:Poeae")
#' wt_wikicommons(name = "Category:Pinaceae")
#'
#' # low level
#' pg <- wt_wiki_page("https://commons.wikimedia.org/wiki/Malus_domestica")
#' wt_wikicommons_parse(pg)
#'
#' # search wikicommons
#' # FIXME: utf=FALSE for now until curl::curl_escape fix
#' # https://github.com/jeroen/curl/issues/228
#' wt_wikicommons_search(query = "Pinus", utf8 = FALSE)
#'
#' ## use search results to dig into pages
#' res <- wt_wikicommons_search(query = "Pinus", utf8 = FALSE)
#' lapply(res$query$search$title[1:3], wt_wikicommons)
#' }
wt_wikicommons <- function(name, utf8 = TRUE, ...) {
assert(name, "character")
stopifnot(length(name) == 1)
prop <- c("langlinks", "externallinks", "common_names", "classification")
res <- wt_wiki_url_build(
wiki = "commons", type = "wikimedia", page = name,
utf8 = utf8,
prop = prop)
pg <- wt_wiki_page(res, ...)
wt_wikicommons_parse(pg, prop, tidy = TRUE)
}
#' @export
#' @rdname wt_wikicommons
wt_wikicommons_parse <- function(page, types = c("langlinks", "iwlinks",
"externallinks", "common_names",
"classification"),
tidy = FALSE) {
result <- wt_wiki_page_parse(page, types = types, tidy = tidy)
json <- jsonlite::fromJSON(rawToChar(page$content), simplifyVector = FALSE)
# if output is NULL
if (is.null(json$parse)) {
return(result)
}
# if page not found
txt <- xml2::read_html(json$parse$text[[1]])
html <- tryCatch(
xml2::xml_find_all(txt,
"//div[contains(., \"Domain\") or contains(., \"Phylum\")]")[[2]],
error = function(e) e)
if (inherits(html, "error")) return(list())
## Common names
if ("common_names" %in% types) {
vernacular_html <- xml2::xml_find_all(txt,
xpath = "//bdi[@class='vernacular']")
# XML formats:
# <bdi class="vernacular" lang="en"><a href="">name</a></bdi>
# <bdi class="vernacular" lang="en">name</bdi>
## Name formats:
# name1 / name2
# name1, name2
# name (category)
cnms <- lapply(vernacular_html, function(x) {
attributes <- xml2::xml_attrs(x)
language <- attributes[["lang"]]
name <- trimws(gsub("[ ]*\\(.*\\)", "", xml2::xml_text(x)))
list(
name = name,
language = language
)
})
result$common_names <- if (tidy) atbl(dt_df(cnms)) else cnms
}
## classification
if ("classification" %in% types) {
html <- tryCatch(
xml2::xml_find_all(txt,
"//div[contains(., \"Domain\") or contains(., \"Phylum\")]")[[2]],
error = function(e) e)
# labels
labels <- c(gsub(":", "", strex(xml2::xml_text(html), "[A-Za-z]+\\)?:")[[1]]), "Authority")
labels <- gsub("\\(|\\)", "", labels)
labels <- labels[-1]
# values
values <- xml2::xml_text(
xml2::xml_find_all(if (inherits(html, "xml_nodes")) html[[2]] else html, ".//b"))
values <- gsub("^:\\s+|^.+:\\s?", "", values)
values <- values[-1]
clz <- mapply(list, rank = labels, name = values,
SIMPLIFY = FALSE, USE.NAMES = FALSE)
result$classification <- if (tidy) atbl(dt_df(clz)) else clz
}
return(result)
}
#' @export
#' @rdname wt_wikicommons
wt_wikicommons_search <- function(query, limit = 10, offset = 0, utf8 = TRUE,
...) {
tmp <- g_et(search_base("commons"), sh(query, limit, offset, utf8), ...)
tmp$query$search <- atbl(tmp$query$search)
return(tmp)
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/wikicommons.R
|
# MediaWiki (general) ----------------
#' Parse MediaWiki Page URL
#'
#' Parse a MediaWiki page url into its component parts (wiki name, wiki type,
#' and page title). Supports both static page urls and their equivalent API
#' calls.
#'
#' @export
#' @param url (character) MediaWiki page url.
#' @family MediaWiki functions
#' @return a list with elements:
#' \itemize{
#' \item wiki - wiki language
#' \item type - wikipedia type
#' \item page - page name
#' }
#' @examples
#' wt_wiki_url_parse(url="https://en.wikipedia.org/wiki/Malus_domestica")
#' wt_wiki_url_parse("https://en.wikipedia.org/w/api.php?page=Malus_domestica")
wt_wiki_url_parse <- function(url) {
url <- curl::curl_unescape(url)
if (grepl("/w/api.php?", url)) {
matches <-
match_(
url, "//([^\\.]+).([^\\.]+).[^/]*/w/api\\.php\\?.*page=([^&]+).*$")
} else {
matches <- match_(url, "//([^\\.]+).([^\\.]+).[^/]*/wiki/([^\\?]+)")
}
return(list(
wiki = matches[2],
type = matches[3],
page = matches[4]
))
}
#' Build MediaWiki Page URL
#'
#' Builds a MediaWiki page url from its component parts (wiki name, wiki type,
#' and page title). Supports both static page urls and their equivalent API
#' calls.
#'
#' @export
#' @param wiki (character | list) Either the wiki name or a list with
#' `$wiki`, `$type`, and `$page` (the output of [wt_wiki_url_parse()]).
#' @param type (character) Wiki type.
#' @param page (character) Wiki page title.
#' @param api (boolean) Whether to return an API call or a static page url
#' (default). If `FALSE`, all following (API-only) arguments are ignored.
#' @param action (character) See <https://en.wikipedia.org/w/api.php>
#' for supported actions. This function currently only supports "parse".
#' @param redirects (boolean) If the requested page is set to a redirect,
#' resolve it.
#' @param format (character) See <https://en.wikipedia.org/w/api.php>
#' for supported output formats.
#' @param utf8 (boolean) If `TRUE`, encodes most (but not all) non-ASCII
#' characters as UTF-8 instead of replacing them with hexadecimal escape
#' sequences.
#' @param prop (character) Properties to retrieve, either as a character vector
#' or pipe-delimited string. See
#' <https://en.wikipedia.org/w/api.php?action=help&modules=parse> for
#' supported properties.
#' @family MediaWiki functions
#' @return a URL (character)
#' @examples
#' wt_wiki_url_build(wiki = "en", type = "wikipedia", page = "Malus domestica")
#' wt_wiki_url_build(
#' wt_wiki_url_parse("https://en.wikipedia.org/wiki/Malus_domestica"))
#' wt_wiki_url_build("en", "wikipedia", "Malus domestica", api = TRUE)
wt_wiki_url_build <- function(wiki, type = NULL, page = NULL, api = FALSE,
action = "parse", redirects = TRUE, format = "json",
utf8 = TRUE,
prop = c("text", "langlinks", "categories",
"links", "templates", "images",
"externallinks", "sections", "revid",
"displaytitle", "iwlinks", "properties")) {
assert(utf8, "logical")
if (is.null(type) && is.null(page)) {
type <- wiki$type
page <- wiki$page
wiki <- wiki$wiki
}
page <- gsub(" ", "_", page)
if (api) {
base_url <- paste0("https://", wiki, ".", type, ".org/w/api.php")
# To ensure it is removed
if (!utf8) utf8 <- ""
prop <- paste(prop, collapse = "|")
query <- c(page = page, mget(c("action", "redirects", "format", "utf8",
"prop")))
query <- query[vapply(query, "!=", logical(1), "")]
url <- crul::url_build(base_url, query = query)
return(url)
} else {
return(paste0("https://", wiki, ".", type, ".org/wiki/", page))
}
}
#' Get MediaWiki Page from API
#'
#' Supports both static page urls and their equivalent API calls.
#'
#' @export
#' @param url (character) MediaWiki page url.
#' @param ... Arguments passed to [wt_wiki_url_build()] if `url`
#' is a static page url.
#' @family MediaWiki functions
#' @return an `HttpResponse` response object from \pkg{crul}
#' @details If the URL given is for a human readable html page,
#' we convert it to equivalent API call - if URL is already an API call,
#' we just use that.
#' @examples \dontrun{
#' wt_wiki_page("https://en.wikipedia.org/wiki/Malus_domestica")
#' }
wt_wiki_page <- function(url, ...) {
stopifnot(inherits(url, "character"))
if (!grepl("/w/api.php?", url)) {
url <- wt_wiki_url_build(wt_wiki_url_parse(url), api = TRUE)
}
cli <- crul::HttpClient$new(url = url)
res <- cli$get(...)
res$raise_for_status()
return(res)
}
#' Parse MediaWiki Page
#'
#' Parses common properties from the result of a MediaWiki API page call.
#'
#' @export
#' @param page ([crul::HttpResponse]) Result of [wt_wiki_page()]
#' @param types (character) List of properties to parse.
#' @param tidy (logical). tidy output to data.frames when possible.
#' Default: `FALSE`
#' @family MediaWiki functions
#' @return a list
#' @details Available properties currently not parsed:
#' title, displaytitle, pageid, revid, redirects, text, categories,
#' links, templates, images, sections, properties, ...
#' @examples \dontrun{
#' pg <- wt_wiki_page("https://en.wikipedia.org/wiki/Malus_domestica")
#' wt_wiki_page_parse(pg)
#' }
wt_wiki_page_parse <- function(page, types = c("langlinks", "iwlinks",
"externallinks"),
tidy = FALSE) {
stopifnot(inherits(page, "HttpResponse"))
result <- list()
json <- jsonlite::fromJSON(rawToChar(page$content), tidy)
if (is.null(json$parse)) {
return(result)
}
## Links to equivalent page in other languages
if ("langlinks" %in% types) {
result$langlinks <- if (tidy) {
atbl(json$parse$langlinks)
} else {
vapply(json$parse$langlinks, "[[", "", "url")
}
}
## Other wiki links
if ("iwlinks" %in% types) {
result$iwlinks <- if (tidy) {
atbl(json$parse$iwlinks$url)
} else {
vapply(json$parse$iwlinks, "[[", "", "url")
}
}
## Links to external resources
if ("externallinks" %in% types) {
result$externallinks <- json$parse$externallinks
}
## Return
return(result)
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/wikipages.R
|
#' Wikipedia
#'
#' @export
#' @template args
#' @param wiki (character) wiki language. default: en. See [wikipedias] for
#' language codes.
#' @family Wikipedia functions
#' @return `wt_wikipedia` returns a list, with slots:
#' \itemize{
#' \item langlinks - language page links
#' \item externallinks - external links
#' \item common_names - a data.frame with `name` and `language` columns
#' \item classification - a data.frame with `rank` and `name` columns
#' \item synonyms - a character vector with taxonomic names
#' }
#'
#' `wt_wikipedia_parse` returns a list with the same slots, as determined by
#' the `types` parameter
#'
#' `wt_wikipedia_search` returns a list with slots for `continue` and
#' `query`, where `query` holds the results, with `query$search` slot with
#' the search results
#' @references <https://www.mediawiki.org/wiki/API:Search> for help on search
#' @examples \dontrun{
#' # high level
#' wt_wikipedia(name = "Malus domestica")
#' wt_wikipedia(name = "Malus domestica", wiki = "fr")
#' wt_wikipedia(name = "Malus domestica", wiki = "da")
#'
#' # low level
#' pg <- wt_wiki_page("https://en.wikipedia.org/wiki/Malus_domestica")
#' wt_wikipedia_parse(pg)
#' wt_wikipedia_parse(pg, tidy = TRUE)
#'
#' # search wikipedia
#' # FIXME: utf=FALSE for now until curl::curl_escape fix
#' # https://github.com/jeroen/curl/issues/228
#' wt_wikipedia_search(query = "Pinus", utf8=FALSE)
#' wt_wikipedia_search(query = "Pinus", wiki = "fr", utf8=FALSE)
#' wt_wikipedia_search(query = "Pinus", wiki = "br", utf8=FALSE)
#'
#' ## curl options
#' # wt_wikipedia_search(query = "Pinus", verbose = TRUE, utf8=FALSE)
#'
#' ## use search results to dig into pages
#' res <- wt_wikipedia_search(query = "Pinus", utf8=FALSE)
#' lapply(res$query$search$title[1:3], wt_wikipedia)
#' }
wt_wikipedia <- function(name, wiki = "en", utf8 = TRUE, ...) {
assert(name, "character")
assert(wiki, "character")
stopifnot(length(name) == 1)
prop <- c("langlinks", "externallinks", "common_names", "classification",
"synonyms")
res <- wt_wiki_url_build(
wiki = wiki, type = "wikipedia", page = name,
utf8 = utf8,
prop = prop)
pg <- wt_wiki_page(res, ...)
wt_wikipedia_parse(page = pg, types = prop, tidy = TRUE)
}
#' @export
#' @rdname wt_wikipedia
wt_wikipedia_parse <- function(page, types = c("langlinks", "iwlinks",
"externallinks", "common_names",
"classification"),
tidy = FALSE) {
result <- wt_wiki_page_parse(page, types = types, tidy = tidy)
json <- jsonlite::fromJSON(rawToChar(page$content), simplifyVector = TRUE)
if (is.null(json$parse)) {
return(result)
}
## Common names
if ("common_names" %in% types) {
xml <- xml2::read_html(json$parse$text[[1]])
names_xml <- list(
regular_bolds = xml2::xml_find_all(
xml,
xpath = "/html/body/p[count(preceding::div[contains(@id, 'toc') or contains(@class, 'toc')]) = 0 and count(preceding::h1) = 0 and count(preceding::h2) = 0 and count(preceding::h3) = 0]//b[not(parent::*[self::i]) and not(i)]"), #nolint
regular_biotabox_header =
xml2::xml_find_all(
xml,
xpath = "(//table[contains(@class, 'infobox biota') or contains(@class, 'infobox_v2 biota')]//th)[1]/b[not(parent::*[self::i]) and not(i)]") #nolint
)
# NOTE: Often unreliable.
regular_title <- stats::na.omit(
match_(json$parse$displaytitle, "^([^<]*)$")[2])
common_names <- unique(c(unlist(lapply(names_xml, xml2::xml_text)),
regular_title))
language <- match_(page$url, 'http[s]*://([^\\.]*)\\.')[2]
cnms <- lapply(common_names, function(name) {
list(name = name, language = language)
})
result$common_names <- if (tidy) atbl(dt_df(cnms)) else cnms
}
## classification
if ("classification" %in% types) {
txt <- xml2::read_html(json$parse$text[[1]])
html <-
xml2::xml_find_all(txt, "//table[@class=\"infobox biota\"]//span")
labels <- xml2::xml_attr(html, "class")
labels <- gsub("^\\s+|\\s$|\\(|\\)", "", labels)
values <- gsub("^\\s+|\\s$", "", xml2::xml_text(html))
clz <- mapply(list, rank = labels, name = values,
SIMPLIFY = FALSE, USE.NAMES = FALSE)
result$classification <- if (tidy) atbl(dt_df(clz)) else clz
}
## synonyms
if ("synonyms" %in% types) {
syns <- list()
txt <- xml2::read_html(json$parse$text[[1]])
html <-
xml2::xml_find_all(txt, "//table[@class=\"infobox biota\"]//td")
syn_node <-
xml2::xml_find_first(html, "//th/a[contains(text(), \"Synonyms\")]")
if (length(stats::na.omit(xml2::xml_text(syn_node))) > 0) {
if (grepl("<br>", html[[length(html)]])) {
syns <- xml2::xml_text(
xml2::xml_find_all(
xml2::xml_find_first(html[[length(html)]], "p"), "i"))
} else {
syn <- strsplit(xml2::xml_text(html[length(html)]), "\n")[[1]]
syns <- syn[nzchar(syn)]
}
}
result$synonyms <- syns
}
return(result)
}
#' @export
#' @rdname wt_wikipedia
wt_wikipedia_search <- function(query, wiki = "en", limit = 10, offset = 0,
utf8 = TRUE, ...) {
assert(wiki, "character")
tmp <- g_et(search_base(wiki, "wikipedia"), sh(query, limit, offset, utf8),
...)
tmp$query$search <- atbl(tmp$query$search)
return(tmp)
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/wikipedia.R
|
#' WikiSpecies
#'
#' @export
#' @template args
#' @family Wikispecies functions
#' @return `wt_wikispecies` returns a list, with slots:
#' \itemize{
#' \item langlinks - language page links
#' \item externallinks - external links
#' \item common_names - a data.frame with `name` and `language` columns
#' \item classification - a data.frame with `rank` and `name` columns
#' }
#'
#' `wt_wikispecies_parse` returns a list
#'
#' `wt_wikispecies_search` returns a list with slots for `continue` and
#' `query`, where `query` holds the results, with `query$search` slot with
#' the search results
#' @references <https://www.mediawiki.org/wiki/API:Search> for help on search
#' @examples \dontrun{
#' # high level
#' wt_wikispecies(name = "Malus domestica")
#' wt_wikispecies(name = "Pinus contorta")
#' wt_wikispecies(name = "Ursus americanus")
#' wt_wikispecies(name = "Balaenoptera musculus")
#'
#' # low level
#' pg <- wt_wiki_page("https://species.wikimedia.org/wiki/Abelmoschus")
#' wt_wikispecies_parse(pg)
#'
#' # search wikispecies
#' # FIXME: utf=FALSE for now until curl::curl_escape fix
#' # https://github.com/jeroen/curl/issues/228
#' wt_wikispecies_search(query = "pine tree", utf8=FALSE)
#'
#' ## use search results to dig into pages
#' res <- wt_wikispecies_search(query = "pine tree", utf8=FALSE)
#' lapply(res$query$search$title[1:3], wt_wikispecies)
#' }
wt_wikispecies <- function(name, utf8 = TRUE, ...) {
assert(name, "character")
stopifnot(length(name) == 1)
prop <- c("langlinks", "externallinks", "common_names", "classification")
res <- wt_wiki_url_build(
wiki = "species", type = "wikimedia", page = name,
utf8 = utf8,
prop = prop)
pg <- wt_wiki_page(res, ...)
wt_wikispecies_parse(pg, prop, tidy = TRUE)
}
#' @export
#' @rdname wt_wikispecies
wt_wikispecies_parse <- function(page, types = c("langlinks", "iwlinks",
"externallinks", "common_names",
"classification"),
tidy = FALSE) {
result <- wt_wiki_page_parse(page, types = types, tidy = tidy)
json <- jsonlite::fromJSON(rawToChar(page$content), simplifyVector = FALSE)
if (is.null(json$parse)) {
return(result)
}
## Common names
if ("common_names" %in% types) {
xml <- xml2::read_html(json$parse$text[[1]])
# XML formats:
# <b>language:</b> [name|<a>name</a>]
# Name formats:
# name1, name2
vernacular_html <- xml2::xml_find_all(
xml,
"(//h2/span[contains(@id, 'Vernacular')]/parent::*/following-sibling::div)[1]" #nolint
)
languages_html <- xml2::xml_find_all(vernacular_html, xpath = "b")
languages <- gsub("\\s*:\\s*", "",
unlist(lapply(languages_html, xml2::xml_text)))
names_html <-
xml2::xml_find_all(
vernacular_html,
"b[not(following-sibling::*[1][self::a])]/following-sibling::text()[1] | b/following-sibling::*[1][self::a]/text()") #nolint
common_names <- gsub("^\\s*", "",
unlist(lapply(names_html, xml2::xml_text)))
cnms <-
mapply(list, name = common_names,
language = languages, SIMPLIFY = FALSE, USE.NAMES = FALSE)
result$common_names <- if (tidy) atbl(dt_df(cnms)) else cnms
}
## classification
if ("classification" %in% types) {
txt <- xml2::read_html(json$parse$text[[1]])
html <- xml2::xml_text(
xml2::xml_find_first(txt, "//table[contains(@class, \"wikitable\")]//p"))
html <- strsplit(html, "\n")[[1]]
labels <-
vapply(html, function(z) strsplit(z, ":")[[1]][1], "", USE.NAMES = FALSE)
values <-
vapply(html, function(z) strsplit(z, ":")[[1]][2], "", USE.NAMES = FALSE)
values <- gsub("^\\s+|\\s+$", "", values)
clz <- mapply(list, rank = labels, name = values,
SIMPLIFY = FALSE, USE.NAMES = FALSE)
result$classification <- if (tidy) atbl(dt_df(clz)) else clz
}
return(result)
}
#' @export
#' @rdname wt_wikispecies
wt_wikispecies_search <- function(query, limit = 10, offset = 0, utf8 = TRUE,
...) {
tmp <- g_et(search_base("species"), sh(query, limit, offset, utf8), ...)
tmp$query$search <- atbl(tmp$query$search)
return(tmp)
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/wikispecies.R
|
#' @title wikitaxa
#' @description Taxonomic Information from Wikipedia
#' @name wikitaxa-package
#' @aliases wikitaxa
#' @docType package
#' @author Scott Chamberlain \email{myrmecocystus@@gmail.com}
#' @author Ethan Welty
#' @keywords package
NULL
#' List of Wikipedias
#'
#' data.frame of 295 rows, with 3 columns:
#' \itemize{
#' \item language - language
#' \item language_local - language in local name
#'  \item wiki - language code for the wiki
#' }
#'
#' From <https://meta.wikimedia.org/wiki/List_of_Wikipedias>
#'
#' @name wikipedias
#' @docType data
#' @keywords data
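#' @examples
#' # a quick look at the bundled table (columns as described above)
#' head(wikipedias)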
NULL
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/wikitaxa-package.R
|
tc <- function(l) Filter(Negate(is.null), l)
dt_df <- function(x) {
  data.table::setDF(data.table::rbindlist(x, fill = TRUE, use.names = TRUE))
}
search_base <- function(x, y = "wikimedia") {
sprintf("https://%s.%s.org/w/api.php", x, y)
}
atbl <- function(x) tibble::as_tibble(x)
g_et <- function(url, args = list(), ...) {
cli <- crul::HttpClient$new(url = url)
res <- cli$get(query = args, ...)
res$raise_for_status()
jsonlite::fromJSON(res$parse("UTF-8"))
}
assert <- function(x, y) {
if (!is.null(x)) {
if (!class(x) %in% y) {
stop(deparse(substitute(x)), " must be of class ",
paste0(y, collapse = ", "), call. = FALSE)
}
}
}
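# For illustration, sh("Pinus", 10, 0, TRUE) below builds the MediaWiki search
# query list:
# list(action = "query", list = "search", srsearch = "Pinus", utf8 = "",
#      format = "json", srprop = "size|wordcount|timestamp|snippet",
#      srlimit = 10, sroffset = 0)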
sh <- function(query, limit, offset, utf8) {
assert(limit, c("integer", "numeric"))
assert(offset, c("integer", "numeric"))
assert(utf8, "logical")
tc(list(
action = "query", list = "search", srsearch = query,
utf8 = if (utf8) "" else NULL, format = "json",
srprop = "size|wordcount|timestamp|snippet",
srlimit = limit, sroffset = offset
))
}
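# For illustration, match_() below returns the full match followed by the
# capture groups, e.g. match_("2020-06-28", "([0-9]{4})-([0-9]{2})") gives
# c("2020-06", "2020", "06").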
match_ <- function(string, pattern) {
pos <- regexec(pattern, string)
regmatches(string, pos)[[1]]
}
strex <- function(string, pattern) {
regmatches(string, gregexpr(pattern, string))
}
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/R/zzz.R
|
---
title: "Introduction to the wikitaxa package"
author: "Scott Chamberlain"
date: "2020-06-28"
output:
html_document:
toc: true
toc_float: true
theme: readable
vignette: >
%\VignetteIndexEntry{Introduction to the wikitaxa package}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
`wikitaxa` - Taxonomy data from Wikipedia
The goal of `wikitaxa` is to allow search and taxonomic data retrieval from
across many Wikimedia sites, including: Wikipedia, Wikicommons, and
Wikispecies.
There are lower level and higher level parts to the package API:
### Low level API
The low level API is meant for power users and gives you more control,
but requires more knowledge.
* `wt_wiki_page()`
* `wt_wiki_page_parse()`
* `wt_wiki_url_build()`
* `wt_wiki_url_parse()`
* `wt_wikispecies_parse()`
* `wt_wikicommons_parse()`
* `wt_wikipedia_parse()`
### High level API
The high level API is meant to be easier and faster to use.
* `wt_data()`
* `wt_data_id()`
* `wt_wikispecies()`
* `wt_wikicommons()`
* `wt_wikipedia()`
Search functions:
* `wt_wikicommons_search()`
* `wt_wikispecies_search()`
* `wt_wikipedia_search()`
## Installation
CRAN version
```r
install.packages("wikitaxa")
```
Dev version
```r
remotes::install_github("ropensci/wikitaxa")
```
```r
library("wikitaxa")
```
## wiki data
```r
z <- wt_data("Poa annua")
names(z)
#> [1] "labels" "descriptions" "aliases" "sitelinks" "claims"
head(z$labels)
#> language value
#> 1 pt Poa annua
#> 2 is Varpasveifgras
#> 3 pl Wiechlina roczna
#> 4 fr Pâturin annuel
#> 5 es Poa annua
#> 6 en Poa annua
```
Get a Wikidata ID
```r
wt_data_id("Mimulus foliatus")
#> [1] "Q6495130"
#> attr(,"class")
#> [1] "wiki_id"
```
## wikipedia
lower level
```r
pg <- wt_wiki_page("https://en.wikipedia.org/wiki/Malus_domestica")
res <- wt_wiki_page_parse(pg)
res$iwlinks
#> [1] "https://commons.wikimedia.org/wiki/Category:Apples"
#> [2] "https://commons.wikimedia.org/wiki/Category:Apple_cultivars"
#> [3] "https://www.wikidata.org/wiki/Q158657"
#> [4] "https://www.wikidata.org/wiki/Q18674606"
#> [5] "https://species.wikimedia.org/wiki/Malus_pumila"
#> [6] "https://species.wikimedia.org/wiki/Malus_domestica"
```
higher level
```r
res <- wt_wikipedia("Malus domestica")
res$common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Apple en
res$classification
#> # A tibble: 3 x 2
#> rank name
#> <chr> <chr>
#> 1 plainlinks ""
#> 2 binomial "Malus domestica"
#> 3 <NA> ""
```
choose a wikipedia language
```r
# French
wt_wikipedia(name = "Malus domestica", wiki = "fr")
#> $langlinks
#> # A tibble: 60 x 5
#> lang url langname autonym `*`
#> <chr> <chr> <chr> <chr> <chr>
#> 1 als https://als.wikipedia.org/wiki/Kultu… Alemannis… Alemannis… Kulturapf…
#> 2 am https://am.wikipedia.org/wiki/%E1%89… amharique አማርኛ ቱፋሕ
#> 3 ast https://ast.wikipedia.org/wiki/Malus… asturien asturianu Malus dom…
#> 4 az https://az.wikipedia.org/wiki/M%C9%9… azéri azərbayca… Mədəni al…
#> 5 bat-s… https://bat-smg.wikipedia.org/wiki/V… Samogitian žemaitėška Vuobelės
#> 6 bg https://bg.wikipedia.org/wiki/%D0%94… bulgare български Домашна я…
#> 7 bpy https://bpy.wikipedia.org/wiki/%E0%A… bishnupri… বিষ্ণুপ্র… আপেল
#> 8 ca https://ca.wikipedia.org/wiki/Pomera… catalan català Pomera co…
#> 9 cs https://cs.wikipedia.org/wiki/Jablo%… tchèque čeština Jabloň do…
#> 10 csb https://csb.wikipedia.org/wiki/Dom%C… kachoube kaszëbsczi Domôcô ja…
#> # … with 50 more rows
#>
#> $externallinks
#> [1] "http://www.cabi-publishing.org/pdf/Books/0851995926/0851995926_Chap01.pdf"
#> [2] "http://www.umass.edu/fruitadvisor/fruitnotes/ontheorigin.pdf"
#> [3] "http://www.applegenome.org"
#> [4] "http://societeradio-canada.info/emissions/les_annees_lumiere/2010-2011/"
#> [5] "http://www.nature.com/ng/journal/vaop/ncurrent/full/ng.654.html"
#> [6] "https://gallica.bnf.fr/ark:/12148/bpt6k28582v"
#> [7] "http://worldcat.org/issn/1471-2229&lang=fr"
#> [8] "https://www.ncbi.nlm.nih.gov/pubmed/26924309"
#> [9] "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4770685"
#> [10] "https://dx.doi.org/10.1186%2Fs12870-016-0739-y"
#> [11] "https://doi.org/10.1186/s12870-016-0739-y"
#> [12] "http://www.hamblenne.be/LISTE_RGF.pdf"
#> [13] "http://www.arcticapples.com/blog/john/demystifying-arctic-apples#.UaeX-Jzjmw5"
#> [14] "http://www.cctec.cornell.edu/plants/GENEVA-Apple-Rootstocks-Comparison-Chart-120911.pdf"
#> [15] "https://commons.wikimedia.org/wiki/Category:Malus_domestica?uselang=fr"
#> [16] "http://www.tela-botanica.org/page:eflore"
#> [17] "http://www.tela-botanica.org/bdtfx-nn-40744"
#> [18] "http://www.cbif.gc.ca/acp/fra/siti/regarder?tsn=516655"
#> [19] "http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=516655"
#> [20] "http://www.cbif.gc.ca/acp/fra/siti/regarder?tsn=25262"
#> [21] "http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=25262"
#> [22] "http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?lin=s&p=has_linkout&id=3750"
#> [23] "http://www.ars-grin.gov/"
#> [24] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?104681"
#> [25] "https://www.biolib.cz/en/"
#> [26] "https://www.biolib.cz/en/taxon/id39552/"
#> [27] "http://inpn.mnhn.fr/isb/espece/cd_nom/107207"
#> [28] "http://www.bmlisieux.com/normandie/roblet.htm"
#> [29] "http://site.voila.fr/babadubonsai/docum/docmal.html"
#> [30] "http://www.fruitiers.net"
#> [31] "http://www.inra.fr/hyppz/CULTURES/3c---003.htm"
#> [32] "http://cat.inist.fr/?aModele=afficheN&cpsidt=15506238"
#> [33] "http://www.omafra.gov.on.ca/french/crops/facts/98-014.htm"
#> [34] "http://www.gardenaction.co.uk/fruit_veg_diary/fruit_veg_mini_project_september_2_apple.asp"
#>
#> $common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Pommier domestique fr
#>
#> $classification
#> # A tibble: 0 x 0
#>
#> $synonyms
#> list()
# Slovak
wt_wikipedia(name = "Malus domestica", wiki = "sk")
#> $langlinks
#> # A tibble: 60 x 5
#> lang url langname autonym `*`
#> <chr> <chr> <chr> <chr> <chr>
#> 1 als https://als.wikipedia.org/wiki/K… Alemannisch Alemannis… Kulturap…
#> 2 am https://am.wikipedia.org/wiki/%E… amharčina አማርኛ ቱፋሕ
#> 3 ast https://ast.wikipedia.org/wiki/M… astúrčina asturianu Malus do…
#> 4 az https://az.wikipedia.org/wiki/M%… azerbajdžančina azərbayca… Mədəni a…
#> 5 bat-s… https://bat-smg.wikipedia.org/wi… Samogitian žemaitėška Vuobelės
#> 6 bg https://bg.wikipedia.org/wiki/%D… bulharčina български Домашна …
#> 7 bpy https://bpy.wikipedia.org/wiki/%… bišnuprijskoma… বিষ্ণুপ্র… আপেল
#> 8 ca https://ca.wikipedia.org/wiki/Po… katalánčina català Pomera c…
#> 9 cs https://cs.wikipedia.org/wiki/Ja… čeština čeština Jabloň d…
#> 10 csb https://csb.wikipedia.org/wiki/D… kašubčina kaszëbsczi Domôcô j…
#> # … with 50 more rows
#>
#> $externallinks
#> list()
#>
#> $common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Jabloň domáca sk
#>
#> $classification
#> # A tibble: 0 x 0
#>
#> $synonyms
#> list()
# Vietnamese
wt_wikipedia(name = "Malus domestica", wiki = "vi")
#> $langlinks
#> # A tibble: 60 x 5
#> lang url langname autonym `*`
#> <chr> <chr> <chr> <chr> <chr>
#> 1 als https://als.wikipedia.org/wiki/Kul… Alemannisch Alemanni… Kulturapf…
#> 2 am https://am.wikipedia.org/wiki/%E1%… Tiếng Amha… አማርኛ ቱፋሕ
#> 3 ast https://ast.wikipedia.org/wiki/Mal… Tiếng Astu… asturianu Malus dom…
#> 4 az https://az.wikipedia.org/wiki/M%C9… Tiếng Azer… azərbayc… Mədəni al…
#> 5 zh-min-… https://zh-min-nan.wikipedia.org/w… Chinese (M… Bân-lâm-… Phōng-kó-…
#> 6 bg https://bg.wikipedia.org/wiki/%D0%… Tiếng Bulg… български Домашна я…
#> 7 ca https://ca.wikipedia.org/wiki/Pome… Tiếng Cata… català Pomera co…
#> 8 cs https://cs.wikipedia.org/wiki/Jabl… Tiếng Séc čeština Jabloň do…
#> 9 da https://da.wikipedia.org/wiki/Almi… Tiếng Đan … dansk Almindeli…
#> 10 de https://de.wikipedia.org/wiki/Kult… Tiếng Đức Deutsch Kulturapf…
#> # … with 50 more rows
#>
#> $externallinks
#> [1] "http://biology.umaine.edu/Amelanchier/Rosaceae_2007.pdf"
#> [2] "//dx.doi.org/10.1007%2Fs00606-007-0539-9"
#> [3] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?410495"
#> [4] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?30530"
#> [5] "http://www.uga.edu/fruit/apple.html"
#> [6] "//dx.doi.org/10.3732%2Fajb.93.3.357"
#> [7] "http://www.plosgenetics.org/article/info:doi%2F10.1371%2Fjournal.pgen.1002703"
#> [8] "//www.ncbi.nlm.nih.gov/pmc/articles/PMC3349737"
#> [9] "//www.ncbi.nlm.nih.gov/pubmed/22589740"
#> [10] "//dx.doi.org/10.1371%2Fjournal.pgen.1002703"
#> [11] "http://news.sciencemag.org/sciencenow/2012/05/scienceshot-the-secret-history-o.html"
#> [12] "http://www.plantpress.com/wildlife/o523-apple.php"
#> [13] "http://cahnrsnews.wsu.edu/2010/08/29/apple-cup-rivals-contribute-to-apple-genome-sequencing/"
#> [14] "http://www.nature.com/ng/journal/v42/n10/full/ng.654.html"
#> [15] "http://www.alphagalileo.org/ViewItem.aspx?ItemId=83717&CultureCode=en"
#> [16] "http://www.ornl.gov/sci/techresources/Human_Genome/project/info.shtml"
#> [17] "https://commons.wikimedia.org/wiki/Apple?uselang=vi"
#> [18] "https://commons.wikimedia.org/wiki/Category:Malus_domestica?uselang=vi"
#> [19] "http://bachkhoatoanthu.vass.gov.vn/noidung/tudien/Lists/GiaiNghia/View_Detail.aspx?ItemID=5007"
#> [20] "http://www.eol.org/pages/629094"
#> [21] "http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=3750"
#> [22] "http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=516655"
#> [23] "http://www.catalogueoflife.org/col/details/species/id/19538828/synonym/19539435"
#> [24] "https://www.biolib.cz/cz/taxon/id39552"
#> [25] "https://gd.eppo.int/taxon/MABSD"
#> [26] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?id=104681"
#> [27] "http://www.ipni.org/ipni/idPlantNameSearch.do?id=726282-1"
#> [28] "https://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=516655"
#> [29] "https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=3750"
#> [30] "http://www.nzor.org.nz/names/14d024a2-d821-48e3-95d8-f0dd206c70a0"
#> [31] "http://www.pfaf.org/user/Plant.aspx?LatinName=Malus+domestica"
#> [32] "http://www.theplantlist.org/tpl1.1/record/rjp-454"
#> [33] "http://www.plantsoftheworldonline.org/taxon/urn:lsid:ipni.org:names:726282-1"
#> [34] "http://legacy.tropicos.org/Name/27804420"
#> [35] "https://vicflora.rbg.vic.gov.au/flora/taxon/e41b929d-b709-4f4c-8dbe-2a9241e2342b"
#> [36] "http://www.ipni.org/ipni/idPlantNameSearch.do?id=60476301-2"
#> [37] "http://www.plantsoftheworldonline.org/taxon/urn:lsid:ipni.org:names:60476301-2"
#> [38] "http://legacy.tropicos.org/Name/100473089"
#>
#> $common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Malus domestica vi
#>
#> $classification
#> # A tibble: 0 x 0
#>
#> $synonyms
#> list()
```
search
```r
wt_wikipedia_search(query = "Pinus")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 3374
#>
#> $query$searchinfo$suggestion
#> [1] "penis"
#>
#> $query$searchinfo$suggestionsnippet
#> [1] "penis"
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <int> <int> <chr> <chr>
#> 1 0 Pine 39389 36555 4058 "A pine is any conifer in… 2020-06-26…
#> 2 0 Pinus po… 532941 31087 3069 "misidentified it as <spa… 2020-06-19…
#> 3 0 Pinus co… 507717 20486 2343 "all pines (member specie… 2020-05-25…
#> 4 0 Pinus je… 463015 9130 1008 "long, with a large (15 t… 2019-12-22…
#> 5 0 Pinus st… 464301 31478 3815 "3 ft) tall & wide. M… 2020-06-22…
#> 6 0 Pinus re… 507802 7501 783 ""<span class=\"sear… 2020-05-08…
#> 7 0 Pinus lo… 649634 15408 1741 "sometimes form dense for… 2020-04-14…
#> 8 0 Pinus la… 459402 11464 1338 "Fire affected this speci… 2020-01-15…
#> 9 0 Pinus ni… 438963 11947 1421 "hypodermal cells. P. nig… 2020-04-03…
#> 10 0 Pinus mu… 438946 11964 901 "encyclopedia) is still r… 2020-06-17…
```
search supports languages
```r
wt_wikipedia_search(query = "Pinus", wiki = "fr")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 990
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <int> <int> <chr> <chr>
#> 1 0 Pin (pl… 89798 83647 9325 "<span class=\"searchmatc… 2020-05-23…
#> 2 0 Pinus p… 121544 31274 3892 "<span class=\"searchmatc… 2020-04-30…
#> 3 0 Pinus c… 98421 8237 959 "<span class=\"searchmatc… 2019-05-30…
#> 4 0 Pinus n… 950330 26623 3013 "recycler}}. <span class=… 2020-03-30…
#> 5 0 Pin syl… 121562 13725 1611 "<span class=\"searchmatc… 2020-03-22…
#> 6 0 Pinus h… 117280 22257 2671 "<span class=\"searchmatc… 2020-05-07…
#> 7 0 Pin par… 138378 8763 916 "<span class=\"searchmatc… 2020-04-26…
#> 8 0 Pinus s… 776950 11662 1628 "les articles homonymes, … 2020-04-14…
#> 9 0 Pinus m… 2480854 21747 2310 "<span class=\"searchmatc… 2019-02-25…
#> 10 0 Pinus u… 3208429 6316 720 "significations, voir Pin… 2020-03-09…
```
## wikicommons
lower level
```r
pg <- wt_wiki_page("https://commons.wikimedia.org/wiki/Abelmoschus")
res <- wt_wikicommons_parse(pg)
res$common_names[1:3]
#> [[1]]
#> [[1]]$name
#> [1] "okra"
#>
#> [[1]]$language
#> [1] "en"
#>
#>
#> [[2]]
#> [[2]]$name
#> [1] "مسكي"
#>
#> [[2]]$language
#> [1] "ar"
#>
#>
#> [[3]]
#> [[3]]$name
#> [1] "Abelmoş"
#>
#> [[3]]$language
#> [1] "az"
```
higher level
```r
res <- wt_wikicommons("Abelmoschus")
res$classification
#> # A tibble: 15 x 2
#> rank name
#> <chr> <chr>
#> 1 Domain "Eukaryota"
#> 2 unranked "Archaeplastida"
#> 3 Regnum "Plantae"
#> 4 Cladus "angiosperms"
#> 5 Cladus "eudicots"
#> 6 Cladus "core eudicots"
#> 7 Cladus "superrosids"
#> 8 Cladus "rosids"
#> 9 Cladus "eurosids II"
#> 10 Ordo "Malvales"
#> 11 Familia "Malvaceae"
#> 12 Subfamilia "Malvoideae"
#> 13 Tribus "Hibisceae"
#> 14 Genus "Abelmoschus"
#> 15 Authority " Medik. (1787)"
res$common_names
#> # A tibble: 19 x 2
#> name language
#> <chr> <chr>
#> 1 okra en
#> 2 مسكي ar
#> 3 Abelmoş az
#> 4 Bamja bs
#> 5 Ibiškovec cs
#> 6 Bisameibisch de
#> 7 Okrat fi
#> 8 Abelmosco gl
#> 9 Abelmošus hr
#> 10 Ybiškė lt
#> 11 അബെൽമോസ്കസ് ml
#> 12 Абельмош mrj
#> 13 Abelmoskusslekta nn
#> 14 Piżmian pl
#> 15 Абельмош ru
#> 16 Okrasläktet sv
#> 17 Абельмош udm
#> 18 Chi Vông vang vi
#> 19 黄葵属 zh
```
search
```r
wt_wikicommons_search(query = "Pinus")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 270
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <lgl> <int> <chr> <chr>
#> 1 0 Pinus sylvestris 9066 NA 0 "" 2020-05-13T19:…
#> 2 0 Pinus ponderosa 250435 NA 0 "" 2020-04-18T15:…
#> 3 0 Pinus nigra 64703 NA 0 "" 2018-03-06T10:…
#> 4 0 Pinus 82071 NA 0 "" 2017-05-28T10:…
#> 5 0 Pinus mugo 132442 NA 0 "" 2019-07-26T09:…
#> 6 0 Rogów Arboretum 10563490 NA 0 "" 2020-01-01T13:…
#> 7 0 Pinus contorta 186918 NA 0 "" 2020-01-19T19:…
#> 8 0 Anacortes Community F… 2989013 NA 0 "" 2014-12-10T15:…
#> 9 0 Pinus halepensis 172181 NA 0 "" 2018-05-05T10:…
#> 10 0 Pinus brutia 139389 NA 0 "" 2014-11-23T11:…
```
## wikispecies
lower level
```r
pg <- wt_wiki_page("https://species.wikimedia.org/wiki/Malus_domestica")
res <- wt_wikispecies_parse(pg, types = "common_names")
res$common_names[1:3]
#> [[1]]
#> [[1]]$name
#> [1] "Ябълка"
#>
#> [[1]]$language
#> [1] "български"
#>
#>
#> [[2]]
#> [[2]]$name
#> [1] "Poma, pomera"
#>
#> [[2]]$language
#> [1] "català"
#>
#>
#> [[3]]
#> [[3]]$name
#> [1] "jabloň domácí"
#>
#> [[3]]$language
#> [1] "čeština"
```
higher level
```r
res <- wt_wikispecies("Malus domestica")
res$classification
#> # A tibble: 8 x 2
#> rank name
#> <chr> <chr>
#> 1 Superregnum Eukaryota
#> 2 Regnum Plantae
#> 3 Cladus Angiosperms
#> 4 Cladus Eudicots
#> 5 Cladus Core eudicots
#> 6 Cladus Rosids
#> 7 Cladus Eurosids I
#> 8 Ordo Rosales
res$common_names
#> # A tibble: 22 x 2
#> name language
#> <chr> <chr>
#> 1 Ябълка български
#> 2 Poma, pomera català
#> 3 jabloň domácí čeština
#> 4 Apfel Deutsch
#> 5 Μηλιά Ελληνικά
#> 6 Apple English
#> 7 Manzano español
#> 8 Aed-õunapuu eesti
#> 9 Tarhaomenapuu suomi
#> 10 Aapel Nordfriisk
#> # … with 12 more rows
```
search
```r
wt_wikispecies_search(query = "Pinus")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 515
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <int> <int> <chr> <chr>
#> 1 0 Pinus 1.74e4 5737 784 "Familia: Pinaceae Ge… 2020-06-06…
#> 2 0 Pinus halepe… 4.51e4 4047 580 "Pinaceae Genus: <spa… 2019-12-20…
#> 3 0 Pinus pinea 4.51e4 1949 406 "Familia: Pinaceae Ge… 2019-10-19…
#> 4 0 Pinus veitch… 1.34e6 1450 181 "Familia: Pinaceae Ge… 2019-07-19…
#> 5 0 Pinus pumila 7.35e4 1395 189 "Pinaceae Genus: <spa… 2019-07-14…
#> 6 0 Pinus subg. … 3.01e5 358 27 "Pinaceae Genus: <spa… 2019-11-24…
#> 7 0 Pinus clausa 4.50e4 1552 208 "Pinaceae Genus: <spa… 2019-08-15…
#> 8 0 Pinus pseudo… 1.48e6 2114 310 "Genus: <span class=\… 2020-05-21…
#> 9 0 Pinus pinast… 1.32e6 2764 379 "Pinaceae Genus: <spa… 2019-12-20…
#> 10 0 Pinus nigra … 3.27e5 1799 138 "Genus: <span class=\… 2020-03-02…
```
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/inst/doc/wikitaxa.Rmd
|
---
title: "Introduction to the wikitaxa package"
author: "Scott Chamberlain"
date: "2020-06-28"
output:
html_document:
toc: true
toc_float: true
theme: readable
vignette: >
%\VignetteIndexEntry{Introduction to the wikitaxa package}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
`wikitaxa` - Taxonomy data from Wikipedia
The goal of `wikitaxa` is to allow search and taxonomic data retrieval from
across many Wikimedia sites, including: Wikipedia, Wikicommons, and
Wikispecies.
There are lower level and higher level parts to the package API:
### Low level API
The low level API is meant for power users and gives you more control,
but requires more knowledge.
* `wt_wiki_page()`
* `wt_wiki_page_parse()`
* `wt_wiki_url_build()`
* `wt_wiki_url_parse()`
* `wt_wikispecies_parse()`
* `wt_wikicommons_parse()`
* `wt_wikipedia_parse()`
### High level API
The high level API is meant to be easier and faster to use.
* `wt_data()`
* `wt_data_id()`
* `wt_wikispecies()`
* `wt_wikicommons()`
* `wt_wikipedia()`
Search functions:
* `wt_wikicommons_search()`
* `wt_wikispecies_search()`
* `wt_wikipedia_search()`
## Installation
CRAN version
```r
install.packages("wikitaxa")
```
Dev version
```r
remotes::install_github("ropensci/wikitaxa")
```
```r
library("wikitaxa")
```
## wiki data
```r
z <- wt_data("Poa annua")
names(z)
#> [1] "labels" "descriptions" "aliases" "sitelinks" "claims"
head(z$labels)
#> language value
#> 1 pt Poa annua
#> 2 is Varpasveifgras
#> 3 pl Wiechlina roczna
#> 4 fr Pâturin annuel
#> 5 es Poa annua
#> 6 en Poa annua
```
Get a Wikidata ID
```r
wt_data_id("Mimulus foliatus")
#> [1] "Q6495130"
#> attr(,"class")
#> [1] "wiki_id"
```
## wikipedia
lower level
```r
pg <- wt_wiki_page("https://en.wikipedia.org/wiki/Malus_domestica")
res <- wt_wiki_page_parse(pg)
res$iwlinks
#> [1] "https://commons.wikimedia.org/wiki/Category:Apples"
#> [2] "https://commons.wikimedia.org/wiki/Category:Apple_cultivars"
#> [3] "https://www.wikidata.org/wiki/Q158657"
#> [4] "https://www.wikidata.org/wiki/Q18674606"
#> [5] "https://species.wikimedia.org/wiki/Malus_pumila"
#> [6] "https://species.wikimedia.org/wiki/Malus_domestica"
```
higher level
```r
res <- wt_wikipedia("Malus domestica")
res$common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Apple en
res$classification
#> # A tibble: 3 x 2
#> rank name
#> <chr> <chr>
#> 1 plainlinks ""
#> 2 binomial "Malus domestica"
#> 3 <NA> ""
```
choose a wikipedia language
```r
# French
wt_wikipedia(name = "Malus domestica", wiki = "fr")
#> $langlinks
#> # A tibble: 60 x 5
#> lang url langname autonym `*`
#> <chr> <chr> <chr> <chr> <chr>
#> 1 als https://als.wikipedia.org/wiki/Kultu… Alemannis… Alemannis… Kulturapf…
#> 2 am https://am.wikipedia.org/wiki/%E1%89… amharique አማርኛ ቱፋሕ
#> 3 ast https://ast.wikipedia.org/wiki/Malus… asturien asturianu Malus dom…
#> 4 az https://az.wikipedia.org/wiki/M%C9%9… azéri azərbayca… Mədəni al…
#> 5 bat-s… https://bat-smg.wikipedia.org/wiki/V… Samogitian žemaitėška Vuobelės
#> 6 bg https://bg.wikipedia.org/wiki/%D0%94… bulgare български Домашна я…
#> 7 bpy https://bpy.wikipedia.org/wiki/%E0%A… bishnupri… বিষ্ণুপ্র… আপেল
#> 8 ca https://ca.wikipedia.org/wiki/Pomera… catalan català Pomera co…
#> 9 cs https://cs.wikipedia.org/wiki/Jablo%… tchèque čeština Jabloň do…
#> 10 csb https://csb.wikipedia.org/wiki/Dom%C… kachoube kaszëbsczi Domôcô ja…
#> # … with 50 more rows
#>
#> $externallinks
#> [1] "http://www.cabi-publishing.org/pdf/Books/0851995926/0851995926_Chap01.pdf"
#> [2] "http://www.umass.edu/fruitadvisor/fruitnotes/ontheorigin.pdf"
#> [3] "http://www.applegenome.org"
#> [4] "http://societeradio-canada.info/emissions/les_annees_lumiere/2010-2011/"
#> [5] "http://www.nature.com/ng/journal/vaop/ncurrent/full/ng.654.html"
#> [6] "https://gallica.bnf.fr/ark:/12148/bpt6k28582v"
#> [7] "http://worldcat.org/issn/1471-2229&lang=fr"
#> [8] "https://www.ncbi.nlm.nih.gov/pubmed/26924309"
#> [9] "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4770685"
#> [10] "https://dx.doi.org/10.1186%2Fs12870-016-0739-y"
#> [11] "https://doi.org/10.1186/s12870-016-0739-y"
#> [12] "http://www.hamblenne.be/LISTE_RGF.pdf"
#> [13] "http://www.arcticapples.com/blog/john/demystifying-arctic-apples#.UaeX-Jzjmw5"
#> [14] "http://www.cctec.cornell.edu/plants/GENEVA-Apple-Rootstocks-Comparison-Chart-120911.pdf"
#> [15] "https://commons.wikimedia.org/wiki/Category:Malus_domestica?uselang=fr"
#> [16] "http://www.tela-botanica.org/page:eflore"
#> [17] "http://www.tela-botanica.org/bdtfx-nn-40744"
#> [18] "http://www.cbif.gc.ca/acp/fra/siti/regarder?tsn=516655"
#> [19] "http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=516655"
#> [20] "http://www.cbif.gc.ca/acp/fra/siti/regarder?tsn=25262"
#> [21] "http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=25262"
#> [22] "http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?lin=s&p=has_linkout&id=3750"
#> [23] "http://www.ars-grin.gov/"
#> [24] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?104681"
#> [25] "https://www.biolib.cz/en/"
#> [26] "https://www.biolib.cz/en/taxon/id39552/"
#> [27] "http://inpn.mnhn.fr/isb/espece/cd_nom/107207"
#> [28] "http://www.bmlisieux.com/normandie/roblet.htm"
#> [29] "http://site.voila.fr/babadubonsai/docum/docmal.html"
#> [30] "http://www.fruitiers.net"
#> [31] "http://www.inra.fr/hyppz/CULTURES/3c---003.htm"
#> [32] "http://cat.inist.fr/?aModele=afficheN&cpsidt=15506238"
#> [33] "http://www.omafra.gov.on.ca/french/crops/facts/98-014.htm"
#> [34] "http://www.gardenaction.co.uk/fruit_veg_diary/fruit_veg_mini_project_september_2_apple.asp"
#>
#> $common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Pommier domestique fr
#>
#> $classification
#> # A tibble: 0 x 0
#>
#> $synonyms
#> list()
# Slovak
wt_wikipedia(name = "Malus domestica", wiki = "sk")
#> $langlinks
#> # A tibble: 60 x 5
#> lang url langname autonym `*`
#> <chr> <chr> <chr> <chr> <chr>
#> 1 als https://als.wikipedia.org/wiki/K… Alemannisch Alemannis… Kulturap…
#> 2 am https://am.wikipedia.org/wiki/%E… amharčina አማርኛ ቱፋሕ
#> 3 ast https://ast.wikipedia.org/wiki/M… astúrčina asturianu Malus do…
#> 4 az https://az.wikipedia.org/wiki/M%… azerbajdžančina azərbayca… Mədəni a…
#> 5 bat-s… https://bat-smg.wikipedia.org/wi… Samogitian žemaitėška Vuobelės
#> 6 bg https://bg.wikipedia.org/wiki/%D… bulharčina български Домашна …
#> 7 bpy https://bpy.wikipedia.org/wiki/%… bišnuprijskoma… বিষ্ণুপ্র… আপেল
#> 8 ca https://ca.wikipedia.org/wiki/Po… katalánčina català Pomera c…
#> 9 cs https://cs.wikipedia.org/wiki/Ja… čeština čeština Jabloň d…
#> 10 csb https://csb.wikipedia.org/wiki/D… kašubčina kaszëbsczi Domôcô j…
#> # … with 50 more rows
#>
#> $externallinks
#> list()
#>
#> $common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Jabloň domáca sk
#>
#> $classification
#> # A tibble: 0 x 0
#>
#> $synonyms
#> list()
# Vietnamese
wt_wikipedia(name = "Malus domestica", wiki = "vi")
#> $langlinks
#> # A tibble: 60 x 5
#> lang url langname autonym `*`
#> <chr> <chr> <chr> <chr> <chr>
#> 1 als https://als.wikipedia.org/wiki/Kul… Alemannisch Alemanni… Kulturapf…
#> 2 am https://am.wikipedia.org/wiki/%E1%… Tiếng Amha… አማርኛ ቱፋሕ
#> 3 ast https://ast.wikipedia.org/wiki/Mal… Tiếng Astu… asturianu Malus dom…
#> 4 az https://az.wikipedia.org/wiki/M%C9… Tiếng Azer… azərbayc… Mədəni al…
#> 5 zh-min-… https://zh-min-nan.wikipedia.org/w… Chinese (M… Bân-lâm-… Phōng-kó-…
#> 6 bg https://bg.wikipedia.org/wiki/%D0%… Tiếng Bulg… български Домашна я…
#> 7 ca https://ca.wikipedia.org/wiki/Pome… Tiếng Cata… català Pomera co…
#> 8 cs https://cs.wikipedia.org/wiki/Jabl… Tiếng Séc čeština Jabloň do…
#> 9 da https://da.wikipedia.org/wiki/Almi… Tiếng Đan … dansk Almindeli…
#> 10 de https://de.wikipedia.org/wiki/Kult… Tiếng Đức Deutsch Kulturapf…
#> # … with 50 more rows
#>
#> $externallinks
#> [1] "http://biology.umaine.edu/Amelanchier/Rosaceae_2007.pdf"
#> [2] "//dx.doi.org/10.1007%2Fs00606-007-0539-9"
#> [3] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?410495"
#> [4] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?30530"
#> [5] "http://www.uga.edu/fruit/apple.html"
#> [6] "//dx.doi.org/10.3732%2Fajb.93.3.357"
#> [7] "http://www.plosgenetics.org/article/info:doi%2F10.1371%2Fjournal.pgen.1002703"
#> [8] "//www.ncbi.nlm.nih.gov/pmc/articles/PMC3349737"
#> [9] "//www.ncbi.nlm.nih.gov/pubmed/22589740"
#> [10] "//dx.doi.org/10.1371%2Fjournal.pgen.1002703"
#> [11] "http://news.sciencemag.org/sciencenow/2012/05/scienceshot-the-secret-history-o.html"
#> [12] "http://www.plantpress.com/wildlife/o523-apple.php"
#> [13] "http://cahnrsnews.wsu.edu/2010/08/29/apple-cup-rivals-contribute-to-apple-genome-sequencing/"
#> [14] "http://www.nature.com/ng/journal/v42/n10/full/ng.654.html"
#> [15] "http://www.alphagalileo.org/ViewItem.aspx?ItemId=83717&CultureCode=en"
#> [16] "http://www.ornl.gov/sci/techresources/Human_Genome/project/info.shtml"
#> [17] "https://commons.wikimedia.org/wiki/Apple?uselang=vi"
#> [18] "https://commons.wikimedia.org/wiki/Category:Malus_domestica?uselang=vi"
#> [19] "http://bachkhoatoanthu.vass.gov.vn/noidung/tudien/Lists/GiaiNghia/View_Detail.aspx?ItemID=5007"
#> [20] "http://www.eol.org/pages/629094"
#> [21] "http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=3750"
#> [22] "http://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=516655"
#> [23] "http://www.catalogueoflife.org/col/details/species/id/19538828/synonym/19539435"
#> [24] "https://www.biolib.cz/cz/taxon/id39552"
#> [25] "https://gd.eppo.int/taxon/MABSD"
#> [26] "https://npgsweb.ars-grin.gov/gringlobal/taxonomydetail.aspx?id=104681"
#> [27] "http://www.ipni.org/ipni/idPlantNameSearch.do?id=726282-1"
#> [28] "https://www.itis.gov/servlet/SingleRpt/SingleRpt?search_topic=TSN&search_value=516655"
#> [29] "https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=3750"
#> [30] "http://www.nzor.org.nz/names/14d024a2-d821-48e3-95d8-f0dd206c70a0"
#> [31] "http://www.pfaf.org/user/Plant.aspx?LatinName=Malus+domestica"
#> [32] "http://www.theplantlist.org/tpl1.1/record/rjp-454"
#> [33] "http://www.plantsoftheworldonline.org/taxon/urn:lsid:ipni.org:names:726282-1"
#> [34] "http://legacy.tropicos.org/Name/27804420"
#> [35] "https://vicflora.rbg.vic.gov.au/flora/taxon/e41b929d-b709-4f4c-8dbe-2a9241e2342b"
#> [36] "http://www.ipni.org/ipni/idPlantNameSearch.do?id=60476301-2"
#> [37] "http://www.plantsoftheworldonline.org/taxon/urn:lsid:ipni.org:names:60476301-2"
#> [38] "http://legacy.tropicos.org/Name/100473089"
#>
#> $common_names
#> # A tibble: 1 x 2
#> name language
#> <chr> <chr>
#> 1 Malus domestica vi
#>
#> $classification
#> # A tibble: 0 x 0
#>
#> $synonyms
#> list()
```
search
```r
wt_wikipedia_search(query = "Pinus")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 3374
#>
#> $query$searchinfo$suggestion
#> [1] "penis"
#>
#> $query$searchinfo$suggestionsnippet
#> [1] "penis"
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <int> <int> <chr> <chr>
#> 1 0 Pine 39389 36555 4058 "A pine is any conifer in… 2020-06-26…
#> 2 0 Pinus po… 532941 31087 3069 "misidentified it as <spa… 2020-06-19…
#> 3 0 Pinus co… 507717 20486 2343 "all pines (member specie… 2020-05-25…
#> 4 0 Pinus je… 463015 9130 1008 "long, with a large (15 t… 2019-12-22…
#> 5 0 Pinus st… 464301 31478 3815 "3 ft) tall & wide. M… 2020-06-22…
#> 6 0 Pinus re… 507802 7501 783 ""<span class=\"sear… 2020-05-08…
#> 7 0 Pinus lo… 649634 15408 1741 "sometimes form dense for… 2020-04-14…
#> 8 0 Pinus la… 459402 11464 1338 "Fire affected this speci… 2020-01-15…
#> 9 0 Pinus ni… 438963 11947 1421 "hypodermal cells. P. nig… 2020-04-03…
#> 10 0 Pinus mu… 438946 11964 901 "encyclopedia) is still r… 2020-06-17…
```
search supports languages
```r
wt_wikipedia_search(query = "Pinus", wiki = "fr")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 990
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <int> <int> <chr> <chr>
#> 1 0 Pin (pl… 89798 83647 9325 "<span class=\"searchmatc… 2020-05-23…
#> 2 0 Pinus p… 121544 31274 3892 "<span class=\"searchmatc… 2020-04-30…
#> 3 0 Pinus c… 98421 8237 959 "<span class=\"searchmatc… 2019-05-30…
#> 4 0 Pinus n… 950330 26623 3013 "recycler}}. <span class=… 2020-03-30…
#> 5 0 Pin syl… 121562 13725 1611 "<span class=\"searchmatc… 2020-03-22…
#> 6 0 Pinus h… 117280 22257 2671 "<span class=\"searchmatc… 2020-05-07…
#> 7 0 Pin par… 138378 8763 916 "<span class=\"searchmatc… 2020-04-26…
#> 8 0 Pinus s… 776950 11662 1628 "les articles homonymes, … 2020-04-14…
#> 9 0 Pinus m… 2480854 21747 2310 "<span class=\"searchmatc… 2019-02-25…
#> 10 0 Pinus u… 3208429 6316 720 "significations, voir Pin… 2020-03-09…
```
## wikicommons
lower level
```r
pg <- wt_wiki_page("https://commons.wikimedia.org/wiki/Abelmoschus")
res <- wt_wikicommons_parse(pg)
res$common_names[1:3]
#> [[1]]
#> [[1]]$name
#> [1] "okra"
#>
#> [[1]]$language
#> [1] "en"
#>
#>
#> [[2]]
#> [[2]]$name
#> [1] "مسكي"
#>
#> [[2]]$language
#> [1] "ar"
#>
#>
#> [[3]]
#> [[3]]$name
#> [1] "Abelmoş"
#>
#> [[3]]$language
#> [1] "az"
```
higher level
```r
res <- wt_wikicommons("Abelmoschus")
res$classification
#> # A tibble: 15 x 2
#> rank name
#> <chr> <chr>
#> 1 Domain "Eukaryota"
#> 2 unranked "Archaeplastida"
#> 3 Regnum "Plantae"
#> 4 Cladus "angiosperms"
#> 5 Cladus "eudicots"
#> 6 Cladus "core eudicots"
#> 7 Cladus "superrosids"
#> 8 Cladus "rosids"
#> 9 Cladus "eurosids II"
#> 10 Ordo "Malvales"
#> 11 Familia "Malvaceae"
#> 12 Subfamilia "Malvoideae"
#> 13 Tribus "Hibisceae"
#> 14 Genus "Abelmoschus"
#> 15 Authority " Medik. (1787)"
res$common_names
#> # A tibble: 19 x 2
#> name language
#> <chr> <chr>
#> 1 okra en
#> 2 مسكي ar
#> 3 Abelmoş az
#> 4 Bamja bs
#> 5 Ibiškovec cs
#> 6 Bisameibisch de
#> 7 Okrat fi
#> 8 Abelmosco gl
#> 9 Abelmošus hr
#> 10 Ybiškė lt
#> 11 അബെൽമോസ്കസ് ml
#> 12 Абельмош mrj
#> 13 Abelmoskusslekta nn
#> 14 Piżmian pl
#> 15 Абельмош ru
#> 16 Okrasläktet sv
#> 17 Абельмош udm
#> 18 Chi Vông vang vi
#> 19 黄葵属 zh
```
search
```r
wt_wikicommons_search(query = "Pinus")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 270
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <lgl> <int> <chr> <chr>
#> 1 0 Pinus sylvestris 9066 NA 0 "" 2020-05-13T19:…
#> 2 0 Pinus ponderosa 250435 NA 0 "" 2020-04-18T15:…
#> 3 0 Pinus nigra 64703 NA 0 "" 2018-03-06T10:…
#> 4 0 Pinus 82071 NA 0 "" 2017-05-28T10:…
#> 5 0 Pinus mugo 132442 NA 0 "" 2019-07-26T09:…
#> 6 0 Rogów Arboretum 10563490 NA 0 "" 2020-01-01T13:…
#> 7 0 Pinus contorta 186918 NA 0 "" 2020-01-19T19:…
#> 8 0 Anacortes Community F… 2989013 NA 0 "" 2014-12-10T15:…
#> 9 0 Pinus halepensis 172181 NA 0 "" 2018-05-05T10:…
#> 10 0 Pinus brutia 139389 NA 0 "" 2014-11-23T11:…
```
## wikispecies
lower level
```r
pg <- wt_wiki_page("https://species.wikimedia.org/wiki/Malus_domestica")
res <- wt_wikispecies_parse(pg, types = "common_names")
res$common_names[1:3]
#> [[1]]
#> [[1]]$name
#> [1] "Ябълка"
#>
#> [[1]]$language
#> [1] "български"
#>
#>
#> [[2]]
#> [[2]]$name
#> [1] "Poma, pomera"
#>
#> [[2]]$language
#> [1] "català"
#>
#>
#> [[3]]
#> [[3]]$name
#> [1] "jabloň domácí"
#>
#> [[3]]$language
#> [1] "čeština"
```
higher level
```r
res <- wt_wikispecies("Malus domestica")
res$classification
#> # A tibble: 8 x 2
#> rank name
#> <chr> <chr>
#> 1 Superregnum Eukaryota
#> 2 Regnum Plantae
#> 3 Cladus Angiosperms
#> 4 Cladus Eudicots
#> 5 Cladus Core eudicots
#> 6 Cladus Rosids
#> 7 Cladus Eurosids I
#> 8 Ordo Rosales
res$common_names
#> # A tibble: 22 x 2
#> name language
#> <chr> <chr>
#> 1 Ябълка български
#> 2 Poma, pomera català
#> 3 jabloň domácí čeština
#> 4 Apfel Deutsch
#> 5 Μηλιά Ελληνικά
#> 6 Apple English
#> 7 Manzano español
#> 8 Aed-õunapuu eesti
#> 9 Tarhaomenapuu suomi
#> 10 Aapel Nordfriisk
#> # … with 12 more rows
```
search
```r
wt_wikispecies_search(query = "Pinus")
#> $batchcomplete
#> [1] ""
#>
#> $continue
#> $continue$sroffset
#> [1] 10
#>
#> $continue$continue
#> [1] "-||"
#>
#>
#> $query
#> $query$searchinfo
#> $query$searchinfo$totalhits
#> [1] 515
#>
#>
#> $query$search
#> # A tibble: 10 x 7
#> ns title pageid size wordcount snippet timestamp
#> <int> <chr> <int> <int> <int> <chr> <chr>
#> 1 0 Pinus 1.74e4 5737 784 "Familia: Pinaceae Ge… 2020-06-06…
#> 2 0 Pinus halepe… 4.51e4 4047 580 "Pinaceae Genus: <spa… 2019-12-20…
#> 3 0 Pinus pinea 4.51e4 1949 406 "Familia: Pinaceae Ge… 2019-10-19…
#> 4 0 Pinus veitch… 1.34e6 1450 181 "Familia: Pinaceae Ge… 2019-07-19…
#> 5 0 Pinus pumila 7.35e4 1395 189 "Pinaceae Genus: <spa… 2019-07-14…
#> 6 0 Pinus subg. … 3.01e5 358 27 "Pinaceae Genus: <spa… 2019-11-24…
#> 7 0 Pinus clausa 4.50e4 1552 208 "Pinaceae Genus: <spa… 2019-08-15…
#> 8 0 Pinus pseudo… 1.48e6 2114 310 "Genus: <span class=\… 2020-05-21…
#> 9 0 Pinus pinast… 1.32e6 2764 379 "Pinaceae Genus: <spa… 2019-12-20…
#> 10 0 Pinus nigra … 3.27e5 1799 138 "Genus: <span class=\… 2020-03-02…
```
|
/scratch/gouwar.j/cran-all/cranData/wikitaxa/vignettes/wikitaxa.Rmd
|
#' Combine new results for a query with previously downloaded results
#'
#' @seealso [perform_query()]
#'
#' @param old The [query_tbl] of previous results
#' @param new The [query_tbl] of new results from the server
#'
#' @return A new [query_tbl] of the appropriate subclass, depending on whether
#' the batch is complete.
#'
#' @keywords internal
append_query_result <- function(old, new) {
UseMethod("append_query_result")
}
#' @export
append_query_result.complete <- function(old, new) {
new_query_tbl(
dplyr::bind_rows(old, new),
request = get_request(old),
continue = get_continue(new),
batchcomplete = get_batchcomplete(new),
class = query_tbl_subclass(new)
)
}
#' @export
append_query_result.incomplete <- function(old, new) {
new_query_tbl(
merge_tbl_cols(old, new),
request = get_request(old),
continue = get_continue(new),
batchcomplete = get_batchcomplete(new),
class = query_tbl_subclass(new)
)
}
#' @export
append_query_result.final <- function(old, new) {
rlang::abort(
"Attempting to append new results to a final query. There shouldn't be new results!",
old = old,
new = new
)
}
#' @export
append_query_result.query_tbl <- function(old, new) {
if (nrow(old) > 0) {
rlang::abort(
      glue::glue(
        "`append_query_result.query_tbl` called on a non-empty query_tbl. ",
        "This method should only be called on the initial condition of the ",
        "`continue_query` final loop."
)
)
}
new
}
merge_tbl_cols <- function(old, new) {
cols_to_merge <- intersect(list_cols(old), list_cols(new))
new_cols <- purrr::map(cols_to_merge, \(col) merge_col(col, old, new))
names(new_cols) <- cols_to_merge
dplyr::mutate(old, !!!new_cols)
}
merge_col <- function(col, old, new) {
start <- nrow(old) - nrow(new) + 1
end <- nrow(old)
bounds <- start:end
old[[col]][bounds] <- purrr::map2(old[[col]][bounds], new[[col]], dplyr::bind_rows)
old[[col]]
}
list_cols <- function(tbl) {
tbl |>
purrr::map(rlang::is_list) |>
purrr::keep(rlang::is_true) |>
names()
}
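# A minimal sketch (hypothetical data, not run) of how `merge_col()` completes
# an incomplete list-column: the rows of `old` that correspond to the pages in
# `new` gain the extra nested rows returned by the continuation request.
#
#   old <- tibble::tibble(
#     title = c("A", "B"),
#     categories = list(tibble::tibble(cat = "X"), tibble::tibble(cat = "Y"))
#   )
#   new <- tibble::tibble(categories = list(tibble::tibble(cat = "Z")))
#   merge_col("categories", old, new)
#   #> list(tibble(cat = "X"), tibble(cat = c("Y", "Z")))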
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/append-query-result.R
|
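# Work in progress: find the revision that inserted or deleted a given term on
# a page, using either a linear scan or a binary search over the page's
# revision history. Several helpers below (.linear_blame_one, .binary_blame_one,
# .scan_highlight) are partial stubs.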
blame <- function(pages,
term,
id_type = c("pageid", "title"),
search_for = c("insertion", "deletion"),
search_strategy = c("linear", "binary"),
direction = c("back", "forward"),
from = NULL,
to = NULL,
skip = NULL,
language = "en") {
id_type <- rlang::arg_match(id_type)
search_for <- rlang::arg_match(search_for)
search_strategy <- rlang::arg_match(search_strategy)
direction <- rlang::arg_match(direction)
iterator <- tibble::tibble(pages, from, to)
  if (search_strategy == "binary" && !is.null(skip)) {
warning("Ignored parameter `skip`: only relevant when search_strategy == 'linear'")
}
blames <-
purrr::pmap(
iterator,
\(page, from, to) .blame_one(
search_strategy,
page,
term,
id_type,
search_for,
direction,
from,
to,
language
)
    )
  blames
}
.blame_one <- function(search_strategy, ...) {
switch(
search_strategy,
linear = .linear_blame_one(...),
binary = .binary_blame_one(...)
)
}
.linear_blame_one <- function(page, term, id_type, search_for, direction, from, to, language) {
  # Not yet implemented: scan the revision history one revision at a time in
  # `direction`, stopping at the first revision where `term` was inserted or
  # deleted.
}
.binary_blame_one <- function(page, term, id_type, search_for, direction, from, to, language) {
  all_revisions <- .get_all_revids(page, id_type, from, to, language)
  n_init <- length(all_revisions$revid)
  # Set the 'found' parameter to kick off the iteration
  found <- if (search_for == "deletion") TRUE else FALSE
  curr_idx <- n_init %/% 2
  # Not yet implemented: binary search over the revision ids, narrowing the
  # window with .compute_boundaries() until the responsible revision is found.
}
.get_all_revids <- function(page, id_type, from, to, language) {
wiki_action_request(language = language) %>%
query_list_pages(
"revisions",
      "{id_type}" := page,
rvprop = "ids",
rvlimit = "max",
# By default, the API lists revisions in reverse chronological order, so
# the "from" revision should be the *final* revision retrieved, and the
# "to" revision should be the first.
rvend = from,
rvstart = to
) %>%
retrieve_all()
}
.make_detector <- function(main_diff_type, highlight_type) {
detect_term_in_diff <- function(diff, term) {
if (diff$type == main_diff_type) {
stringr::str_detect(diff$text, term)
} else if (diff$type == 3 | diff$type == 5) {
.scan_highlight(diff, term, highlight_type)
} else {
FALSE
}
  }
  detect_term_in_diff
}
.scan_highlight <- function(diff, term, highlight_type) {
  # diff: a diff object
  # term: the term being searched for
  # highlight_type: the type of highlight to search the term in
  # Not yet implemented: return FALSE until the highlight scan is written.
  FALSE
}
.detect_deletion <- .make_detector(main_diff_type = 2, highlight_type = 1)
.detect_insertion <- .make_detector(main_diff_type = 1, highlight_type = 0)
.compute_boundaries <- function(boundaries, action=c("branch_left","try_right")) {
action <- rlang::arg_match(action)
upper <- boundaries$upper
lower <- boundaries$lower
if (action == "branch_left") {
delta <- (upper - lower + 1) %/% 2
list(upper=upper-delta, lower=lower)
} else {
delta <- (upper - lower + 1)
list(upper=upper+delta, lower=lower+delta)
}
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/blame.R
|
datetime_for_url <- function(x, .default = NULL, .precision = "ymd") {
x_sym <- rlang::enexpr(x)
if (rlang::is_null(x)) {
return(.default)
}
withCallingHandlers(
warning = function(cnd) {
rlang::abort(
glue::glue("Unparseable dates in `{stringr::str_trunc(rlang::expr_deparse(x_sym)[[1]], 25)}`")
)
},
{datetimes <- lubridate::as_datetime(x)}
)
lubridate::format_ISO8601(datetimes, precision = .precision)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/datetime-for-url.R
|
#' Generate pages that meet certain criteria, or which are related to a set of
#' known pages by certain properties
#'
#' Many of the endpoints on the Action API can be used as `generators`. Use
#' [list_all_generators()] to see a complete list. The main advantage of using a
#' generator is that you can chain it with calls to [query_page_properties()] to
#' find out specific information about the pages. This is not possible for
#' queries constructed using [query_list_pages()].
#'
#' There are two kinds of `generator`: list-generators and prop-generators. If
#' using a prop-generator, then you need to use a [query_by_()] function to tell
#' the API where to start from, as shown in the examples.
#'
#' To set additional parameters to a generator, prepend the parameter with "g".
#' For instance, to set a limit of 10 to the number of pages returned by the
#' `categorymembers` generator, set the parameter `gcmlimit = 10`.
#'
#' @param .req A httr2_request, e.g. generated by `wiki_action_request`
#' @param generator The generator module you wish to use. Most
#' [list](https://www.mediawiki.org/wiki/API:Lists) and
#' [property](https://www.mediawiki.org/wiki/API:Properties) modules can be
#' used, though not all.
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Additional parameters to the
#' generator
#'
#' @return [query_generate_pages]: The modified request, which can be passed to [next_batch] or
#' [retrieve_all] as appropriate.
#'
#' [list_all_generators]: a [tibble][tibble::tbl_df] of all the available generator
#' modules. The `name` column gives the name of the generator, while the
#' `group` column indicates whether the generator is based on a list module
#' or a property module. Generators based on property modules can only be
#' added to a query if you have already used [query_by_] to specify which
#' pages' properties should be generated.
#' @export
#'
#' @examples
#' # Search for articles about seagulls
#' seagulls <- wiki_action_request() %>%
#' query_generate_pages("search", gsrsearch = "seagull") %>%
#' next_batch()
#'
#' seagulls
query_generate_pages <- function(.req, generator, ...) {
group <- check_generator(generator)
# TODO: check_params
if (group == "prop" && !is_prop_query(.req)) {
rlang::abort(
glue::glue("{generator} is based on a 'property' endpoint; use `query_by_` to specify the starting pages before adding the generator to the query"),
class = "malformed_generator"
)
}
new_generator_query(.req, generator, ...)
}
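# A sketch (not run) of passing "g"-prefixed parameters to a generator, here
# limiting the `categorymembers` generator to 10 pages per batch:
#
#   wiki_action_request() %>%
#     query_generate_pages(
#       "categorymembers",
#       gcmtitle = "Category:Physics", gcmlimit = 10
#     ) %>%
#     next_batch()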
#' @rdname query_generate_pages
#' @export
list_all_generators <- function() {
schema_query_modules %>%
dplyr::filter(generator == TRUE) %>%
dplyr::select(name, group)
}
#' Constructor for generator query type
#'
#' Construct a new query to a [generator
#' module](https://www.mediawiki.org/wiki/API:Query#Example_6:_Generators) of
#' the Action API. This low-level constructor only performs basic type-checking.
#' It is your responsibility to ensure that the chosen `generator` is an
#' existing API endpoint, and that you have composed the query correctly. For
#' a more user-friendly interface, use [query_generate_pages].
#'
#' @param .req A [`query/action_api/httr2_request`][wiki_action_request] object,
#' or a generator query as returned by this function.
#' @param generator The generator to add to the query. If the generator is based
#' on a [property module](https://www.mediawiki.org/wiki/API:Properties), then
#' `.req` must be a subtype of
#' [`prop/query/action_api/httr2_request`][new_prop_query]. If the generator
#' is based on a [list module](https://www.mediawiki.org/wiki/API:Lists), then
#' `.req` must subclass
#' [`query/action_api/httr2_request`][wiki_action_request] directly.
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Further parameters to the generator
#'
#' @keywords low_level_action_api
#'
#' @return The output type depends on the input. If `.req` is a
#' [`query/action_api/httr2_request`][wiki_action_request], then the output
#' will be a `generator/query/action_api/httr2_request`. If `.req` is a
#' [`prop/query/action_api/httr2_request`][new_prop_query], then the return
#' object will be a subclass of the passed request, with "generator" as the
#' first term in the class vector, i.e.
#' `generator/(titles|pageids|revids)/prop/query/action_api/httr2_request`.
#' @export
#' @examples
#' # Build a generator query using a list module
#' # List all members of Category:Physics on English Wikipedia
#' physics <- wiki_action_request() %>%
#' new_generator_query("categorymembers", gcmtitle = "Category:Physics")
#'
#' # Build a generator query on a property module
#' # Generate the pages that are linked to Albert Einstein's page on English
#' # Wikipedia
#' einstein_categories <- wiki_action_request() %>%
#' new_prop_query("titles", "Albert Einstein") %>%
#' new_generator_query("iwlinks")
#'
new_generator_query <- function(.req, generator, ...) {
UseMethod("new_generator_query")
}
#' @export
new_generator_query.generator <- function(.req, generator, ...) {
req <- set_action(.req, "generator", generator, ...)
req
}
#' @export
new_generator_query.prop <- function(.req, generator, ...) {
NextMethod()
}
#' @export
new_generator_query.list <- function(.req, generator, ...) {
incompatible_query_error("generator", "list")
}
#' @export
new_generator_query.query <- function(.req, generator, ...) {
req <- set_action(.req, "generator", generator, ...)
class(req) <- c("generator", class(req))
req
}
is_generator_query <- function(.req) {
is_query_subtype(.req, "generator")
}
is_generator_module <- function(module) {
result <- schema_query_modules %>%
dplyr::filter(generator == TRUE) %>%
dplyr::group_by(group) %>%
dplyr::summarise(is_generator = module %in% name) %>%
dplyr::filter(is_generator == TRUE)
structure(
rlang::is_true(result$is_generator),
group = result$group
)
}
check_generator <- function(module) {
result <- is_generator_module(module)
if (!result) {
rlang::abort(
glue::glue("`{module}` cannot be used as a generator with the Action API, though it may be valid as a property or list query"),
class = "unknown_module_error"
)
} else {
group <- attr(result, "group", exact = TRUE)
invisible(group)
}
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/generator-query-type.R
|
#' Search for insertions, deletions or relocations of text between two versions
#' of a Wikipedia page
#'
#' Any two revisions of a Wikipedia page can be compared using the 'diff' tool.
#' The tool compares the 'from' revision to the 'to' revision, looking for
#' insertions, deletions or relocations of text. This operation can be performed
#' in any order, across any span of revisions.
#'
#' @param from Vector of revision ids
#' @param to Vector of revision ids
#' @param language Vector of two-letter language codes (will be recycled if
#' length==1)
#' @param simplify logical: should R simplify the result (see [return])
#'
#' @return The return value depends on the `simplify` parameter.
#' * If `simplify` == TRUE: A list of [tibble::tbl_df] objects the same
#' length as `from` and `to`. Most of the response data is stripped away,
#' leaving just the textual differences between the revisions, their location,
#' type and 'highlightRanges' if the textual differences are complicated.
#' * If `simplify` == FALSE: A list the same length as `from` and `to`
#' containing the full [wikidiff2
#' response](https://www.mediawiki.org/wiki/API:REST_API/Reference#Response_schema_3)
#' for each pair of revisions. This response includes additional data for
#' displaying diffs onscreen.
#' @export
#'
#' @examples
#' # Compare revision 847170467 to 851733941 on English Wikipedia
#' get_diff(847170467, 851733941)
#'
#' # The function is vectorised, so you can compare multiple pairs of revisions
#' # in a single call
#' # See diffs for the last two revisions of the Main Page
#' revisions <- wiki_action_request() %>%
#' query_by_title("Main Page") %>%
#' query_page_properties(
#' "revisions",
#' rvlimit = 2, rvprop = "ids", rvdir = "older"
#' ) %>%
#' next_result() %>%
#' tidyr::unnest(cols = c(revisions)) %>%
#' dplyr::mutate(diffs = get_diff(from = parentid, to = revid))
#' revisions
get_diff <- function(from, to, language = "en", simplify = TRUE) {
if (!rlang::is_scalar_logical(simplify)) {
rlang::abort("`simplify` must be either TRUE or FALSE")
}
response_type <- if (simplify) "wikidiff2" else NULL
get_rest_resource(
"revision", from, "compare", to,
language = language, response_type = response_type)
}
diff_to_tbl <- function(diff_list) {
  purrr::map(diff_list, simplify_diff) %>%
    dplyr::bind_rows() %>%
    # type 0 rows are unchanged context lines; keep only real differences
    dplyr::filter(type != 0)
}
simplify_diff <- function(diff) {
  diff <- purrr::modify_at(diff, "highlightRanges", dplyr::bind_rows)
  purrr::list_flatten(diff)
}
#' @export
#' @describeIn parse_response Simplify a wikidiff2 response to a dataframe of
#' textual differences, discarding display data
parse_response.wikidiff2 <- function(response) {
diff_list <- purrr::map(response, "diff")
diffs <- purrr::map(diff_list, diff_to_tbl)
diffs
}
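# Sketch of the simplified output (assuming a typical wikidiff2 response): with
# `simplify = TRUE`, each comparison yields a tibble with one row per changed
# line, e.g. columns `type` and `text` plus any flattened `highlightRanges`
# fields; type 0 (unchanged context) rows are dropped.
#
#   diffs <- get_diff(847170467, 851733941)
#   diffs$type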
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/get-diff.R
|
#' Count how many times Wikipedia articles have been edited
#'
#' @param title A vector of article titles
#' @param type The [type of edit to
#' count](https://www.mediawiki.org/wiki/API:REST_API/Reference#Parameters_12)
#' @param from Optional: a vector of revision ids
#' @param to Optional: a vector of revision ids
#' @param language Vector of two-letter language codes for Wikipedia editions
#'
#' @return A [tibble::tbl_df] with two columns:
#' * 'count': integer, the number of edits of the given type
#' * 'limit': logical, whether the 'count' exceeds the API's limit. Each type of
#' edit has a different limit. If the 'count' exceeds the limit, then the
#' limit is returned as the count and 'limit' is set to TRUE
#' @export
#'
#' @examples
#' # Get the number of edits made by auto-confirmed editors to a page between
#' # revisions 384955912 and 406217369
#' get_history_count("Jupiter", "editors", 384955912, 406217369)
#'
#' # Compare which authors have the most edit activity
#' authors <- tibble::tribble(
#' ~author,
#' "Jane Austen",
#' "William Shakespeare",
#' "Emily Dickinson"
#' ) %>%
#' dplyr::mutate(get_history_count(author))
#' authors
get_history_count <- function(
title,
type = c("edits", "anonymous", "bot", "editors", "minor", "reverted"),
from = NULL,
to = NULL,
language = "en") {
type <- rlang::arg_match(type)
if (xor(is.null(from), is.null(to))) {
rlang::abort("If using `from` and `to`, then both must be supplied")
}
if (!is.null(from) && !(type == "edits" || type == "editors")) {
rlang::abort("If using `from` and `to`, you can only request counts for 'edits' or 'editors'")
}
get_rest_resource(
"page", title, "history", "counts", type, from = from, to = to,
language = language, response_type = "history_count_object"
)
}
#' @exportS3Method
parse_response.history_count_object <- function(response) {
dplyr::bind_rows(!!!response)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/get-history-count.R
|
#' Get data about pages from their titles
#'
#' @description `get_latest_revision()` returns metadata about the latest
#' revision of each
#' page.
#'
#' `get_page_html()` returns the rendered html for each
#' page.
#'
#' `get_page_summary()` returns metadata about the latest revision, along
#' with the page description and a summary extracted from the opening
#' paragraph
#'
#' `get_page_related()` returns summaries for 20 related pages for each
#' passed page
#'
#' `get_page_talk()` returns structured talk page content for each
#'   title. Make sure to use the title of the Talk page itself, e.g.
#' "Talk:Earth" rather than "Earth"
#'
#' `get_page_langlinks()` returns interwiki links for each
#' title
#'
#' @param title A character vector of page titles.
#' @param language A character vector of two-letter language codes, either of
#' length 1 or the same length as `title`
#'
#' @return A list, vector or tibble, the same length as `title`, with the
#' desired data.
#'
#' @name page_vector_functions
#'
#' @examples
#' # Get language links for a known page on English Wikipedia
#' get_page_langlinks("Charles Harpur")
#'
#' # Many of these functions return a list of data frames. Tidyr can be useful.
#' # Get 20 related pages for German City
#' cities <- tibble::tribble(
#' ~city,
#' "Berlin",
#' "Darmstadt",
#' ) %>%
#' dplyr::mutate(related = get_page_related(city))
#' cities
#'
#' # Unest to get one row per related page:
#' tidyr::unnest(cities, "related")
#'
#' # The functions are vectorised over title and language
#' # Find all articles about Joanna Baillie, and retrieve summary data for
#' # the first two.
#' baillie <- get_page_langlinks("Joanna Baillie") %>%
#' dplyr::slice(1:2) %>%
#' dplyr::mutate(get_page_summary(title = title, language = code))
#' baillie
NULL
#' @rdname page_vector_functions
#' @export
get_latest_revision <- function(title, language = "en") {
get_rest_resource(
"page", "title", title,
language = language, api = "wikimedia", response_type = "revision_metadata"
)
}
#' @export
parse_response.revision_metadata <- function(response) {
purrr::map(response, "items") %>%
purrr::map(1) %>%
purrr::list_transpose() %>%
tibble::as_tibble()
}
#' @rdname page_vector_functions
#' @export
get_page_html <- function(title, language = "en") {
get_rest_resource(
"page", "html", title,
language = language, api = "wikimedia", response_format = "html"
)
}
#' @rdname page_vector_functions
#' @export
get_page_summary <- function(title, language = "en") {
get_rest_resource(
"page", "summary", title,
language = language, api = "wikimedia",
response_type = "summary"
)
}
#' @export
parse_response.summary <- function(response) {
flatten_bind(response)
}
#' @rdname page_vector_functions
#' @export
get_page_related <- function(title, language = "en") {
get_rest_resource(
"page", "related", title,
language = language, api = "wikimedia",
response_type = "summary_array"
)
}
#' @export
parse_response.summary_array <- function(response) {
purrr::map(response, "pages") %>% purrr::map(flatten_bind)
}
#' @rdname page_vector_functions
#' @export
get_page_talk <- function(title, language = "en") {
talk_pattern <- "^\\w+:"
if (!all(stringr::str_detect(title, talk_pattern))) {
rlang::abort("One or more titles do not begin with 'Talk:' or similar",
class="bad_title")
}
get_rest_resource(
"page", "talk", title,
language = language, api = "wikimedia"
)
}
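# Usage sketch (not run): note the explicit namespace prefix in the title.
#
#   get_page_talk("Talk:Earth")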
#' @rdname page_vector_functions
#' @export
get_page_langlinks <- function(title, language = "en") {
get_rest_resource(
"page", title, "links", "language",
language = language, response_type = "page_language_object"
)
}
#' @export
parse_response.page_language_object <- function(response) {
purrr::map(response, dplyr::bind_rows)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/get-page-data.R
|
#' Perform a query using the [MediaWiki Action
#' API](https://www.mediawiki.org/wiki/Special:MyLanguage/API:Main_page)
#'
#' @description `next_result()` sends exactly one request to the server.
#'
#' `next_batch()` requests results from the server until the data is complete
#' for the latest batch of pages in the result.
#'
#' `retrieve_all()` keeps requesting data until all the pages from the query
#' have been returned.
#'
#' @details It is rare that a query can be fulfilled in a single request to the
#' server. There are two ways a query can be incomplete. All queries return a
#' list of pages as their result. The result may be incomplete because not all
#' the data for each page has been returned. In this case the *batch* is
#' incomplete. Or the data may be complete for all pages, but there are more
#' pages available on the server. In this case the query can be *continued*.
#' Thus the three functions for `next_result()`, `next_batch()` and
#' `retrieve_all()`.
#'
#' @name get_query_results
#'
#' @param x The query. Either a [wiki_action_request] or a [query_tbl].
#'
#' @return A [query_tbl] containing results of the query. If `x` is a
#' [query_tbl], then the function will return a new data with the new data
#' appended to it. If `x` is a [wiki_action_request], then the returned
#' [query_tbl] will contain the necessary data to supply future calls to
#' `next_result()`, `next_batch()` or `retrieve_all()`.
#'
#' @examples
#' # Try out a request using next_result(), then retrieve the rest of the
#' # results. The cllimit parameter limits the first request to 40 results.
#' preview <- wiki_action_request() %>%
#' query_by_title("Steve Wozniak") %>%
#' query_page_properties("categories", cllimit = 40) %>%
#' next_result()
#' preview
#'
#' all_results <- retrieve_all(preview)
#' all_results
#'
#' # tidyr is useful for list-columns.
#' all_results %>%
#' tidyr::unnest(cols=c(categories), names_sep = "_")
NULL
#' @rdname get_query_results
#' @export
next_result <- function(x) {
UseMethod("next_result")
}
#' @export
next_result.query_tbl <- function(x) {
continue <- get_continue(x)
request <- get_request(x)
result_tbl <- perform_query(request, continue)
append_query_result(old = x, new = result_tbl)
}
#' @export
next_result.query <- function(x) {
perform_query(x, continue = NULL)
}
#' @rdname get_query_results
#' @export
next_batch <- function(x) {
UseMethod("next_batch")
}
#' @export
next_batch.query <- function(x) {
first_result <- perform_query(x, continue = NULL)
complete_batch <- continue_query(first_result, is_incomplete)
complete_batch
}
#' @export
next_batch.query_tbl <- function(x) {
  continue_query(x, is_incomplete)
}
#' @rdname get_query_results
#' @export
retrieve_all <- function(x) {
UseMethod("retrieve_all")
}
#' @export
retrieve_all.query <- function(x) {
first_result <- perform_query(x, continue = NULL)
all_results <- continue_query(first_result, is_not_final)
all_results
}
#' @export
retrieve_all.query_tbl <- function(x) {
  continue_query(x, is_not_final)
}
#' Query the Action API continually until a continuation condition no longer
#' holds.
#'
#' @keywords internal
#'
#' @param last_result The query_tbl of results to complete
#' @param predicate The while condition. Results will be continually
#'   requested until this evaluates 'false'.
#' @param max_requests The maximum number of requests to send to the server
#'   before halting with a message.
#'
#' @return A query_tbl: an S3 dataframe that is a subclass of tibble::tibble
continue_query <- function(last_result, predicate, max_requests = 1000) {
results_inc <- 100
results <- vector("list", results_inc)
max_idx <- results_inc
next_idx <- 1
results[[next_idx]] <- last_result
while (predicate(last_result)) {
next_idx <- next_idx + 1
if (next_idx > max_requests) {
rlang::inform(
        glue::glue("Query halted after {max_requests} requests. Continue the query with `next_result`, `next_batch` or `retrieve_all`.")
)
break()
}
if (next_idx > max_idx) {
results <- c(results, vector("list", results_inc))
max_idx <- max_idx + results_inc
}
request <- get_request(last_result)
continue <- get_continue(last_result)
check_continue(continue)
last_result <- perform_query(request, continue)
results[[next_idx]] <- last_result
}
results %>%
purrr::keep(is_not_null) %>%
purrr::reduce(append_query_result, .init = empty_query_tbl())
}
check_continue <- function(continue) {
  if (
    !(
      rlang::is_list(continue) &&
        length(continue) > 1 &&
        rlang::has_name(continue, "continue")
    )
  ) {
rlang::abort(
"Invalid continue parameters",
class = "no_continue",
wikkitidy_continue = continue
)
}
}
is_not_final <- function(query_tbl) {
query_tbl_subclass(query_tbl) != "final"
}
is_incomplete <- function(query_tbl) {
query_tbl_subclass(query_tbl) == "incomplete"
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/get-query-results.R
|
get_random_page <- function(n, format = c("title", "html", "summary", "related"), language = "en") {
format <- rlang::arg_match(format)
response_format <- switch(
format,
"title" = ,
"summary" = ,
"related" = "json",
"html" = "html"
)
format_n <- rep(format, n)
response <- get_rest_resource(
"page", "random", format_n,
language = language, api = "wikimedia", response_format = response_format
)
response
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/get-random-page.R
|
#' Get resources from one of Wikipedia's [two REST
#' APIs](https://www.mediawiki.org/wiki/API)
#'
#' This function is intended for developer use. It makes it easy to quickly
#' generate vectorised calls to the different APIs.
#'
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> The URL components and query
#' parameters of the desired resources. Names of the arguments are ignored.
#' The function follows the [tidyverse vector recycling
#' rules](https://vctrs.r-lib.org/reference/vector_recycling_rules), so all
#' vectors must have the same length or be of length one. Unnamed arguments
#' will be appended to the URL path; named arguments will be added as query
#' parameters
#' @param language Character vector of two-letter language codes
#' @param api The desired REST api:
#' "[core](https://www.mediawiki.org/wiki/API:REST_API)",
#' "[wikimedia](https://www.mediawiki.org/wiki/Wikimedia_REST_API)",
#' "[wikimedia_org](https://wikimedia.org/api/rest_v1/)", or
#' "[xtools](https://www.mediawiki.org/wiki/XTools/API)"
#' @param response_format The expected Content-Type of the response. Currently "html" and
#' "json" are supported.
#' @param response_type The schema of the response. If supplied, the results will
#' be parsed using the schema.
#' @param failure_mode How to respond if a request fails:
#'   * "error", the default: raise an error
#'   * "quiet": silently return NA
#'
#' @return A list of responses. If `response_format` == "json", then the responses
#' will be simple R lists. If `response_format` == "html", then the responses
#' will `xml_document` objects. If `response_type` is supplied, the response
#' will be coerced into a [tibble::tbl_df] or vector using the relevant schema.
#' If the response is a 'scalar list' (i.e. a list of length == 1), then it is
#' silently unlisted, returning a simple list or vector.
get_rest_resource <- function(
..., language = "en",
api = c("core", "wikimedia", "wikimedia_org", "xtools"),
response_format = c("json", "html"),
response_type = NULL,
failure_mode = c("error", "quiet")) {
dots <- rlang::list2(...) %>%
purrr::keep(\(x) !is.null(x)) %>%
purrr::map_if(is.character, str_for_rest)
pipeline <- list()
api <- rlang::arg_match(api)
pipeline$req_fn <- switch(api,
"core" = core_rest_request,
"wikimedia" = wikimedia_rest_request,
"wikimedia_org" = wikimedia_org_rest_request,
"xtools" = xtools_rest_request
)
failure_mode <- rlang::arg_match(failure_mode)
pipeline$error_fn <- switch(failure_mode,
"error" = NULL,
"quiet" = \(req) httr2::req_error(req, is_error = \(x) FALSE)
)
pipeline$perform_fn <- httr2::req_perform
response_format <- rlang::arg_match(response_format)
pipeline$resp_fn <- new_response_function(response_format, failure_mode)
if (!xor(is.null(response_type), rlang::is_scalar_character(response_type))) {
rlang::abort("`response_type` must be NULL or length 1")
}
params <- vctrs::vec_recycle_common(!!!dots, language = language)
get_one <- purrr::compose(!!!pipeline, .dir = "forward")
response <- purrr::pmap(params, get_one, .progress = T)
if (!is.null(response_type)) {
class(response) <- c(response_type, class(response))
response <- parse_response(response)
}
response <- if (rlang::is_scalar_list(response)) response[[1]] else response
response
}
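# For example, `get_page_html()` (see get-page-data.R in this package) is built
# on this helper roughly as (a sketch, not run):
#
#   get_rest_resource(
#     "page", "html", title,
#     language = language, api = "wikimedia", response_format = "html"
#   )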
new_response_function <- function(response_format, failure_mode) {
handler <- switch(
response_format,
"html" = httr2::resp_body_html,
"json" = httr2::resp_body_json
)
switch(
failure_mode,
"error" = handler,
"quiet" = handle_without_error(handler)
)
}
handle_without_error <- function(handler) {
function(resp) {
if (httr2::resp_is_error(resp)) {
list()
} else {
handler(resp)
}
}
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/get-rest-resource.R
|
#' List pages that meet certain criteria
#'
#' See [API:Lists](https://www.mediawiki.org/wiki/API:Lists) for available
#' list actions. Each list action returns a list of pages, typically including
#' their pageid, [namespace](https://www.mediawiki.org/wiki/Manual:Namespace)
#' and title. Individual lists have particular properties that can be requested,
#' which are usually prefaced with a two-word code based on the name of the
#' list (e.g. specific properties for the `categorymembers` list action are
#' prefixed with `cm`).
#'
#' When the request is performed, the data is returned in the body of the
#' request under the `query` object, labeled by the chosen list action.
#'
#' If you want to study the actual pages listed, it is advisable to retrieve
#' the pages directly using a generator, rather than listing their IDs using a
#' list action. When using a list action, a second request is required to get
#' further information about each page. Using a generator, you can query pages
#' and retrieve their relevant properties in a single API call.
#'
#' @param .req A httr2_request, e.g. generated by `wiki_action_request`
#' @param list The [type of list](https://www.mediawiki.org/wiki/API:Lists) to return
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Additional parameters to the query, e.g. to configure the list
#'
#' @return A request object: an S3 list with class httr2_request
#' @export
#'
#' @examples
#' # Get the ten most recently added pages in Category:Physics
#' physics_pages <- wiki_action_request() %>%
#' query_list_pages("categorymembers",
#' cmsort = "timestamp",
#' cmdir = "desc", cmtitle = "Category:Physics"
#' ) %>%
#' next_batch()
#'
#' physics_pages
query_list_pages <- function(.req, list, ...) {
check_module(list, "list")
# TODO: check_params
new_list_query(.req, list, ...)
}
#' @rdname query_list_pages
#' @export
list_all_list_modules <- function() {
schema_query_modules %>%
dplyr::filter(group == "list") %>%
dplyr::select(name)
}
#' Constructor for [list](https://www.mediawiki.org/wiki/API:Lists) queries
#'
#' This low-level constructor only performs basic type checking.
#'
#' @param .req A [`query/action_api/httr2_request`][wiki_action_request()]
#' object, or a `list/query/action_api/httr2_request` as returned by this
#' function.
#' @param list The [list module](https://www.mediawiki.org/wiki/API:Lists) to
#' add to the query
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Parameters to the list module
#'
#' @keywords low_level_action_api
#'
#' @return An object of type `list/query/action_api/httr2_request`.
#' @export
#' @examples
#' # Create a query to list all members of Category:Physics
#' physics_query <- wiki_action_request() %>%
#' new_list_query("categorymembers", cmtitle="Category:Physics")
#'
new_list_query <- function(.req, list, ...) {
UseMethod("new_list_query")
}
#' @rdname new_list_query
#' @export
new_list_query.list <- function(.req, list, ...) {
req <- set_action(.req, "list", list, ...)
req
}
#' @rdname new_list_query
#' @export
new_list_query.generator <- function(.req, list, ...) {
incompatible_query_error("list", "generator")
}
#' @rdname new_list_query
#' @export
new_list_query.prop <- function(.req, list, ...) {
incompatible_query_error("list", "prop")
}
#' @rdname new_list_query
#' @export
new_list_query.query <- function(.req, list, ...) {
req <- set_action(.req, "list", list, ...)
class(req) <- c("list", class(req))
req
}
is_list_query <- function(.req) {
is_query_subtype(.req, "list")
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/list-query-type.R
|
#' Convert a response from a Wikipedia API into a convenient format
#'
#' Wikipedia's APIs provide data using a range of different json schemas.
#' This generic function converts the data into a convenient formats for use
#' in an R data frame.
#'
#' @param response The data retrieved from Wikipedia.
#'
#' @return A vector the same length as the response. Generally, this will be
#' a simple vector, a [tibble::tbl_df] or a list of [tibble::tbl_df] objects.
#' @export
#'
#' @keywords internal
parse_response <- function(response) {
UseMethod("parse_response")
}
#' @export
#' @describeIn parse_response By default, create a list of nested tbl_dfs
parse_response.default <- function(response) {
parsed <- purrr::map(response, dplyr::bind_rows)
parsed
}
#' @export
#' @describeIn parse_response Many of the endpoints return a list of named
#' values for each page, which can easily be row-bound. They often contain
#' nested data, however, which is automatically unnested by dplyr::bind_rows.
#' Hence this more basic approach.
parse_response.row_list <- function(response) {
robust_bind(response)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/parse-response.R
|
#' Perform a single request to the Action API.
#'
#' This function is the workhorse behind the user-facing [next_result()],
#' [next_batch()] and [retrieve_all()].
#'
#' @seealso [append_query_result()]
#'
#' @param request The request object
#' @param continue The continue parameter returned by the previous request
#'
#' @return A [query_tbl()] of the results
#' @keywords internal
perform_query <- function(request, continue) {
UseMethod("perform_query")
}
#' @export
perform_query.prop <- function(request, continue) {
result <- get_result(request, continue, c("query", "pages"))
simplified_data <- purrr::list_transpose(result$x, simplify = FALSE)
result$x <- tibble::tibble(!!!simplified_data)
result_to_query_tbl(result)
}
#' @export
perform_query.list <- function(request, continue) {
result <- get_result(request, continue, c("query"))
# If more than one list module has been queried, preserve the name of the
# module. Otherwise drop it.
if (length(result$x) > 1) {
    result$x <- purrr::list_flatten(result$x, name_spec = "{outer}") %>%
dplyr::bind_rows(.id = "list_module")
} else {
result$x <- result$x[[1]] %>% dplyr::bind_rows()
}
result_to_query_tbl(result)
}
#' @export
perform_query.generator <- function(request, continue) {
result <- get_result(request, continue, c("query", "pages"))
  simplified_data <- purrr::list_transpose(result$x)
  result$x <- tibble::tibble(!!!simplified_data)
result_to_query_tbl(result)
}
get_result <- function(request, continue, pluck_params) {
resp <- request %>% httr2::req_url_query(!!!continue) %>% httr2::req_perform()
body <- httr2::resp_body_json(resp)
x <- purrr::pluck(body, !!!pluck_params)
new_continue <- purrr::pluck(body, "continue", .default = NA)
batchcomplete <- purrr::pluck(body, "batchcomplete", .default = FALSE)
class <- infer_result_type(new_continue, batchcomplete)
rlang::dots_list(x, request, continue = new_continue, batchcomplete, class, .named = TRUE)
}
infer_result_type <- function(continue, batchcomplete) {
if (rlang::is_na(continue)) {
"final"
} else if (rlang::is_false(batchcomplete)) {
"incomplete"
} else {
"complete"
}
}
result_to_query_tbl <- function(result) {
result$x <- dplyr::mutate(
result$x,
dplyr::across(
dplyr::where(rlang::is_list),
simplify_if_atomicish
)
)
result$x <- dplyr::mutate(
result$x,
dplyr::across(
dplyr::where(rlang::is_list),
\(col) purrr::map(col, robust_bind)
))
rlang::inject(new_query_tbl(!!!result))
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/perform-query.R
|
#' Add required prefix to URL parameters for MediaWiki Action API request
#'
#' @param params A character vector
#' @param prefix A character vector
#'
#' @return A character vector
#' @keywords internal
prefix_params <- function(params, prefix) {
unprefixed <- params[!startsWith(params, prefix)]
prefixed <- paste0(prefix, unprefixed)
params[!startsWith(params, prefix)] <- prefixed
params
}
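# Example (not run): only the unprefixed parameter names are altered.
#
#   prefix_params(c("sort", "cmtitle"), "cm")
#   #> c("cmsort", "cmtitle")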
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/prefix-params.R
|
#' Query the [MediaWiki Action
#' API](https://www.mediawiki.org/wiki/API:Main_page) using a vector of
#' Wikipedia pages
#'
#' These functions help you to build a query for the [MediaWiki Action
#' API](https://www.mediawiki.org/wiki/API:Main_page) if you already have a set
#' of pages that you wish to investigate. These functions can be combined with
#' [query_page_properties] to choose which properties to return for the passed
#' pages.
#'
#' If you don't already know which pages you wish to examine, you can build a
#' query to find pages that meet certain criteria using [query_list_pages] or
#' [query_generate_pages].
#'
#' @param .req A [wiki_action_request] query to modify
#' @param title A character vector of page titles
#' @param pageid A character or numeric vector of page ids
#' @param revid A character or numeric vector of revision ids
#'
#' @name query_by_
#'
#' @return A request object of type `pages/query/action_api/httr2_request`. To
#' perform the query, pass the object to [next_batch] or [retrieve_all]
#'
#' @examples
#' # Retrieve the categories for Charles Harpur's Wikipedia page
#' resp <- wiki_action_request() %>%
#' query_by_title("Charles Harpur") %>%
#' query_page_properties("categories") %>%
#' next_batch()
NULL
#' @rdname query_by_
#' @export
query_by_title <- function(.req, title) {
# TODO: check `title` parameter
new_prop_query(.req, "titles", title)
}
#' @rdname query_by_
#' @export
query_by_pageid <- function(.req, pageid) {
# TODO: check `pageid` parameter
new_prop_query(.req, "pageids", pageid)
}
#' @rdname query_by_
#' @export
query_by_revid <- function(.req, revid) {
# TODO: check `revid` parameter
new_prop_query(.req, "revids", revid)
}
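# Each helper is a thin wrapper around `new_prop_query()`; e.g. (a sketch):
#
#   wiki_action_request() %>% query_by_pageid("963273|1159171")
#
# is equivalent to
#
#   wiki_action_request() %>% new_prop_query("pageids", "963273|1159171")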
#' Constructor for the property query type
#'
#' The intended use for this query is to set the 'titles', 'pageids' or 'revids'
#' parameter, and enforce that only one of these is set. All [property modules
#' API](https://www.mediawiki.org/wiki/API:Properties) in the Action API require
#' this parameter to be set, or they require a
#' [`generator`][new_generator_query] parameter to be set instead. The
#' `prop/query` type is an abstract type representing the three possible kinds
#' of property query that do not rely on a generator (see below on the return
#' value). A complication is that a `prop/query` can *itself* be used as the
#' basis for a generator.
#'
#' @param .req A [`query/action_api/httr2_request`][wiki_action_request] object,
#' or a `prop` query object as returned by this function. This parameter is
#' covariant on the type, so you can also pass all subtypes of `prop`.
#' @param by The type of page. Allowed values are: `r PROP_SUBTYPES`
#' @param pages A string, the pages to query by, corresponding to the 'by'
#' parameter. Multiple values should be separated with "|"
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Further parameters to the
#' query
#'
#' @keywords low_level_action_api
#'
#' @return A properly qualified `prop/query` object. There are six
#' possibilities:
#' * `titles/prop/query`
#' * `pageids/prop/query`
#' * `revids/prop/query`
#' * `generator/titles/prop/query`
#' * `generator/pageids/prop/query`
#' * `generator/revids/prop/query`
#' @export
#' @examples
#' # Build a query on a set of pageids
#' # 963273 and 1159171 are Kate Bush albums
#' bush_albums_query <- wiki_action_request() %>%
#' new_prop_query("pageids", "963273|1159171")
#'
new_prop_query <- function(.req, by, pages, ...) {
UseMethod("new_prop_query")
}
PROP_SUBTYPES <- c("pageids", "titles", "revids")
#' @export
new_prop_query.prop <- function(.req, by, pages, ...) {
check_prop_subtype(.req, by)
NextMethod()
}
#' @export
new_prop_query.generator <- function(.req, by, pages, ...) {
check_prop_subtype(.req, by, .default = "non-prop generator")
req <- set_action(.req, by, pages, ...)
req
}
#' @export
new_prop_query.list <- function(.req, by, pages, ...) {
  incompatible_query_error(paste(by, "prop", sep = "/"), "list")
}
#' @export
new_prop_query.query <- function(.req, by, pages, ...) {
by <- rlang::arg_match(by, PROP_SUBTYPES)
req <- rlang::inject(
set_action(.req, !!by, pages, ...)
)
class(req) <- c(by, "prop", class(req))
req
}
check_prop_subtype <- function(.req, by, .default = NULL) {
if (prop_subtype(.req, .default) != by) {
incompatible_query_error(by, prop_subtype(.req, .default))
}
}
prop_subtype <- function(.req, .default = NULL) {
subtype <- class(.req)[which(class(.req) == "prop")-1]
if (rlang::is_empty(subtype)) {
.default
} else {
subtype
}
}
is_prop_query <- function(.req) {
rlang::inherits_all(.req, c("prop", BASE_QUERY_CLASS))
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/prop-query-type.R
|
#' Explore Wikipedia's category system
#'
#' @description These functions provide access to the
#' [CategoryMembers](https://www.mediawiki.org/wiki/API:Categorymembers)
#' endpoint of the Action API.
#'
#' [query_category_members()] builds a [generator
#' query][query_generate_pages()] to return the members of a given category.
#'
#' [build_category_tree()] finds all the pages and subcategories beneath the
#' passed category, then recursively finds all the pages and subcategories
#' beneath them, until it can find no more subcategories.
#'
#' @param .req A [query request object][wiki_action_request()]
#' @param category The category to start from. [query_category_members()]
#' accepts either a numeric pageid or the page title. [build_category_tree()]
#' accepts a vector of page titles.
#' @param namespace Only return category members from the provided namespace
#' @param type Alternative to `namespace`: the type of category member to
#' return. Multiple types can be requested using a character vector. Defaults
#' to all.
#' @param limit The number to return each batch. Max 500.
#' @param sort How to sort the returned category members. 'timestamp' sorts them
#' by the date they were included in the category; 'sortkey' by the category
#' member's unique hexadecimal code
#' @param dir The direction in which to sort them
#' @param start If `sort` == 'timestamp', only return category members from
#' after this date. The argument is parsed by [lubridate::as_date()]
#' @param end If `sort` == 'timestamp', only return category members included in
#' the category from before this date. The argument is parsed by
#' [lubridate::as_date()]
#' @param language The language edition of Wikipedia to query
#'
#' @return [query_category_members()]: A request object of type
#' `generator/query/action_api/httr2_request`, which can be passed to
#' [next_batch()] or [retrieve_all()]. You can specify which properties to
#' retrieve for each page using [query_page_properties()].
#'
#' [build_category_tree()]: A list containing two dataframes. `nodes` lists
#' all the subcategories and pages found underneath the passed categories.
#' `edges` records the connections between them. The `source` column gives the
#' pageid of the parent category, while the `target` column gives the pageid
#' of any categories, pages or files contained within the `source` category.
#' The `timestamp` records the moment when the `target` page or subcategory
#' was included in the `source` category. The two dataframes in the list can
#' be passed to [igraph::graph_from_data_frame] for network analysis.
#' @export
#'
#' @examples
#' # Get the first 10 pages in 'Category:Physics' on English Wikipedia
#' physics_members <- wiki_action_request() %>%
#' query_category_members("Physics") %>% next_batch()
#' physics_members
#'
#'
#' # Build the tree of all albums for the Melbourne band Custard
#' tree <- build_category_tree("Category:Custard_(band)_albums")
#' tree
#'
#' # For network analysis and visualisation, you can pass the category tree
#' # to igraph
#' tree_graph <- igraph::graph_from_data_frame(tree$edges, vertices = tree$nodes)
#' tree_graph
query_category_members <- function(
.req,
category,
namespace = NULL,
type = c("file", "page", "subcat"),
limit = 10,
sort = c("sortkey", "timestamp"),
dir = c("ascending", "descending", "newer", "older"),
start = NULL,
end = NULL,
language = "en"
) {
category <- id_or_title(category, prefix = "Category")
namespace <- check_namespace(namespace)
type <- rlang::arg_match(type, multiple = T) %>%
paste0(collapse = "|")
limit <- check_limit(limit, max = 500)
sort <- rlang::arg_match(sort)
dir <- rlang::arg_match(dir)
if (!is.null(start) || !is.null(end)) {
if (!sort == "timestamp") {
rlang::abort("If using `start` or `end`, you must use sort = 'timestamp'",
class = "incompatible_arguments")
}
}
timestamp_args <- process_timestamps(start, end)
query_params <- rlang::dots_list(
!!!category,
namespace,
type,
limit,
sort,
dir,
!!!timestamp_args,
.named = T
)
names(query_params) <- stringr::str_c("gcm", names(query_params))
query_generate_pages(.req, "categorymembers", !!!query_params)
}
#' @rdname query_category_members
#' @export
build_category_tree <- function(category, language = "en") {
root <- get_latest_revision(category, language)
tree <- list(
nodes = tibble::tibble(
pageid = root[["page_id"]],
ns = root[["namespace"]],
title = root[["title"]],
type = "root"
),
edges = tibble::tibble(
source = integer(),
target = integer(),
timestamp = character()
)
)
progress <- cli::cli_progress_bar("Walking subcategories:")
tree <- walk_category_tree(tree, root$page_id, language, progress)
cli::cli_progress_done(id = progress)
# strip irrelevant <query_tbl> attributes and metadata
tree <- purrr::map(tree, tibble::as_tibble)
tree
}
walk_category_tree <- function(tree, category, language, progress) {
children <- get_children(category, language, progress)
new_categories <- extract_new_categories(tree, children)
if (length(new_categories) > 0) {
walk_category_tree(merge_trees(tree, children), new_categories, language, progress)
} else {
merge_trees(tree, children)
}
}
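# The recursion bottoms out when a pass over `category` discovers no
# subcategories that are not already in `tree$nodes`; at that point the
# children found so far are merged into the tree and returned.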
get_one_children <- function(category, language = "en", progress) {
cli::cli_progress_update(id = progress, force = T)
request <- wiki_action_request(language = language) %>%
new_list_query(
"categorymembers",
cmpageid = category,
cmprop = "ids|title|type|timestamp",
cmlimit = "max"
)
children <- retrieve_all(request)
dplyr::mutate(children, source = category)
}
get_children <- function(category, language = "en", progress) {
params <- vctrs::vec_recycle_common(category, language, progress)
children <- purrr::pmap(params, get_one_children)
children <- purrr::list_rbind(children)
list(
nodes = extract_nodes(children),
edges = extract_edges(children)
)
}
extract_nodes <- function(children) {
children %>%
dplyr::select(!timestamp:source) %>%
dplyr::distinct()
}
extract_edges <- function(children) {
children %>%
dplyr::select(source, target = pageid, timestamp)
}
extract_new_categories <- function(tree, children) {
children$nodes %>%
dplyr::filter(type == "subcat") %>%
dplyr::anti_join(tree$nodes, by = "pageid") %>%
.[["pageid"]]
}
merge_trees <- function(old_tree, new_tree) {
list(
nodes = dplyr::union(old_tree$nodes, new_tree$nodes),
edges = dplyr::union(old_tree$edges, new_tree$edges)
)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/query-category-members.R
|
#' Choose properties to return for pages from the action API
#'
#' See [API:Properties](https://www.mediawiki.org/wiki/API:Properties) for a
#' list of available properties. Many have additional parameters to control
#' their behavior, which can be passed to this function as named arguments.
#'
#' [query_page_properties] is not useful on its own. It must be combined with a
#' [query_by_] function or [query_generate_pages] to specify which pages
#' properties are to be returned. It should be noted that many of the
#' [API:Properties](https://www.mediawiki.org/wiki/API:Properties) modules can
#' themselves be used as generators. If you wish to use a property module in
#' this way, then you must use [query_generate_pages], passing the name of the
#' property module as the `generator`.
#'
#' @param .req A httr2_request, e.g. generated by `wiki_action_request`
#' @param property The property to request
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Additional parameters to pass, e.g. to modify what is returned by
#' the property request
#'
#' @return A request object: an S3 list with class httr2_request
#' @export
#'
#' @examples
#' # Search for articles about seagulls and retrieve their number of
#' # watchers
#'
#' resp <- wiki_action_request() %>%
#' query_generate_pages("search", gsrsearch = "seagull") %>%
#' query_page_properties("info", inprop = "watchers") %>%
#' next_batch() %>%
#' dplyr::select(pageid, ns, title, watchers)
#' resp
query_page_properties <- function(.req, property, ...) {
check_module(property, "prop")
  set_action(.req, "prop", property, ...)
}
#' @rdname query_page_properties
#' @export
list_all_property_modules <- function() {
schema_query_modules %>%
dplyr::filter(group == "prop") %>%
dplyr::select(name)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/query-page-properties.R
|
#' Representation of Wikipedia data returned from an [Action API Query
#' module](https://www.mediawiki.org/wiki/API:Query) as a tibble, with request
#' metadata stored as attributes.
#'
#' @param x A tibble
#' @param request The httr2_request object used to generate the tibble
#' @param continue The continue parameter returned by the API
#' @param batchcomplete The batchcomplete parameter returned by the API
#'
#' @return A tibble: an S3 data.frame with class `query_tbl`.
#'
#' @keywords data_type
query_tbl <- function(x, request, continue, batchcomplete) {
request <- if (is.null(request)) NA else request
continue <- if(is.null(continue)) NA else continue
batchcomplete <- if(is.null(batchcomplete)) FALSE else batchcomplete
new_query_tbl(x, request, continue, batchcomplete)
}
QUERY_TBL_CLASS = c("query_tbl", "tbl_df", "tbl", "data.frame")
query_tbl_subclass <- function(x) {
setdiff(class(x), QUERY_TBL_CLASS)
}
# The constructor
new_query_tbl <- function(x, request, continue, batchcomplete, class=NULL) {
tibble::new_tibble(
x,
request = request,
continue = continue,
batchcomplete = batchcomplete,
class = c(class, "query_tbl")
)
}
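# Sketch (hypothetical objects): the request metadata travels with the tibble
# as attributes, so later calls can resume the query.
#
#   res <- next_batch(some_query)
#   get_continue(res)        # continue parameters for the next request
#   get_batchcomplete(res)   # TRUE once data is complete for the current batch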
#' @export
tbl_sum.query_tbl <- function(x, ...) {
url <- get_request(x)$url
c(
cli::cli_text("{.cls {paste0(class(x)[1:2], collapse = '/')}}"),
NextMethod()
)
}
#' @export
tbl_format_footer.query_tbl <- function(x, ...) {
default_footer <- NextMethod()
query_message <- if (rlang::is_na(get_continue(x))) {
cli::cli_alert_success("All results downloaded from server")
} else {
cli::cli_alert_info("There are more results on the server. Retrieve them with `next_batch()` or `retrieve_all()`")
}
batch_message <- if (rlang::is_true(get_batchcomplete(x))) {
cli::cli_alert_success("Data complete for all records")
} else {
cli::cli_alert_warning("Data not fully downloaded for last batch. Retrieve it with `next_batch()` or `retrieve_all()`.")
}
default_footer
}
validate_query_tbl <- function(x) {
tbl_var <- rlang::ensym(x)
if (!tibble::is_tibble(x)) {
rlang::abort(
glue::glue("`{tbl_var}` is not a tibble"),
class = "invalid"
)
}
continue <- get_continue(x)
if (
!(
rlang::is_na(continue) ||
(rlang::has_name(continue, "continue") && length(continue) > 1)
)
) {
rlang::abort(
glue::glue("`{tbl_var}` lacks a valid `continue` attribute"),
class = "invalid"
)
}
if (!rlang::is_scalar_logical(get_batchcomplete(x))) {
rlang::abort(
glue::glue("`{tbl_var}` lacks a valid `batchcomplete` attribute"),
class = "invalid"
)
}
if (!is_action_query(get_request(x))) {
rlang::abort(
glue::glue("`{tbl_var} lacks a valid `request` attribute"),
class = "invalid"
)
}
x
}
get_request <- purrr::attr_getter("request")
get_continue <- purrr::attr_getter("continue")
get_batchcomplete <- purrr::attr_getter("batchcomplete")
set_continue <- function(query_tbl, x) {
  x <- if (is.null(x)) NA else x
  attr(query_tbl, "continue") <- x
  query_tbl
}
set_batchcomplete <- function(query_tbl, x) {
  x <- if (is.null(x)) NA else x
  attr(query_tbl, "batchcomplete") <- x
  query_tbl
}
# A placeholder. The returned item should raise an error nearly everywhere.
empty_query_tbl <- function() {
new_query_tbl(tibble::tibble(), request = NA, continue = NA, batchcomplete = NA)
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/query-tbl.R
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/query-types.R
|
|
#' Build a REST request to one of Wikipedia's specific REST APIs
#'
#' @description `core_rest_request()` builds a request for the [MediaWiki
#' Core REST API](https://www.mediawiki.org/wiki/API:REST_API), the basic REST
#' API available on all MediaWiki wikis.
#'
#' `wikimedia_rest_request()` builds a request for the [Wikimedia REST
#' API](https://www.mediawiki.org/wiki/Wikimedia_REST_API), an additional
#'   API just for Wikipedia and other wikis managed by the Wikimedia
#'   Foundation.
#'
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Components to add to the URL.
#' Unnamed arguments are added to the path of the request, while named
#' arguments are added as query parameters.
#' @param language The two-letter language code for the Wikipedia edition
#'
#' @return A `core/rest` or `wikimedia/rest` object, an S3 vector that subclasses
#' `httr2_request` (see [httr2::request]). The request needs to be passed to
#' [httr2::req_perform] to retrieve data from the API.
#'
#' @name wikipedia_rest_apis
#'
#' @examples
#' # Get the html of the 'Earth' article on English Wikipedia
#' response <- core_rest_request("page", "Earth", "html") %>%
#' httr2::req_perform()
#'
#' response <- wikimedia_rest_request("page", "html", "Earth") %>%
#' httr2::req_perform()
#'
#' # Some REST requests take query parameters. Pass these as named arguments.
#' # To search German Wikipedia for articles about Goethe
#' response <- core_rest_request("search/page", q = "Goethe", limit = 2, language = "de") %>%
#' httr2::req_perform() %>%
#' httr2::resp_body_json()
NULL
#' @rdname wikipedia_rest_apis
#' @export
core_rest_request <- function(..., language = "en") {
request <- wp_rest_request(..., api = "w/rest.php/v1", language = language)
class(request) <- c("core", class(request))
request
}
#' @rdname wikipedia_rest_apis
#' @export
wikimedia_rest_request <- function(..., language = "en") {
request <- wp_rest_request(..., api = "api/rest_v1", language = language) %>%
httr2::req_throttle(199 / 1, realm = "wikimedia_rest")
class(request) <- c("wikimedia", class(request))
request
}
#' Build a REST request to one of the Wikimedia Foundation's central APIs
#'
#' @description `wikimedia_org_rest_request()` builds a request for the
#' [wikimedia.org REST API](https://wikimedia.org/api/rest_v1/), which
#' provides statistical data about Wikimedia Foundation projects
#'
#' `xtools_rest_request()` builds a request to the [XTools
#' API](https://www.mediawiki.org/wiki/XTools/API), which provides additional
#' statistical data about Wikimedia foundation projects
#'
#' @param endpoint The endpoint for the specific kind of request; for wikimedia
#' apis, this comprises the path components in between the general API
#' endpoint and the component specifying the project to query
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Components to add to the URL.
#' Unnamed arguments are added to the path of the request, while named
#' arguments are added as query parameters.
#' @param language Two-letter language code for the desired Wikipedia edition.
#'
#' @return A `wikimedia_org/rest` or `xtools/rest` object, an S3 vector that
#' subclasses [httr2::request].
#'
#' @examples
#' # Build request for articleinfo about Kate Bush's page on English Wikipedia
#' request <- xtools_rest_request("page/articleinfo", "Kate_Bush")
#'
#' # Build request for most-viewed pages on German Wikipedia in July 2020
#' request <- wikimedia_org_rest_request(
#' "metrics/pageviews/top",
#' "all-access", "2020", "07", "all-days",
#' language = "de"
#' )
#' @name wikimedia_rest_apis
NULL
#' @rdname wikimedia_rest_apis
#' @export
wikimedia_org_rest_request <- function(endpoint, ..., language = "en") {
base_url <- "https://wikimedia.org/api/rest_v1/"
project <- glue::glue("{language}.wikipedia.org")
request <- wm_rest_request(..., base_url = base_url, project = project, endpoint = endpoint, language = language) %>%
    httr2::req_throttle(200, realm = "wikimedia_org")
  class(request) <- c("wikimedia_org", class(request))
request
}
#' @rdname wikimedia_rest_apis
#' @export
xtools_rest_request <- function(endpoint, ..., language = "en") {
base_url <- "https://xtools.wmflabs.org/api/"
project <- glue::glue("{language}.wikipedia")
request <- wm_rest_request(..., base_url = base_url, project = project, endpoint = endpoint, language = language)
class(request) <- c("xtools_api", class(request))
request
}
wp_rest_request <- function(..., api = character(), language = "en") {
dots <- rest_dots(...)
url <- glue::glue("https://{language}.wikipedia.org/")
rlang::inject(
request <- httr2::request(url) %>%
wikkitidy_user_agent() %>%
httr2::req_url_path_append(!!!api) %>%
httr2::req_url_path_append(!!!dots$path) %>%
httr2::req_url_query(!!!dots$query)
)
class(request) <- c("rest", class(request))
request
}
wm_rest_request <- function(..., base_url, project, endpoint, language) {
dots <- rest_dots(...)
rlang::inject(
request <- httr2::request(base_url) %>%
wikkitidy_user_agent() %>%
httr2::req_url_path_append(!!!endpoint) %>%
httr2::req_url_path_append(project) %>%
httr2::req_url_path_append(!!!dots$path) %>%
httr2::req_url_query(!!!dots$query)
)
class(request) <- c("rest", class(request))
request
}
rest_dots <- function(...) {
dots <- rlang::dots_list(..., .named = FALSE)
if (length(dots) == 0) {
rlang::abort(
"no path components provided for REST request"
)
}
path_components <- dots[!rlang::have_name(dots)]
query_params <- dots[rlang::have_name(dots)]
list("path" = path_components, "query" = query_params)
}
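# Illustrative sketch (not part of the original source): rest_dots() splits the
# dynamic dots of a REST request into path components (unnamed arguments) and
# query parameters (named arguments). `.sketch_rest_dots()` is a hypothetical
# helper added only to illustrate this; it builds, without performing, the same
# search request as the roxygen example for core_rest_request() above.
.sketch_rest_dots <- function() {
  parts <- rest_dots("search/page", q = "Goethe", limit = 2)
  # parts$path is list("search/page"); parts$query is list(q = "Goethe", limit = 2)
  req <- core_rest_request("search/page", q = "Goethe", limit = 2, language = "de")
  # req$url is roughly
  # "https://de.wikipedia.org/w/rest.php/v1/search/page?q=Goethe&limit=2"
  list(parts = parts, url = req$url)
}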
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/rest-request.R
|
#' Pipe operator
#'
#' See \code{magrittr::\link[magrittr:pipe]{\%>\%}} for details.
#'
#' @name %>%
#' @rdname pipe
#' @keywords internal
#' @export
#' @importFrom magrittr %>%
#' @usage lhs \%>\% rhs
#' @param lhs A value or the magrittr placeholder.
#' @param rhs A function call using the magrittr semantics.
#' @return The result of calling `rhs(lhs)`.
NULL
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/utils-pipe.R
|
#' Tidy eval helpers
#'
#' @description This page lists the tidy eval tools reexported in this package
#' from rlang. To learn about using tidy eval in scripts and packages at a high
#' level, see the [dplyr programming
#' vignette](https://dplyr.tidyverse.org/articles/programming.html) and the
#' [ggplot2 in packages
#' vignette](https://ggplot2.tidyverse.org/articles/ggplot2-in-packages.html).
#' The [Metaprogramming section](https://adv-r.hadley.nz/metaprogramming.html)
#' of [Advanced R](https://adv-r.hadley.nz) may also be useful for a deeper
#' dive.
#'
#' * The tidy eval operators `{{`, `!!`, and `!!!` are syntactic
#' constructs which are specially interpreted by tidy eval functions.
#' You will mostly need `{{`, as `!!` and `!!!` are more advanced
#' operators which you should not have to use in simple cases.
#'
#' The curly-curly operator `{{` allows you to tunnel data-variables
#' passed from function arguments inside other tidy eval functions.
#' `{{` is designed for individual arguments. To pass multiple
#' arguments contained in dots, use `...` in the normal way.
#'
#' ```
#' my_function <- function(data, var, ...) {
#' data %>%
#' group_by(...) %>%
#' summarise(mean = mean({{ var }}))
#' }
#' ```
#'
#' * [enquo()] and [enquos()] delay the execution of one or several
#' function arguments. The former returns a single expression, the
#' latter returns a list of expressions. Once defused, expressions
#' will no longer evaluate on their own. They must be injected back
#' into an evaluation context with `!!` (for a single expression) and
#' `!!!` (for a list of expressions).
#'
#' ```
#' my_function <- function(data, var, ...) {
#' # Defuse
#' var <- enquo(var)
#' dots <- enquos(...)
#'
#' # Inject
#' data %>%
#' group_by(!!!dots) %>%
#' summarise(mean = mean(!!var))
#' }
#' ```
#'
#' In this simple case, the code is equivalent to the usage of `{{`
#' and `...` above. Defusing with `enquo()` or `enquos()` is only
#' needed in more complex cases, for instance if you need to inspect
#' or modify the expressions in some way.
#'
#' * The `.data` pronoun is an object that represents the current
#' slice of data. If you have a variable name in a string, use the
#' `.data` pronoun to subset that variable with `[[`.
#'
#' ```
#' my_var <- "disp"
#' mtcars %>% summarise(mean = mean(.data[[my_var]]))
#' ```
#'
#' * Another tidy eval operator is `:=`. It makes it possible to use
#' glue and curly-curly syntax on the LHS of `=`. For technical
#' reasons, the R language doesn't support complex expressions on
#' the left of `=`, so we use `:=` as a workaround.
#'
#' ```
#' my_function <- function(data, var, suffix = "foo") {
#' # Use `{{` to tunnel function arguments and the usual glue
#' # operator `{` to interpolate plain strings.
#' data %>%
#' summarise("{{ var }}_mean_{suffix}" := mean({{ var }}))
#' }
#' ```
#'
#' * Many tidy eval functions like `dplyr::mutate()` or
#' `dplyr::summarise()` give an automatic name to unnamed inputs. If
#' you need to create the same sort of automatic names by yourself,
#' use `as_label()`. For instance, the glue-tunnelling syntax above
#' can be reproduced manually with:
#'
#' ```
#' my_function <- function(data, var, suffix = "foo") {
#' var <- enquo(var)
#' prefix <- as_label(var)
#' data %>%
#' summarise("{prefix}_mean_{suffix}" := mean(!!var))
#' }
#' ```
#'
#' Expressions defused with `enquo()` (or tunnelled with `{{`) need
#' not be simple column names, they can be arbitrarily complex.
#' `as_label()` handles those cases gracefully. If your code assumes
#' a simple column name, use `as_name()` instead. This is safer
#' because it throws an error if the input is not a name as expected.
#'
#' @md
#' @name tidyeval
#' @keywords internal
#' @importFrom rlang enquo enquos .data := as_name as_label
#' @aliases enquo enquos .data := as_name as_label
#' @return Consult the original rlang documentation for the return types of these
#' re-exported functions.
#' @export enquo enquos .data := as_name as_label
NULL
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/utils-tidy-eval.R
|
wikkitidy_user_agent <- function(.req) {
httr2::req_user_agent(.req, "wikkitidy R package (https://github.com/wikihistories/wikkitidy)")
}
str_for_rest <- function(titles) {
stringr::str_replace_all(titles, " ", "_")
}
#' Determine if a page parameter comprises titles or pageids, and prefix
#' accordingly.
#'
#' @param page Either a character or numeric vector. If a character vector, it
#' is interpreted as a vector of page titles. If a numeric vector, of pageids.
#' @param prefix Optional: A prefix to affix to the page titles if it is missing
#'
#' @return A list
#' @keywords internal
id_or_title <- function(page, prefix = NULL) {
UseMethod("id_or_title")
}
#' @rdname id_or_title
#' @export
id_or_title.character <- function(page, prefix = NULL) {
numeric_page <- suppressWarnings(as.numeric(page))
if (anyNA(numeric_page)) {
prefixed <- add_missing_prefix(page, prefix = prefix)
list("title" = prefixed)
} else {
list("pageid" = numeric_page)
}
}
#' @rdname id_or_title
#' @export
id_or_title.numeric <- function(page, prefix = NULL) {
page <- as.integer(page)
list("pageid" = page)
}
add_missing_prefix <- function(string, prefix = NULL) {
if (is.null(prefix)) {
string
} else {
pattern <- paste0(prefix, ":")
unprefixed <- !stringr::str_detect(string, pattern)
string[unprefixed] <- stringr::str_c(prefix, string[unprefixed], sep = ":")
string
}
}
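# Illustrative sketch (not part of the original source): id_or_title()
# dispatches on the type of `page`. Character input is treated as page titles
# (optionally prefixed with a namespace), numeric input as pageids. The titles
# and ids below are arbitrary; `.sketch_id_or_title()` is a hypothetical helper
# added only for illustration.
.sketch_id_or_title <- function() {
  list(
    titles = id_or_title(c("Earth", "Talk:Moon"), prefix = "Talk"),
    # -> list(title = c("Talk:Earth", "Talk:Moon")); an existing prefix is kept
    ids = id_or_title(c(123, 456))
    # -> list(pageid = c(123L, 456L))
  )
}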
#' Ensure that the limit is correct for the endpoint. Raise an error if not.
#'
#' @param limit The limit to be added to the query
#' @param max The maximum allowed for the given endpoint
#'
#' @return `limit`, assuming no errors
#' @keywords internal
check_limit <- function(limit, max) {
UseMethod("check_limit")
}
#' @export
check_limit.character <- function(limit, max) {
if (!rlang::is_scalar_character(limit)) {
rlang::abort("`limit` must be a scalar (length == 1)", class="non_scalar_arg")
}
if (limit == "max") {
limit
} else {
rlang::abort("`limit` must be 'max' or an integer", class="wrong_arg_type")
}
}
#' @export
check_limit.numeric <- function(limit, max) {
if (!rlang::is_scalar_integerish(limit)) {
rlang::abort("`limit` must be 'max' or a scalar integer", class="non_scalar_arg")
}
  if (limit > max) {
    rlang::abort(
      glue::glue("`limit` ({limit}) is greater than the allowable maximum for this endpoint ({max})"),
      class = "exceed_max")
}
limit
}
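# Illustrative sketch (not part of the original source): check_limit()
# dispatches on the type of the limit and returns it unchanged when valid.
# `.sketch_check_limit()` is a hypothetical helper added only for illustration.
.sketch_check_limit <- function() {
  list(
    check_limit("max", 1000),  # "max" is always accepted
    check_limit(500, 1000)     # 500 is within the endpoint's maximum
  )
  # check_limit(5000, 1000) would abort with class "exceed_max"
}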
#' Convert passed objects into ISO8601 strings for API requests
#'
#' @param ... Dynamic dots: the objects to be coerced
#'
#' @return A named list of ISO strings, the same length as `...`
#' @keywords internal
process_timestamps <- function(...) {
  dots <- rlang::dots_list(..., .named = TRUE)
purrr::map(dots, \(x) lubridate::as_date(x) %>% lubridate::format_ISO8601())
}
#' Ensure namespace arguments are valid
#' @param namespace An integer vector of namespace ids, or NULL
#' @return A single string of namespace ids joined with `|`, or NULL
#' @keywords internal
check_namespace <- function(namespace) {
if (is.null(namespace)) {
return(NULL)
}
if (!rlang::is_integerish(namespace)) {
rlang::abort("`namespace` must be an integer vector", class="wrong_arg_type")
}
paste0(namespace, collapse = "|")
}
is_not_null <- function(x) {
!rlang::is_null(x)
}
one_if_true <- function(arg) {
arg_sym <- rlang::ensym(arg)
if (!rlang::is_scalar_logical(arg)) {
rlang::abort(glue::glue("Argument `{arg_sym}` must be either TRUE or FALSE"))
}
if (arg == TRUE) 1 else NULL
}
simplify_if_atomicish <- function(list_col) {
nulls <- purrr::map_lgl(list_col, is.null)
otherwise_atomic <- all(purrr::map_lgl(list_col[!nulls], rlang::is_atomic))
if (otherwise_atomic) {
list_col[nulls] <- NA
unlist(list_col)
} else {
list_col
}
}
robust_bind <- function(response) {
if (rlang::is_empty(response)) {
tibble::tibble()
} else if (rlang::is_scalar_list(response) && !rlang::is_list(response[[1]])) {
tibble::tibble(!!!response)
} else {
template_idx <- purrr::map_int(response, length) %>% which.max()
template <- names(response[[template_idx]])
response <- purrr::list_transpose(response, template = template, default = NA)
response <- tibble::tibble(!!!response)
response
}
}
flatten_bind <- function(response) {
response <- purrr::map(response, purrr::list_flatten)
robust_bind(response)
}
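# Illustrative sketch (not part of the original source): robust_bind() turns a
# list of API "rows" with unequal fields into one tibble, filling the missing
# fields with NA; flatten_bind() first flattens one level of nesting. The rows
# below are invented; `.sketch_robust_bind()` is a hypothetical helper added
# only for illustration.
.sketch_robust_bind <- function() {
  rows <- list(
    list(pageid = 1, title = "Example A", watchers = 300),
    list(pageid = 2, title = "Example B")  # no 'watchers' field in this row
  )
  robust_bind(rows)
  # -> a 2-row tibble with columns pageid, title and watchers (NA for row 2)
}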
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/utils.R
|
#' Query Wikipedia using the [MediaWiki Action
#' API](https://www.mediawiki.org/wiki/API:Main_page)
#'
#' Wikipedia exposes a wealth of data through the Action API. To build up a query, you first call
#' [wiki_action_request()] to create the basic request object, then use the
#' helper functions [query_page_properties()], [query_list_pages()] and
#' [query_generate_pages()] to modify the request, before calling [next_batch()]
#' or [retrieve_all()] to perform the query and download results from the
#' server.
#'
#' [wikkitidy] provides an ergonomic API for the Action API's [Query
#' modules](https://www.mediawiki.org/wiki/API:Query). These modules are most
#' useful for researchers, because they allow you to explore the structure of
#' Wikipedia and its back pages. You can obtain a list of available modules in
#' your R console using [list_all_property_modules()], [list_all_list_modules()]
#' and [list_all_generators()].
#'
#' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Parameters for the request
#' @param action The action to perform, typically 'query'
#' @param language The language edition of Wikipedia to request, e.g. 'en' or
#' 'fr'
#'
#' @return An `action_api` object, an S3 list that subclasses [httr2::request].
#' The dependencies between different aspects of the Action API are complex.
#' At the time of writing, there are five major subclasses of
#' `action_api/httr2_request`:
#'
#' * `generator/action_api/httr2_request`, returned (sometimes) by [query_generate_pages]
#' * `list/action_api/httr2_request`, returned by [query_list_pages]
#' * `titles`, `pageids` and `revids/action_api/httr2_request`, returned by the various [query_by_] functions
#'
#' You can use [query_page_properties] to modify any kind of query *except*
#' for `list` queries: indeed, the central limitation of the `list` queries is
#' that you cannot choose what properties to return for the pages that meet the
#' given criterion. The concept of a `generator` is complex. If the
#' `generator` is based on a
#' [property](https://www.mediawiki.org/wiki/API:Properties) module, then it
#' must be combined with a [query_by_] function to produce a valid query. If
#' the generator is based on a [list
#' module](https://www.mediawiki.org/wiki/API:Lists), then it *cannot* be
#' combined with a [query_by_] query.
#' @export
#'
#' @examples
#' # List the first 10 pages in the category 'Australian historians'
#' historians <- wiki_action_request() %>%
#' query_list_pages(
#' "categorymembers",
#' cmtitle = "Category:Australian_historians",
#' cmlimit = 10
#' ) %>%
#' next_batch()
#' historians
wiki_action_request <- function(..., action = "query", language = "en") {
base_url <- glue::glue("https://{language}.wikipedia.org/w/api.php")
params <- rlang::list2(
action = action,
format = "json",
formatversion = "2",
...
)
req <- httr2::request(base_url) %>%
httr2::req_url_query(!!!params) %>%
wikkitidy_user_agent()
structure(
req,
class = c(action, "action_api", class(req))
)
}
# Helpers for enforcing type restrictions on Action API request objects
BASE_QUERY_CLASS <- c("query", "action_api", "httr2_request")
is_action_query <- function(.req) {
rlang::inherits_all(.req, BASE_QUERY_CLASS)
}
check_is_action_query <- function(.req) {
if (!is_action_query(.req)) {
rlang::abort(
"this is not a request to a query module of the MediaWiki Action API",
class = "wrong_request_type"
)
}
}
is_base_query <- function(.req) {
rlang::inherits_only(.req, BASE_QUERY_CLASS)
}
is_query_subtype <- function(.req, subtype) {
rlang::inherits_all(.req, c(subtype, BASE_QUERY_CLASS))
}
incompatible_query_error <- function(new_type, old_type) {
rlang::abort(
glue::glue("you cannot combine a `{new_type}` query with an `{old_type}` query in the Action API"),
class = "incompatible_query_error")
}
# Helpers for validating existence of query modules
is_module <- function(module, group) {
  target_group <- group
  schema_query_modules %>%
    dplyr::filter(group == target_group) %>%
    dplyr::summarise(is_module = module %in% name) %>%
    .$is_module
}
check_module <- function(module, group) {
if (!is_module(module, group)) {
rlang::abort(
glue::glue("`{module}` is not a known `{group}` query to the Action API"),
class = "unknown_module"
)
}
}
# Helpers for modifying Action API query objects
set_action <- function(.req, action_type, action, ...) {
action_sym <- rlang::ensym(action_type)
action_string <- combine_query_params(.req, action_type, action)
action_params <- rlang::list2(...)
httr2::req_url_query(.req = .req, !!action_sym := action_string, !!!action_params)
}
combine_query_params <- function(.req, param_type, param) {
url <- httr2::url_parse(.req$url)
existing <- purrr::pluck(url, "query", param_type)
if (!is.null(existing)) {
paste0(c(existing, param), collapse = "|")
} else {
paste0(param, collapse = "|")
}
}
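# Illustrative sketch (not part of the original source): set_action() writes a
# query parameter onto the request, and combine_query_params() merges further
# values into that parameter with the API's pipe separator, so repeated calls
# accumulate rather than overwrite. `.sketch_combine_params()` is a
# hypothetical helper added only for illustration; no request is performed.
.sketch_combine_params <- function() {
  req <- wiki_action_request() %>% set_action("prop", "info")
  combine_query_params(req, "prop", "categories")
  # -> "info|categories"
}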
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/wiki-action-request.R
|
#' Check that a Wikimedia XML file has not been corrupted
#'
#' The Wikimedia Foundation publishes SHA-1 checksums for all its database dumps.
#' This function looks up the published SHA-1 checksums based on the file name,
#' then compares them to the locally calculated hash using the `openssl` package.
#'
#' @param path The path to the file
#'
#' @return `TRUE` (invisibly) if successful, otherwise an error is raised
#'
verify_xml_integrity <- function(path) {
checksum <- .get_checksum(path)
conn <- file(path, open="rb")
local_hash <- openssl::sha1(conn) %>% as.character()
if (local_hash == checksum) {
invisible(TRUE)
} else {
rlang::abort(
glue::glue("Invalid checksum for {basename(path)}. File may be corrupted.")
)
}
}
.get_checksum <- function(path) {
chks_tbl <- .get_checksums(path) %>% .checksum_tbl()
dplyr::filter(chks_tbl, file == basename(path))$checksum[[1]]
}
.checksum_tbl <- function(raw_checksums) {
rows <- strsplit(raw_checksums, split="\n") %>% purrr::map(strsplit, split=" ")
tbl <- do.call(rbind, rows[[1]]) %>% data.frame()
names(tbl) <- c("checksum", "file")
tbl
}
.get_checksums <- function(path) {
filename <- basename(path)
nm_parts <- stringr::str_split_1(filename, "-")
  sha1_file <- glue::glue("{nm_parts[1]}-{nm_parts[2]}-sha1sums.txt")
  checksums <- httr2::request("https://dumps.wikimedia.org/") %>%
    wikkitidy_user_agent() %>%
    httr2::req_url_path_append(nm_parts[1], nm_parts[2], sha1_file) %>%
httr2::req_perform() %>%
httr2::resp_body_string()
checksums
}
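# Illustrative sketch (not part of the original source): the checksum file name
# is derived from the first two dash-separated parts of the dump file name, so
# the bundled example index file is checked against "akwiki-20230301-sha1sums.txt"
# fetched from dumps.wikimedia.org. `.sketch_checksum_name()` is a hypothetical
# helper added only for illustration; it does not touch the network.
.sketch_checksum_name <- function() {
  path <- wikkitidy_example("akan_wiki")
  nm_parts <- stringr::str_split_1(basename(path), "-")
  glue::glue("{nm_parts[1]}-{nm_parts[2]}-sha1sums.txt")
  # -> "akwiki-20230301-sha1sums.txt"
}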
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/wiki-xml.R
|
#' Get path to wikkitidy example
#'
#' wikkitidy comes bundled with a number of sample files in its `inst/extdata`
#' directory. This function make them easy to access
#'
#' @param file Name of file. If `NULL`, the example files will be listed.
#' @export
#' @return A character vector, containing either the path of the chosen file, or
#' the nicknames of all available example files.
#' @examples
#' wikkitidy_example()
#' wikkitidy_example("akan_wiki")
wikkitidy_example <- function(file = NULL) {
if (is.null(file)) {
names(.fn_map)
} else {
file <- rlang::arg_match(file, names(.fn_map))
file <- .fn_map[[file]]
system.file("extdata", file, package = "wikkitidy", mustWork = TRUE)
}
}
.fn_map <- list(
akan_wiki = "akwiki-20230301-pages-articles-multistream-index.txt.bz2"
)
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/wikkitidy-example.R
|
#' @keywords internal
"_PACKAGE"
## usethis namespace: start
#' @importFrom pillar tbl_format_footer
#' @importFrom pillar tbl_sum
#' @importFrom vctrs vec_ptype_abbr
#' @importFrom vctrs vec_ptype_full
## usethis namespace: end
NULL
# Suppress 'global variable' warnings due to using dplyr
# See https://dplyr.tidyverse.org/articles/in-packages.html
utils::globalVariables("type")
utils::globalVariables("generator")
utils::globalVariables("group")
utils::globalVariables("name")
utils::globalVariables(".")
utils::globalVariables("is_generator")
utils::globalVariables("pageid")
utils::globalVariables("timestamp")
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/wikkitidy-package.R
|
#' Access page-level statistics from the [XTools Page API endpoint](https://www.mediawiki.org/wiki/XTools/API/Page)
#'
#' @description `get_xtools_page_info()` returns [basic
#' statistics](https://www.mediawiki.org/wiki/XTools/API/Page#Article_info)
#' about articles' history and quality, including their total edits, creation
#' date, and assessment value (good, featured etc.)
#'
#' `get_xtools_page_prose()` returns [statistics about the word counts and
#' referencing](https://www.mediawiki.org/wiki/XTools/API/Page#Prose) of
#' articles
#'
#' `get_xtools_page_links()` returns [the number of ingoing and outgoing links
#' to articles, including
#' redirects](https://www.mediawiki.org/wiki/XTools/API/Page#Links)
#'
#' `get_xtools_page_top_editors()` returns the [list of top editors for
#' articles](https://www.mediawiki.org/wiki/XTools/API/Page#Top_editors), with
#' optional filters by date range and non-bot status
#'
#' `get_xtools_page_assessment()` returns more detailed [statistics about
#' articles' assessment status and Wikiproject importance
#' levels](https://www.mediawiki.org/wiki/XTools/API/Page#Assessments)
#'
#' @param title Character vector of page titles
#' @param language Language code for the version of Wikipedia to query
#' @param start A character vector or date object (optional): the start date for
#' calculating top editors
#' @param end A character vector or date object (optional): the end date for
#' calculating top editors
#' @param limit An integer: the maximum number of top editors to return
#' @param nobots TRUE or FALSE: if TRUE, bots are excluded from the top editor
#' calculation
#' @param classonly TRUE or FALSE: if TRUE, only return the article's assessment
#' status, without Wikiproject information
#' @param failure_mode What to do if no data is found. See [get_rest_resource()]
#'
#' @name xtools_page
#'
#' @return A list or tbl of results, the same length as `title`. **NB:** The
#' results for `get_xtools_page_assessment` are still not parsed properly.
#'
#' @examples
#' # Get basic statistics about Erich Auerbach on German Wikipedia
#' auerbach <- get_xtools_page_info("Erich Auerbach", language = "de")
#' auerbach
NULL
#' @rdname xtools_page
#' @export
get_xtools_page_info <- function(title, language = "en", failure_mode = c("error", "quiet")) {
get_rest_resource(
endpoint = "page/articleinfo",
title,
api = "xtools", language = language, response_type = "row_list", failure_mode = failure_mode
)
}
#' @rdname xtools_page
#' @export
get_xtools_page_prose <- function(title, language = "en", failure_mode = c("error", "quiet")) {
get_rest_resource(
endpoint = "page/prose",
title,
api = "xtools", language = language, response_type = "row_list", failure_mode = failure_mode
)
}
#' @rdname xtools_page
#' @export
get_xtools_page_links <- function(title, language = "en", failure_mode = c("error", "quiet")) {
get_rest_resource(
endpoint = "page/links",
title,
api = "xtools", language = language, response_type = "row_list", failure_mode = failure_mode
)
}
#' @rdname xtools_page
#' @export
get_xtools_page_top_editors <- function(title, start = NULL, end = NULL, limit = 1000, nobots = FALSE, language = "en", failure_mode = c("error", "quiet")) {
start <- datetime_for_url(start, .default = "/")
end <- datetime_for_url(end, .default = "/")
check_limit(limit, 1000)
nobots <- one_if_true(nobots)
get_rest_resource(
endpoint = "page/top_editors",
title, start, end, limit, nobots = nobots,
language = language, api = "xtools",
response_type = "row_list",
failure_mode = failure_mode
)
}
#' @rdname xtools_page
#' @export
get_xtools_page_assessment <- function(title, classonly = FALSE, language = "en", failure_mode = c("error", "quiet")) {
classonly <- one_if_true(classonly)
get_rest_resource(
endpoint = "page/assessments",
title, classonly = classonly,
language = language, api = "xtools",
response_type = "assessment_table",
failure_mode = failure_mode
)
}
#' @export
parse_response.assessment_table <- function(response) {
# TODO: This one's hard!
response
}
|
/scratch/gouwar.j/cran-all/cranData/wikkitidy/R/xtools-page-api.R
|
#' Wilcoxon Sign Rank Test Statistic Exact Distribution
#'
#' @name W_stat
#' @description This function allows the user to find the probability
#' values from the exact distribution of W, Bickel and Doksum(1973).
#' The exact P(W=x), P(W<=x), P(W>=x) values is found via an exhaustive enumeration
#' of the possible permutations of data with size n.
#'
#' @usage W_stat(n , test_stat, side = c('geq','leq','eq'))
#' @param n Size of data or Number of observations
#' @param test_stat The x value specified in P(W=x), P(W<=x), P(W>=x)
#' @param side The tail of the exact probability the user wants to compute, e.g.
#' 'eq' = P(W=x), 'leq' = P(W<=x), 'geq' = P(W>=x)
#' @return The exact probability values as specified.
#' @examples
#' W_stat(n=5, test_stat = 3, side = 'leq')
#' @export
#'
#'
Sys.setenv('_R_CHECK_SYSTEM_CLOCK_' = 0)
W_stat = function(n, test_stat , side = c('geq','leq','eq')){
mat = expand.grid(rep(list(c(-1,1)),(n)))
names(mat) = 1:n ; vec = 1:n
mat = sweep(mat, MARGIN=2, vec, `*`)
positive = 1*(mat>0)
mat= cbind(mat, "positive sum" =apply(mat*positive, 1, sum))
if(side == 'geq'){
num_combi = sum(mat$`positive sum` >= test_stat)
}
else if(side == 'leq'){
num_combi = sum(mat$`positive sum` <= test_stat)
}
else if(side == 'eq'){num_combi = sum(mat$`positive sum` == test_stat)}
total = nrow(mat)
prob = num_combi/total
return(prob)
}
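# Illustrative sketch (not part of the original source): for n = 3 the 2^3 = 8
# equally likely sign assignments give positive-rank sums 0, 1, 2, 3, 3, 4, 5, 6,
# so P(W >= 5) = 2/8 = 0.25 and P(W <= 3) = 5/8 = 0.625. The hypothetical helper
# below, added only for illustration, reproduces these values by enumeration.
.sketch_W_stat <- function() {
  c(
    upper = W_stat(n = 3, test_stat = 5, side = 'geq'),  # 0.25
    lower = W_stat(n = 3, test_stat = 3, side = 'leq')   # 0.625
  )
}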
|
/scratch/gouwar.j/cran-all/cranData/wilcoxmed/R/W_stat.R
|
#' 1-Sample Wilcoxon Sign Rank Hypothesis Test for Medians
#'
#' @name Wilcox.m.test
#' @description This function allows the user to conduct the 1-Sample Wilcoxon Sign Rank Hypothesis
#' Test for Medians using the probability values from the exact
#' distribution of W.
#'
#' @usage Wilcox.m.test(dat, m_h0, alpha = 0.05,
#' alternative=c('greater', 'lesser', 'noteq'), normal_approx=FALSE)
#' @param dat data vector relating to the sample the user is
#' performing the hypothesis test for
#' @param m_h0 The value of the median as specified by the null hypothesis H_0
#' @param alpha The significance level of the hypothesis test (default = 0.05)
#' @param alternative The sign of the alternative hypothesis.
#' e.g 'greater' - H_1:m>m_h0 , 'lesser' - H_1:m<m_h0, 'noteq' - H_1:m!=m_h0
#' @param normal_approx Should the normal approximation test be applied? (default = FALSE)
#' @return Prints out the results of the tests, and returns 3 values- test statistic,
#' p-value, and the significance level of the test, alpha
#' @references Peter J. Bickel and Kjell A. Doksum (1973). \emph{Mathematical Statistics:
#' Basic Ideas and Selected Topics}. Prentice Hall.
#' @examples
#' ##Given some data: 3, 4, 7, 10, 4, 12, 1, 9, 2, 15
#' ##If we want to test the hypotheses H_0: m=5 against H_1: m>5
#' ##without using normal approximation:
#' vec = c(3, 4, 7, 10, 4, 12, 1, 9, 2, 15)
#' res = Wilcox.m.test(dat = vec, m_h0 = 5,
#' alternative = 'greater', normal_approx = FALSE)
#'
#' ##If we want to apply the normal approximation(Z-test), with the same hypotheses:
#' res = Wilcox.m.test(dat = vec, m_h0 = 5,
#' alternative = 'greater', normal_approx = TRUE)
#' @details This hypothesis test allows breaking of ties, and the number of
#' ties broken is also reflected in the printed results.
#' @seealso \code{\link{wilcox.test}} for the same tests applied to 2-sample problems,
#' although it is not able to break ties
#' @export
#'
Sys.setenv('_R_CHECK_SYSTEM_CLOCK_' = 0)
Wilcox.m.test <- function(dat, m_h0, alpha = 0.05,
alternative=c('greater', 'lesser', 'noteq'),
normal_approx=FALSE){
console_length = getOption('width')
n <- length(dat)
Ri <- sign(dat-m_h0)*rank(abs(dat-m_h0))
ties <-length(abs(dat-m_h0)) - length(unique(rank(abs(dat-m_h0))))
W_obs <- sum(Ri[which(Ri>0)])
if(!normal_approx){
if(alternative == 'greater'){
p_value <- W_stat(n, W_obs, 'geq')
cat('\n', rep('', floor((console_length-42)/2)),
' 1-Sample Median Wilcoxon Sign-Rank Test', '\n','\n',
rep('', floor((console_length-10)/2)), 'Ties Broken =', ties, '\n','\n',
rep('', floor((console_length-60)/2)),
'Null hypothesis H_0: m =', m_h0, rep('', 5),
'Alternative Hypothesis H_1: m >', m_h0,'\n','\n',
rep('', floor((console_length-60)/2)),
'Test Statistic W =', W_obs,rep('', 5),
'p-value = ', p_value , rep('', 5), 'alpha =', alpha,'\n','\n')
if(p_value<=alpha){
cat(rep('', floor((console_length-30)/2)),
'Test Result: Reject H_0' , '\n','\n')
}
else{cat(rep('', floor((console_length-30)/2)),
'Test Result: Do not Reject H_0' , '\n','\n')}
}
else if(alternative == 'lesser'){
p_value <- W_stat(n, W_obs, 'leq')
cat('\n', rep('', floor((console_length-42)/2)),
' 1-Sample Median Wilcoxon Sign-Rank Test', '\n','\n',
rep('', floor((console_length-10)/2)), 'Ties Broken =', ties, '\n','\n',
rep('', floor((console_length-60)/2)),
'Null hypothesis H_0: m =', m_h0, rep('', 5),
'Alternative Hypothesis H_1: m <', m_h0,'\n','\n',
rep('', floor((console_length-60)/2)),
'Test Statistic W =', W_obs,rep('', 5),
'p-value = ', p_value , rep('', 5), 'alpha =', alpha,'\n','\n')
if(p_value<=alpha){
cat(rep('', floor((console_length-30)/2)),
'Test Result: Reject H_0' , '\n','\n')
}
else{cat(rep('', floor((console_length-30)/2)),
'Test Result: Do not Reject H_0' , '\n','\n')}
}
else if(alternative == 'noteq'){
p_value <- min(2*W_stat(n, W_obs, 'geq'),1)
cat('\n', rep('', floor((console_length-42)/2)),
' 1-Sample Median Wilcoxon Sign-Rank Test', '\n','\n',
rep('', floor((console_length-10)/2)), 'Ties Broken =', ties, '\n','\n',
rep('', floor((console_length-60)/2)),
'Null hypothesis H_0: m =', m_h0, rep('', 5),
'Alternative Hypothesis H_1: m !=', m_h0,'\n','\n',
rep('', floor((console_length-60)/2)),
'Test Statistic W =', W_obs,rep('', 5),
'p-value = ', p_value , rep('', 5), 'alpha =', alpha,'\n','\n')
if(p_value<=alpha){
cat(rep('', floor((console_length-30)/2)),
'Test Result: Reject H_0' , '\n','\n')
}
else{cat(rep('', floor((console_length-30)/2)),
'Test Result: Do not Reject H_0' , '\n','\n')}
}
}
else if(normal_approx){
EW <- n*(n+1)/4
SDW <- sqrt(n*(n+1)*(2*n+1)/24)
Z_stat <- (W_obs - EW)/SDW
if(alternative == 'greater'){
p_value <- stats::pnorm(Z_stat, lower.tail = FALSE)
cat('\n', rep('', floor((console_length-60)/2)),
' 1-Sample Median Wilcoxon Sign-Rank Test - Normal Approximation', '\n','\n',
rep('', floor((console_length-10)/2)), 'Ties Broken =', ties, '\n','\n',
rep('', floor((console_length-60)/2)),
'Null hypothesis H_0: m =', m_h0, rep('', 5),
'Alternative Hypothesis H_1: m >', m_h0,'\n','\n',
rep('', floor((console_length-60)/2)),
'Test Statistic Z =', Z_stat ,rep('', 5),
'p-value = ', p_value , rep('', 5), 'alpha =', alpha,'\n','\n')
if(p_value<=alpha){
cat(rep('', floor((console_length-30)/2)),
'Test Result: Reject H_0' , '\n','\n')
}
else{cat(rep('', floor((console_length-30)/2)),
'Test Result: Do not Reject H_0' , '\n','\n')}
}
else if(alternative == 'lesser'){
p_value <- stats::pnorm(Z_stat, lower.tail = TRUE)
cat('\n', rep('', floor((console_length-60)/2)),
' 1-Sample Median Wilcoxon Sign-Rank Test - Normal Approximation', '\n','\n',
rep('', floor((console_length-10)/2)), 'Ties Broken =', ties, '\n','\n',
rep('', floor((console_length-60)/2)),
'Null hypothesis H_0: m =', m_h0, rep('', 5),
'Alternative Hypothesis H_1: m <', m_h0,'\n','\n',
rep('', floor((console_length-60)/2)),
'Test Statistic Z =', Z_stat ,rep('', 5),
'p-value = ', p_value , rep('', 5), 'alpha =', alpha,'\n','\n')
if(p_value<=alpha){
cat(rep('', floor((console_length-30)/2)),
'Test Result: Reject H_0' , '\n','\n')
}
else{cat(rep('', floor((console_length-30)/2)),
'Test Result: Do not Reject H_0' , '\n','\n')}
}
else if(alternative == 'noteq'){
p_value <- min(2*stats::pnorm(Z_stat, lower.tail = TRUE),1)
cat('\n', rep('', floor((console_length-60)/2)),
' 1-Sample Median Wilcoxon Sign-Rank Test - Normal Approximation', '\n','\n',
rep('', floor((console_length-10)/2)), 'Ties Broken =', ties, '\n','\n',
rep('', floor((console_length-60)/2)),
'Null hypothesis H_0: m =', m_h0, rep('', 5),
'Alternative Hypothesis H_1: m !=', m_h0,'\n','\n',
rep('', floor((console_length-60)/2)),
'Test Statistic Z =', Z_stat ,rep('', 5),
'p-value = ', p_value , rep('', 5), 'alpha =', alpha,'\n','\n')
if(p_value<=alpha){
cat(rep('', floor((console_length-30)/2)),
'Test Result: Reject H_0' , '\n','\n')
}
else{cat(rep('', floor((console_length-30)/2)),
'Test Result: Do not Reject H_0' , '\n','\n')}
}
}
if(!normal_approx){
invisible((list('p-value' = p_value, 'Test Statistic W' = W_obs, 'alpha' = alpha)))
}
else{
invisible((list('p-value' = p_value, 'Test Statistic Z' = Z_stat, 'alpha' = alpha)))
}
}
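# Illustrative sketch (not part of the original source): the test returns its
# results invisibly, so they can be captured and compared. The hypothetical
# helper below, added only for illustration, runs the exact test and its normal
# approximation on the data from the roxygen example and extracts both p-values.
.sketch_compare_p_values <- function() {
  vec <- c(3, 4, 7, 10, 4, 12, 1, 9, 2, 15)
  exact <- Wilcox.m.test(dat = vec, m_h0 = 5, alternative = 'greater')
  approx <- Wilcox.m.test(dat = vec, m_h0 = 5, alternative = 'greater',
                          normal_approx = TRUE)
  c(exact = exact$`p-value`, approx = approx$`p-value`)
}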
|
/scratch/gouwar.j/cran-all/cranData/wilcoxmed/R/Wilcox.m.test.R
|
#' A templating mechanism for data frames
#' @docType package
#' @name wildcard-package
#' @author William Michael Landau \email{[email protected]}
#' @references \url{https://github.com/wlandau/wildcard}
#' @examples
#' myths <- data.frame(
#' myth = c('Bigfoot', 'UFO', 'Loch Ness Monster'),
#' claim = c('various', 'day', 'day'),
#' note = c('various', 'pictures', 'reported day'))
#' wildcard(myths, wildcard = 'day', values = c('today', 'yesterday'))
#' wildcard(myths, wildcard = 'day', values = c('today', 'yesterday'),
#' expand = FALSE)
#' locations <- data.frame(
#' myth = c('Bigfoot', 'UFO', 'Loch Ness Monster'),
#' origin = 'where')
#' rules <- list(
#' where = c('North America', 'various', 'Scotland'),
#' UFO = c('spaceship', 'saucer'))
#' wildcard(locations, rules = rules, expand = c(FALSE, TRUE))
#' numbers <- data.frame(x = 4, y = 3, z = 4444, w = 4.434)
#' wildcard(numbers, wildcard = 4, values = 7)
#' df <- data.frame(
#' ID = c('24601', 'Javert', 'Fantine'),
#' fate = c('fulfillment', 'confusion', 'misfortune'))
#' expandrows(df, n = 2, type = 'each')
#' expandrows(df, n = 2, type = 'times')
#' @importFrom magrittr %>%
#' @importFrom stringi stri_rand_strings
NULL
|
/scratch/gouwar.j/cran-all/cranData/wildcard/R/package.R
|
#' @title Function \code{wildcard}
#' @description Main function of the package. Evaluate a wildcard
#' to fill in or expand a data frame.
#' Copied and modified from \code{remakeGenerator::evaluate()} under GPL-3:
#' \url{https://github.com/wlandau/remakeGenerator}
#' @export
#' @param df data frame
#' @param rules list with names a wildcards and elements as vectors of values
#' to substitute in place of the wildcards.
#' @param wildcard character scalar, a wildcard found in a data frame
#' @param values vector of values to substitute in place of a wildcard
#' @param expand logical, whether to expand the rows of the data frame to
#' substitute each value for each wildcard in turn.
#' If \code{FALSE}, no new rows will be added to \code{df}
#' when the values are substituted in place of wildcards.
#' Can be a vector of length \code{length(rules)}
#' if using the \code{rules} argument.
#' @param include character vector of columns of \code{df}
#' to be included in the wildcard evaluation.
#' The values will replace the wildcards in these columns
#'   but not in any of the other columns.
#' All columns are included by default.
#' You may use \code{include} or \code{exclude} (or neither),
#' but not both.
#' @param exclude character vector of columns of \code{df}
#' to be EXCLUDED from the wildcard evaluation.
#' The values will NOT replace the wildcards in any of these
#' columns, but wildcard evaluation will occur in all
#' the other columns.
#' By default, no columns are excluded (all columns
#' are used for wildcard evaluation).
#' You may use \code{include} or \code{exclude} (or neither),
#' but not both.
#' @examples
#' myths <- data.frame(
#' myth = c('Bigfoot', 'UFO', 'Loch Ness Monster'),
#' claim = c('various', 'day', 'day'),
#' note = c('various', 'pictures', 'reported day'))
#' wildcard(myths, wildcard = 'day', values = c('today', 'yesterday'))
#' wildcard(myths, wildcard = 'day', values = c('today', 'yesterday'),
#' expand = FALSE)
#' locations <- data.frame(
#' myth = c('Bigfoot', 'UFO', 'Loch Ness Monster'),
#' origin = 'where')
#' rules <- list(
#' where = c('North America', 'various', 'Scotland'),
#' UFO = c('spaceship', 'saucer'))
#' wildcard(locations, rules = rules, expand = c(FALSE, TRUE))
#' numbers <- data.frame(x = 4, y = 3, z = 4444, w = 4.434)
#' wildcard(numbers, wildcard = 4, values = 7)
#' # Inclusion and exclusion
#' wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
#' include = "claim")
#' wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
#' exclude = c("claim", "note"))
#' # Wildcards should not also be replacement values.
#' # Otherwise, the output will be strange
#' # and will depend on the order of the wildcards.
#' \dontrun{
#' df <- data.frame(x = "a", y = "b")
#' rules <- list(a = letters[1:3], b = LETTERS[1:3])
#' wildcard(df, rules = rules)
#' }
wildcard <- function(df, rules = NULL, wildcard = NULL,
values = NULL, expand = TRUE, include = NULL, exclude = NULL) {
df <- as.data.frame(df)
check_df(df)
include <- process_include(df = df, include = include, exclude = exclude)
exclude <- NULL
stopifnot(is.logical(expand))
if (!is.null(rules))
return(wildcards(df = df, rules = rules, expand = expand,
include = include, exclude = exclude))
df <- nofactors(df)
if (is.null(wildcard) | is.null(values))
return(df)
matches <- get_matches(df = df, wildcard = wildcard,
include = include)
if (!any(matches))
return(df)
major <- unique_random_string(colnames(df))
minor <- unique_random_string(c(colnames(df), major))
df[[major]] <- df[[minor]] <- seq_len(nrow(df))
matching <- df[matches, ]
if (expand)
matching <- expandrows(matching, n = length(values))
true_cols <- setdiff(colnames(matching), c(major, minor)) %>%
intersect(y = include)
if (length(true_cols)){
matching[, true_cols] <- lapply(matching[, true_cols,
drop = FALSE], gsub_multiple, pattern = wildcard,
replacement = values) %>% as.data.frame(stringsAsFactors = FALSE)
}
rownames(df) <- rownames(matching) <- NULL
matching[[minor]] <- seq_len(nrow(matching))
out <- rbind(matching, df[!matches, ])
out <- out[order(out[[major]], out[[minor]]), ]
out[[major]] <- out[[minor]] <- NULL
rownames(out) <- NULL
out
}
#' @title Function \code{expandrows}
#' @description Expand the rows of a data frame.
#' Copied and modified from \code{remakeGenerator::expand()} under GPL>=3:
#' \url{https://github.com/wlandau/remakeGenerator}
#' @export
#' @seealso \code{\link{wildcard}}
#' @param df data frame
#' @param n number of duplicates per row
#' @param type character scalar. If \code{'each'},
#' rows will be duplicated in place.
#' If \code{'times'}, the data frame itself will be repeated \code{n} times.
#' @examples
#' df <- data.frame(
#' ID = c('24601', 'Javert', 'Fantine'),
#' fate = c('fulfillment', 'confusion', 'misfortune'))
#' expandrows(df, n = 2, type = 'each')
#' expandrows(df, n = 2, type = 'times')
expandrows <- function(df, n = 2, type = c("each", "times")) {
if (n < 2)
return(df)
nrows <- nrow(df)
type <- match.arg(type)
if (type == "each")
i <- rep(seq_len(nrows), each = n)
else
i <- rep(seq_len(nrows), times = n)
df <- df[i, ]
rownames(df) <- NULL
df
}
#' @title Function \code{nofactors}
#' @description Turn all the factors of a data frame into characters.
#' @export
#' @seealso \code{\link{wildcard}}
#' @param df data frame
#' @examples
#' class(iris$Species)
#' str(iris)
#' out <- nofactors(iris)
#' class(out$Species)
#' str(out)
nofactors <- function(df) {
lapply(df, factor2character) %>%
as.data.frame(stringsAsFactors = FALSE)
}
|
/scratch/gouwar.j/cran-all/cranData/wildcard/R/ui.R
|
# Copied from the remakeGenerator package under GPL>=3:
# \url{https://github.com/wlandau/remakeGenerator}
wildcards <- function(df, rules = NULL, expand = TRUE,
include = NULL, exclude = NULL) {
if (!length(rules))
return(nofactors(df))
check_rules(rules)
stopifnot(is.list(rules))
stopifnot(is.logical(expand))
expand <- rep(expand, length.out = length(rules))
for (index in seq_len(length(rules)))
df <- wildcard(
df,
wildcard = names(rules)[index], values = rules[[index]],
expand = expand[index],
include = include,
exclude = exclude)
df
}
check_df <- function(df){
dm <- dim(df)
good <- length(dm) == 2 & all(dm > 0)
if (!good){
stop("df must have two dimensions and must be nonempty.")
}
}
check_rules <- function(rules){
wildcards <- names(rules)
values <- unlist(rules) %>%
unique %>%
unname
if (length(intersect(wildcards, values)))
warning(
"In `rules`, some wildcards are also replacement values.\n",
"The returned data frame may be different than you expect,\n",
"and it may depend on the order of the wildcards in `rules`.")
}
factor2character <- function(x) {
if (is.factor(x))
x <- as.character(x)
x
}
get_matches <- function(df, wildcard, include) {
lapply(df[, include, drop = FALSE], matches_col, wildcard = wildcard) %>%
Reduce(f = "|")
}
gsub_multiple <- function(pattern, replacement, x) {
i <- grepl(pattern, x)
if (!sum(i))
return(x)
replacement <- rep(replacement, length.out = sum(i))
x[i] <- Vectorize(function(pattern, replacement, x) {
gsub(pattern = pattern, replacement = replacement,
x = x, fixed = TRUE)
},
c("x", "replacement"))(pattern, replacement, x[i])
x
}
matches_col <- function(x, wildcard) {
grepl(wildcard, x, fixed = TRUE)
}
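# Illustrative sketch (not part of the original source): gsub_multiple()
# recycles the replacement values over only those elements that contain the
# pattern, which is how wildcard() substitutes a different value into each
# expanded row. `.sketch_gsub_multiple()` is a hypothetical helper added only
# for illustration.
.sketch_gsub_multiple <- function() {
  gsub_multiple(pattern = "day", replacement = c("today", "yesterday"),
                x = c("day", "none", "reported day"))
  # -> c("today", "none", "reported yesterday")
}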
process_include <- function(df, include, exclude){
i <- !is.null(include)
e <- !is.null(exclude)
columns <- colnames(df)
if (i & e){
stop("You may specify include or exclude, but not both.")
} else if (!i & !e){
colnames(df)
} else if (e){
setdiff(columns, exclude)
} else {
intersect(columns, include)
}
}
unique_random_string <- function(exclude = NULL, n = 30) {
while ((out <- stri_rand_strings(1, n)) %in% exclude) # nolint
next
out
}
|
/scratch/gouwar.j/cran-all/cranData/wildcard/R/utils.R
|
## ----wildcard------------------------------------------------------------
library(wildcard)
myths <- data.frame(
myth = c("Bigfoot", "UFO", "Loch Ness Monster"),
claim = c("various", "day", "day"),
note = c("various", "pictures", "reported day"))
myths
wildcard(myths, wildcard = "day", values = c("today", "yesterday"))
wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
expand = FALSE)
wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
include = "claim")
wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
exclude = c("claim", "note"))
locations <- data.frame(
myth = c("Bigfoot", "UFO", "Loch Ness Monster"),
origin = "where")
rules <- list(
where = c("North America", "various", "Scotland"),
UFO = c("spaceship", "saucer"))
wildcard(locations, rules = rules, expand = c(FALSE, TRUE))
numbers <- data.frame(x = 4, y = 3, z = 4444, w = 4.434)
wildcard(numbers, wildcard = 4, values = 7)
## ----expandrows----------------------------------------------------------
df <- data.frame(
ID = c("24601", "Javert", "Fantine"),
fate = c("fulfillment", "confusion", "misfortune"))
expandrows(df, n = 2, type = "each")
expandrows(df, n = 2, type = "times")
## ----nofactors-----------------------------------------------------------
class(iris$Species)
str(iris)
out <- nofactors(iris)
class(out$Species)
str(out)
|
/scratch/gouwar.j/cran-all/cranData/wildcard/inst/doc/wildcard.R
|
---
title: "Wildcards for data frames"
author: "William Michael Landau"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{wildcard}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---

The wildcard package is a templating mechanism for data frames. Wildcards are placeholders for text, and you can evaluate them to generate new data frames from templates. The functionality is straightforward.
## `wildcard()`
```{r wildcard}
library(wildcard)
myths <- data.frame(
myth = c("Bigfoot", "UFO", "Loch Ness Monster"),
claim = c("various", "day", "day"),
note = c("various", "pictures", "reported day"))
myths
wildcard(myths, wildcard = "day", values = c("today", "yesterday"))
wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
expand = FALSE)
wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
include = "claim")
wildcard(myths, wildcard = "day", values = c("today", "yesterday"),
exclude = c("claim", "note"))
locations <- data.frame(
myth = c("Bigfoot", "UFO", "Loch Ness Monster"),
origin = "where")
rules <- list(
where = c("North America", "various", "Scotland"),
UFO = c("spaceship", "saucer"))
wildcard(locations, rules = rules, expand = c(FALSE, TRUE))
numbers <- data.frame(x = 4, y = 3, z = 4444, w = 4.434)
wildcard(numbers, wildcard = 4, values = 7)
```
## `expandrows()`
```{r expandrows}
df <- data.frame(
ID = c("24601", "Javert", "Fantine"),
fate = c("fulfillment", "confusion", "misfortune"))
expandrows(df, n = 2, type = "each")
expandrows(df, n = 2, type = "times")
```
## `nofactors()`
```{r nofactors}
class(iris$Species)
str(iris)
out <- nofactors(iris)
class(out$Species)
str(out)
```
## Troubleshooting
You can submit questions, bug reports, and feature requests to the [issue tracker](https://github.com/wlandau/wildcard/issues). Please take care to search for duplicates first, even among the closed issues.
## A cautionary note
Be sure that wildcards are not also replacement values.
```r
df <- data.frame(x = "a", y = "b")
rules <- list(a = letters[1:3], b = LETTERS[1:3])
wildcard(df, rules = rules)
```
```r
## x y
## 1 a A
## 2 a B
## 3 a C
## 4 A A
## 5 B B
## 6 C C
## 7 c A
## 8 c B
## 9 c C
## Warning message:
## In check_rules(rules) :
## In `rules`, some wildcards are also replacement values.
## The returned data frame may be different than you expect,
## and it may depend on the order of the wildcards in `rules`.
```
|
/scratch/gouwar.j/cran-all/cranData/wildcard/inst/doc/wildcard.Rmd
|