---
title: "Analyzing work loop experiments in workloopR"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Analyzing work loop experiments in workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The function `analyze_workloop()` in `workloopR` allows users to evaluate the mechanical work and power output of a muscle they have investigated through work loop experiments.
To demonstrate `analyze_workloop()`, we will first load `workloopR` and use example data provided with the package. We'll also load a couple packages within the `tidyverse` to help with data wrangling and plotting.
## Load packages and data
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
library(dplyr)
```
## Visualize
We'll now import the `workloop.ddf` file included with `workloopR`. Because this experiment involved using a gear ratio of 2, we'll use `fix_GR()` to also implement this correction.
Ultimately, an object of classes `workloop`, `muscle_stim`, and `data.frame` is produced. `muscle_stim` objects are used throughout `workloopR` to help with data formatting and error checking across functions. Additionally, setting the class to `workloop` allows our functions to understand that the data have properties that other experiment types (twitch, tetanus) do not.
```{r data_import}
## The file workloop.ddf is included and therefore can be accessed via
## system.file("subdirectory", "file_name", package = "package_name"). We'll then use
## read_ddf() to import it, creating an object of class "muscle_stim".
## fix_GR() multiplies Force by 2 and divides Position by 2
workloop_dat <-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(GR = 2)
summary(workloop_dat)
```
Running `summary()` on a `muscle_stim` object shows a handy summary of file properties, data, and experimental parameters.
Let's plot Time against Force, Position, and Stimulus (Stim) to visualize the time course of the work loop experiment.
To get them all plotted in the same figure, we'll transform the data as they are being plotted. Please note that this is for aesthetic purposes only - the underlying data will not be changed after the plotting is complete.
```{r initial_plot}
scale_position_to_force <- 3000
workloop_dat %>%
# Set the x axis for the whole plot
ggplot(aes(x = Time)) +
# Add a line for force
geom_line(aes(y = Force, color = "Force"),
lwd = 1) +
# Add a line for Position, scaled to approximately the same range as Force
geom_line(aes(y = Position * scale_position_to_force, color = "Position")) +
# For stim, we only want to plot where stimulation happens, so we filter the data
geom_point(aes(y = 0, color = "Stim"), size = 1,
data = filter(workloop_dat, Stim == 1)) +
# Next we add the second y-axis with the corrected units
scale_y_continuous(sec.axis = sec_axis(~ . / scale_position_to_force, name = "Position (mm)")) +
# Finally set colours, labels, and themes
scale_color_manual(values = c("#FC4E2A", "#4292C6", "#373737")) +
labs(y = "Force (mN)", x = "Time (secs)", color = "Parameter:") +
ggtitle("Time course of \n work loop experiment") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
There's a lot to digest here. The blue trace shows the change in length of the muscle via cyclical, sinusoidal changes to Position. The dark gray Stim dots show stimulation on an off vs. on basis. Stimulus onset is close to when the muscle is at L0, and the stimulator zapped the muscle four times in pulses of 0.2 ms width at 300 Hz. The resulting force development is shown in red. These cycles of length change and stimulation occurred a total of 6 times (measuring L0-to-L0).
## Select cycles
We are now ready to run the `select_cycles()` function. This function subsets the data and labels each cycle in prep for our `analyze_workloop()` function.
In many cases, researchers are interested in using the final 3 cycles for analyses. Accordingly, we'll set the `keep_cycles` parameter to `4:6`.
One thing to pay heed to is the cycle definition, encoded as `cycle_def` within the arguments of `select_cycles()`. There are three options for how cycles can be defined, named based on the starting (and ending) points of the cycle. We'll use the L0-to-L0 option, which is encoded as `lo`.
The function internally performs Butterworth filtering of the Position data via `signal::butter()`. This is because Position data are often noisy, which makes assessing true peak values difficult. The default values of `bworth_order = 2` and `bworth_freq = 0.05` work well in most cases, but we recommend plotting your data and assessing this yourself.
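Should you need to adjust the filtering, both settings can be passed directly to `select_cycles()`; here is a minimal, not-run sketch that simply restates the defaults:
```{r select_cycles_filter_sketch, eval=FALSE}
## Not run: pass custom Butterworth filter settings to select_cycles()
workloop_dat %>%
  select_cycles(cycle_def = "lo", keep_cycles = 4:6,
                bworth_order = 2, bworth_freq = 0.05)
```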
We will keep things straightforward for now so that we can proceed to the analytical stage. Please see the final section of this vignette for more details on using `select_cycles()`.
```{r select_cycles}
## Select cycles
workloop_selected <-
workloop_dat %>%
select_cycles(cycle_def="lo", keep_cycles = 4:6)
summary(workloop_selected)
attr(workloop_selected, "retained_cycles")
```
The `summary()` function now reflects that 3 cycles of the original 6 have been retained, and getting the `"retained_cycles"` attribute shows that these cycles are 4, 5, and 6 from the original data.
To avoid confusion in numbering schemes between the original data and the new object, once `select_cycles()` has been used we label cycles by letter. So, cycle 4 is now "a", 5 is "b" and 6 is "c".
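We can verify this relabeling by checking the `Cycle` column directly:
```{r check_cycle_labels}
## the retained cycles are now labeled by letter
unique(workloop_selected$Cycle)
```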
## Plot the work loop cycles
```{r work_loop_fig}
workloop_selected %>%
ggplot(aes(x = Position, y = Force)) +
geom_point(size=0.3) +
labs(y = "Force (mN)", x = "Position (mm)") +
ggtitle("Work loop") +
theme_bw()
```
## Basics of `analyze_workloop()`
Now we're ready to use `analyze_workloop()`.
Again, running `select_cycles()` beforehand was necessary, so we will switch to using `workloop_selected` as our data object.
Within `analyze_workloop()`, the `GR =` option allows the gear ratio to be corrected if it hasn't been already. Because we already ran `fix_GR()` to correct the gear ratio to 2, we do not need to correct it again here. So, for this argument, we will use `GR = 1`, which keeps the data as they are. Please take care to ensure that you do not overcorrect for gear ratio by setting it multiple times; doing so induces multiplicative changes. E.g. setting `GR = 3` on an object and then setting `GR = 3` again produces a gear ratio correction of 9.
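To make that compounding behavior concrete, here is a small, not-run sketch:
```{r gear_ratio_compounding, eval=FALSE}
## Not run: applying a gear ratio correction twice compounds it multiplicatively
wl_gr3 <- fix_GR(workloop_dat, GR = 3) # Force x 3, Position x (1/3)
wl_gr9 <- fix_GR(wl_gr3, GR = 3)       # net correction is now 3 * 3 = 9
```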
### Using the default `simplify = FALSE` version
The argument `simplify = ` affects the output of the `analyze_workloop()` function. We'll first take a look at the organization of the "full" version, i.e. keeping the default `simplify = FALSE`.
```{r analyze_workloop}
## Run the analyze_workloop() function
workloop_analyzed <-
workloop_selected %>%
analyze_workloop(GR = 1)
## Produces a list of objects.
## The print method gives a simple output:
workloop_analyzed
## How is the list organized?
names(workloop_analyzed)
```
This produces an `analyzed_workloop` object that is essentially a `list` that is organized by cycle. Within each of these, time-course data are stored as a `data.frame` and important metadata are stored as attributes.
Users may typically want work and net power from each cycle. Within the `analyzed_workloop` object, these two values are stored as attributes: `"work"` (in J) and `"net_power"` (in W). To get them for a specific cycle:
```{r metrics_for_cycle}
## What is work for the second cycle?
attr(workloop_analyzed$cycle_b, "work")
## What is net power for the third cycle?
attr(workloop_analyzed$cycle_c, "net_power")
```
To see how e.g. the first cycle is organized:
```{r cycle_a_organization}
str(workloop_analyzed$cycle_a)
```
Within each cycle's `data.frame`, the usual `Time`, `Position`, `Force`, and `Stim` are stored. `Cycle`, added via `select_cycles()`, denotes cycle identity and `Percent_of_Cycle` displays time as a percentage of that particular cycle.
`analyze_workloop()` also computes instantaneous velocity (`Inst_Velocity`), which can sometimes be noisy, so a Butterworth filter is also applied to this velocity (`Filt_Velocity`). See the function's help file for more details on how to tweak filtering. The time course of power (instantaneous power) is also provided as `Inst_Power`.
Each of these variables can be plotted against Time to see the time course of that variable's change over the cycle. For example, we will plot instantaneous power in cycle b:
```{r instant_power_plot}
workloop_analyzed$cycle_b %>%
ggplot(aes(x = Percent_of_Cycle, y = Inst_Power)) +
geom_line(lwd = 1) +
labs(y = "Instantaneous Power (W)", x = "Percent cycle") +
ggtitle("Instantaneous power \n during cycle b") +
theme_bw()
```
### Setting `simplify = TRUE` in the `analyze_workloop()` function
If you simply want work and net power for each cycle without retaining any of the time-course data, set `simplify = TRUE` within `analyze_workloop()`.
```{r simplify_TRUE}
workloop_analyzed_simple <-
workloop_selected %>%
analyze_workloop(GR = 1, simplify = TRUE)
## Produces a simple data.frame:
workloop_analyzed_simple
str(workloop_analyzed_simple)
```
Here, work (in J) and net power (in W) are simply returned in a `data.frame` that is organized by cycle. No other attributes are stored.
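Because the simplified output is an ordinary `data.frame`, summarizing across cycles is straightforward. For example, a quick sketch that averages the numeric columns (work and net power) over the three retained cycles:
```{r mean_work_and_power}
workloop_analyzed_simple %>%
  summarise(across(where(is.numeric), mean))
```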
## More on cycle definitions in `select_cycles()`
As noted above, there are three options for cycle definitions within `select_cycles()`, encoded as `cycle_def`. The three options for how cycles can be defined are named based on the starting (and ending) points of the cycle: L0-to-L0 (`lo`), peak-to-peak (`p2p`), and trough-to-trough (`t2t`).
We highly recommend that you plot your Position data after using `select_cycles()`. The `pracma::findpeaks()` function works for most data (especially sine waves), but it is conceivable that small, local 'peaks' may be misinterpreted as a cycle's true minimum or maximum.
We also note that edge cases (i.e. the first or final cycle) may be subject to issues in which cycles are not well defined by an automated algorithm.
Below, we will plot a couple case examples to show what we generally expect. We recommend plotting your data in a similar fashion to verify that `select_cycles()` is behaving in the way you expect.
```{r select_cycles_definitions}
## Select cycles 4:6 using lo
workloop_dat %>%
select_cycles(cycle_def="lo", keep_cycles = 4:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## Select cycles 4:6 using p2p
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 4:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## here we see that via 'p2p' the final cycle is ill-defined because the return
## to L0 is considered a cycle. Using a p2p definition, what we actually want is
## to use cycles 3:5 to get the final 3 full cycles:
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 3:5) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
## this difficulty in defining cycles may be more apparent by first plotting the
## cycles 1:6, e.g.
workloop_dat %>%
select_cycles(cycle_def="p2p", keep_cycles = 1:6) %>%
ggplot(aes(x = Time, y = Position)) +
geom_line() +
theme_bw()
```
/scratch/gouwar.j/cran-all/cranData/workloopR/vignettes/Analyzing-workloops.Rmd
---
title: "Working with isometric experiments in workloopR"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Working with isometric experiments in workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
The `workloopR` package also provides a function that can calculate the timing and magnitude of force during isometric experiments (twitch, tetanus) via the `isometric_timing()` function.
To demonstrate, we will first load `workloopR` and use example data provided with the package. We'll also load a couple packages within the `tidyverse` to help with data wrangling and plotting.
## Load packages and data
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
```
## Visualize
We'll now import the `twitch.ddf` file included with `workloopR`.
```{r data_import}
## The file twitch.ddf is included and therefore can be accessed via
## system.file("subdirectory", "file_name", package = "package_name"). We'll then use
## read_ddf() to import it, creating an object of class "muscle_stim".
twitch_dat <-
system.file(
"extdata",
"twitch.ddf",
package = 'workloopR') %>%
read_ddf()
```
Let's plot Force vs. Time to visualize the time course of force development and relaxation.
```{r initial_plot}
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line() +
ylab("Force (mN)") +
xlab("Time (sec)") +
theme_minimal()
```
This plot reveals that the final row of the data has Force = 0 and is likely an artifact. We can also see that the most salient parts of the twitch occur between ~ 0.075 and ~ 0.2 seconds.
We'll just re-plot the salient parts of the twitch by setting new limits on the axes via `ggplot2::xlim()` and `ggplot2::ylim()`. Please note that this will not change any analyses - we are simply doing it for ease of visualizing patterns.
```{r data_cleaning}
## Re-plot
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line(lwd = 1) +
xlim(0.075, 0.2) +
ylim(200, 450) +
xlab("Time (sec)") +
ylab("Force (mN)") +
theme_minimal()
```
Looks better!
## Basics of `isometric_timing()`
Now we're ready to use `isometric_timing()`.
```{r twitch_analysis}
## Run the isometric_timing() function
twitch_analyzed <-
twitch_dat %>%
isometric_timing()
twitch_analyzed
```
The function returns a new `data.frame` that provides information about the timing and magnitude of force at various intervals within the twitch. All returned values are absolute; in other words, time is measured from the beginning of the file and forces are returned in their actual magnitudes.
The first five columns of this `data.frame` are fixed. They will return (in this order): 1) the ID of the file, 2) the time at which stimulation occurs, 3) magnitude of force when stimulation occurs, 4) time at which peak force occurs, and 5) magnitude of peak force.
The function also provides data that help describe the rising and the relaxation phases of the twitch at certain "set points". By default, in the rising phase the set points are at 10% and at 90% of peak force development. Timing and force magnitudes at these points are returned as columns in the `data.frame`. And for the relaxation phase, the time and magnitude of force when force has relaxed to 90% and 50% of peak force are given.
The user has some flexibility in specifying how data are grabbed from the rising and relaxation phases. There are two arguments: `rising = c()` and `relaxing = c()`. Each of these arguments can be filled with a numeric vector of any length. Within the vector, each of these "set points" must be a value between 0 and 100, signifying the % of peak force development that is to be described.
For example, if we'd like to describe the rising phase at six points (e.g. 5%, 10%, 25%, 50%, 75%, and 95% of peak force development):
```{r twitch_rising_custom}
## Supply a custom set of force development set points via `rising`
twitch_rising_custom <-
twitch_dat %>%
isometric_timing(rising = c(5, 10, 25, 50, 75, 95))
## The returned `data.frame` contains the timing and force magnitudes
## of these set points in the "..._rising_..." columns
twitch_rising_custom
```
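The relaxation set points can be customized in the same way via the `relaxing` argument. For example, to describe the relaxation phase at 90%, 75%, and 50% of peak force:
```{r twitch_relaxing_custom}
## Supply a custom set of relaxation set points
twitch_relaxing_custom <-
  twitch_dat %>%
  isometric_timing(relaxing = c(90, 75, 50))
## The "..._relaxing_..." columns hold the timing and force at these set points
twitch_relaxing_custom
```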
### Tetanus trials
The `isometric_timing()` function can also work on `tetanus` objects that have been imported via `read_ddf()`. When a `tetanus` object is used, the relaxation set points are automatically set to `relaxing = c()`, which means this argument produces nothing. Instead, the timing & magnitude of force at stimulation, at peak force, and at specified points of the rising phase are returned; the idea of 'relaxation' is simply ignored.
To demonstrate, we'll use an example tetanus trial included in `workloopR`:
```{r tetanus}
tetanus_analyzed <-
system.file(
"extdata",
"tetanus.ddf",
package = 'workloopR') %>%
read_ddf() %>%
isometric_timing(rising = c(25, 50, 75))
tetanus_analyzed
```
## Computing intervals
The returned `data.frame` provides all timing and force magnitudes in absolute terms, i.e. time since the start of the file and actual force magnitudes. Often, we'd like to report characteristics of the twitch as intervals.
To calculate, e.g. the interval between stimulation and peak force (often reported as "time to peak force"):
```{r twitch_intervals}
## Time to peak force from stimulation
twitch_analyzed$time_peak - twitch_analyzed$time_stim
```
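Other intervals can be computed in the same way. For example, the time taken for force to relax from 90% to 50% of peak:
```{r relaxation_interval}
## Time to relax from 90% to 50% of peak force
twitch_analyzed$time_relaxing_50 - twitch_analyzed$time_relaxing_90
```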
## Annotate the twitch plot
It is also good to plot some of these metrics and see if they pass the eye-test.
We'll use our analyzed twitch and colors drawn from the `viridis` palette to add dots at key points.
```{r annotated_plot}
## Create a color palette
## Generated using `viridis::viridis(6)`
## We use hard-coded values here just to avoid extra dependencies
colz <- c("#440154FF","#414487FF","#2A788EFF",
"#22A884FF","#7AD151FF","#FDE725FF")
twitch_dat %>%
ggplot(aes(x = Time, y = Force)) +
geom_line(lwd = 1) +
xlim(0.075, 0.2) +
ylim(200, 450) +
xlab("Time (sec)") +
ylab("Force (mN)") +
geom_point(x = twitch_analyzed$time_stim,
y = twitch_analyzed$force_stim,
color = colz[1], size = 3) +
geom_point(x = twitch_analyzed$time_peak,
y = twitch_analyzed$force_peak,
color = colz[4], size = 3) +
geom_point(x = twitch_analyzed$time_rising_10,
y = twitch_analyzed$force_rising_10,
color = colz[2], size = 3) +
geom_point(x = twitch_analyzed$time_rising_90,
y = twitch_analyzed$force_rising_90,
color = colz[3], size = 3) +
geom_point(x = twitch_analyzed$time_relaxing_90,
y = twitch_analyzed$force_relaxing_90,
color = colz[5], size = 3) +
geom_point(x = twitch_analyzed$time_relaxing_50,
y = twitch_analyzed$force_relaxing_50,
color = colz[6], size = 3) +
theme_minimal()
```
The plot now has dots added for each of the six time & force points that the function returns by default.
/scratch/gouwar.j/cran-all/cranData/workloopR/vignettes/Calculating-twitch-kinetics.Rmd
---
title: "Introduction to workloopR"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
## Welcome to workloopR
In this vignette, we'll provide an overview of core functions in `workloopR`. Other vignettes within the package give more details with respect to specific use-cases. Examples with code can also be found within each function's Help doc.
`workloopR` (pronounced "work looper") provides functions for the import, transformation, and analysis of muscle physiology experiments. As you may have guessed, the initial motivation was to provide functions to analyze work loop experiments in R, but we have expanded this goal to cover additional types of experiments that are often involved in work loop procedures. There are three currently supported experiment types: work loop, simple twitch, and tetanus.
## Analytical pipelines
To cut to the chase, `workloopR` offers the ability to import, transform, and then analyze a data file. For example, with a work loop file:
```{r a_p_single_file}
library(workloopR)
## import the workloop.ddf file included in workloopR
wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
package = 'workloopR'),
phase_from_peak = TRUE)
## select cycles 3 through 5 using a peak-to-peak definition
wl_selected <- select_cycles(wl_dat, cycle_def = "p2p", keep_cycles = 3:5)
## run the analysis function and get the full object
wl_analyzed <- analyze_workloop(wl_selected, GR = 2)
## for brevity, the print() method for this object produces a simple output
wl_analyzed
## but see the structure for the full output, e.g.
#str(wl_analyzed)
## or run the analysis but get the simplified version
wl_analyzed_simple <- analyze_workloop(wl_selected, simplify = TRUE, GR = 2)
wl_analyzed_simple
```
Batch processing of files within a directory (e.g. successive trials of an experiment) is also readily achieved:
```{r a_p_batch_files}
## batch read and analyze files included with workloopR
analyzed_wls <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
package = 'workloopR'),
cycle_def = "p2p",
keep_cycles = 2:4,
phase_from_peak = TRUE
)
## now summarize
summarized_wls <- summarize_wl_trials(analyzed_wls)
summarized_wls
```
Sections below will give more specific overviews.
## Data import
Data that are stored in .ddf format (e.g. generated by Aurora Scientific's Dynamic Muscle Control and Analysis Software) are easily imported via the function `read_ddf()`. Two additional all-in-one functions (`read_analyze_wl()` and `read_analyze_wl_dir()`) also import data and subsequently transform and analyze them. More on those functions later!
Importing via these functions generates objects of class `muscle_stim`, which are formatted to work nicely with `workloopR`'s core functions and help with error checking procedures throughout the package. `muscle_stim` objects are organized to store time-series data for Time, Position, Force, and Stimulation in a `data.frame` and also store core metadata and experimental parameters as Attributes.
We'll provide a quick example using data that are included within the package.
```{r data_import}
library(workloopR)
## import the workloop.ddf file included in workloopR
wl_dat <-read_ddf(system.file("extdata", "workloop.ddf",
package = 'workloopR'),
phase_from_peak = TRUE)
## muscle_stim objects have their own print() and summary() S3 methods
## for example:
summary(wl_dat) # some handy info about the imported file
## see the first few rows of data stored within
head(wl_dat)
```
### Attributes
Again, important object metadata and experimental parameters are stored as attributes. We make extensive use of attributes throughout the package and most functions will update at least one attribute after completion. So please see this feature of your `muscle_stim` objects for important info!
You can use `attributes` on an object itself (e.g. `attributes(wl_dat)`), but we'll avoid doing so because the printout can be pretty lengthy.
Instead, let's just look at a couple interesting ones.
```{r attributes}
## names(attributes(x)) gives all of the attributes' names
names(attributes(wl_dat))
## take a look at the stimulation protocol
attr(wl_dat, "protocol_table")
## at what frequency were cyclic changes to Position performed?
attr(wl_dat, "cycle_frequency")
## at what frequency were data recorded?
attr(wl_dat, "sample_frequency")
```
## Data from files that are not of .ddf format
Data that are read from other file formats can be constructed into `muscle_stim` objects via `as_muscle_stim()`. Should you need to do this, please refer to our vignette "Importing data from non .ddf sources" for an overview.
## Transformations and corrections to data
Prior to analyses, data can be transformed or corrected. Transformational functions include gear ratio correction (`fix_GR()`) and position inversion (`invert_position()`). The core idea behind these two functions is to correct issues related to data acquisition.
For example, to apply a gear ratio correction of 2:
```{r transformations}
## this multiples Force by 2
## and multiplies Position by (1/2)
wl_fixed <- fix_GR(wl_dat, GR = 2)
# quick check:
max(wl_fixed$Force)/max(wl_dat$Force) #5592.578 / 2796.289 = 2
max(wl_fixed$Position)/max(wl_dat$Position) #1.832262 / 3.664524 = 0.5
```
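`invert_position()` can be used in a similar way when the `Position` data were recorded with an inverted sign convention. A minimal, not-run sketch (we assume the object is simply passed in):
```{r invert_position_sketch, eval=FALSE}
## Not run: invert the sign of the Position data
wl_inverted <- invert_position(wl_dat)
```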
### A particularly important transformation - `select_cycles()`
Another 'transformational' function is `select_cycles()`, which subsets cycles within a work loop experiment. This is a necessary step prior to analyses of work loop data: data are labeled by cycle for use with `analyze_workloop()`.
## Data analytical functions
Core analytical functions include `analyze_workloop()` for work loop files and `isometric_timing()` for twitches. `analyze_workloop()` computes instantaneous velocity, net work, instantaneous power, and net power for work loop experiments on a per-cycle basis. `isometric_timing()` provides summarization of twitch kinetics.
To see more details about these functions, please refer to "Analyzing work loop experiments in workloopR" for work loop analyses and "Working with twitch files in workloopR" for twitches.
Some functions are readily available for batch processing of files. The `read_analyze_wl_dir()` function allows for the batch import, cycle selection, gear ratio correction, and ultimately work & power computation for all work loop experiment files within a specified directory. The `get_wl_metadata()` and `summarize_wl_trials()` functions organize scanned files by recency (according to their time of last modification: 'mtime') and then report work and power output in the order that trials were run.
This ultimately allows for the `time_correct()` function to correct for degradation of the muscle (according to power & work) over time, assuming that the first and final trials are identical in experimental parameters. If these parameters are not identical, we advise against using this function.
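As a rough, not-run sketch of how these pieces fit together (this assumes the output of `summarize_wl_trials()` can be passed straight to `time_correct()`):
```{r time_correct_sketch, eval=FALSE}
## Not run: batch import & analyze, summarize, then correct for degradation over time
analyzed_trials <- read_analyze_wl_dir(system.file("extdata/wl_duration_trials",
                                                   package = 'workloopR'),
                                       cycle_def = "p2p",
                                       keep_cycles = 2:4,
                                       phase_from_peak = TRUE)
summarized_trials <- summarize_wl_trials(analyzed_trials)
corrected_trials <- time_correct(summarized_trials)
```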
## Thanks for reading!
Please feel free to contact either Vikram or Shree with suggestions or code development requests. We are especially interested in expanding our data import functions to accommodate file types other than .ddf in future versions of `workloopR`.
/scratch/gouwar.j/cran-all/cranData/workloopR/vignettes/Introduction-to-workloopR.Rmd
---
title: "Plotting data in workloopR"
author: "Shreeram Senthivasan"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Plotting data in workloopR}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Let's take a look at plotting data stored in objects created by `workloopR`!
## Loading packages and data
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
library(purrr)
library(tidyr)
library(dplyr)
```
## Plotting `workloop` objects
### Working with single files
Let's start by visualizing the raw traces in our data files, specifically position, force, and stimulation over time.
```{r data_import}
workloop_dat<-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(2)
```
```{r raw_trace}
# To overlay position and force, we need them to be on comparable scales
# We will then use two y-axes to make the units clear
scale_position_to_force <- 3000
workloop_dat %>%
# Set the x axis for the whole plot
ggplot(aes(x = Time)) +
# Add a line for force
geom_line(aes(y = Force, color = "Force"),
lwd = 1) +
# Add a line for Position, scaled to approximately the same range as Force
geom_line(aes(y = Position * scale_position_to_force, color = "Position")) +
# For stim, we only want to plot where stimulation happens, so we filter the data
geom_point(aes(y = 0, color = "Stim"), size = 1,
data = filter(workloop_dat, Stim == 1)) +
# Next we add the second y-axis with the corrected units
scale_y_continuous(sec.axis = sec_axis(~ . / scale_position_to_force, name = "Position (mm)")) +
# Finally set colours, labels, and themes
scale_color_manual(values = c("#FC4E2A", "#4292C6", "#373737")) +
labs(y = "Force (mN)", x = "Time (secs)", color = "Parameter:") +
ggtitle("Time course of \n work loop experiment") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
Next, we would select cycles from the workloop in preparation for analysis. Before we do this, let's keep all the cycles and visualize how `select_cycles()` splits the data. Note that you can include 0 in the `keep_cycles` argument to include data that are categorized as being outside of a complete cycle. This assigns a single cycle label (a) to the data before and after the complete cycles.
```{r annotate_cycles, warning=FALSE}
# Let's calculate x and y positions to add labels for each cycle
workloop_dat<-
workloop_dat %>%
select_cycles('lo', 0:6)
label_dat<-
workloop_dat %>%
group_by(Cycle) %>%
summarize(
x = mean(Time)
) %>%
# And add another row for the incomplete cycles at the beginning
bind_rows(data.frame(
Cycle = 'a',
x = 0))
workloop_dat %>%
ggplot(aes(x = Time, y = Position, colour = Cycle)) +
geom_point(size=1) +
geom_text(aes(x, y=2.1, colour = Cycle, label = Cycle), data = label_dat) +
labs(y = "Position (mm)", x = "Time (secs)") +
ggtitle("Division of position\nby `select_cycles()`") +
theme_bw() +
theme(legend.position = "none")
```
Visualizing the cycles is highly recommended, in case noise before or after the experimental procedure is interpreted as a cycle.
Let's go ahead and use cycles 2 to 5 (labeled c-f in the previous plot). Note, however, that the cycle labels will be reassigned to a-d when we subset the data.
```{r select_cycles}
workloop_dat<-
workloop_dat %>%
select_cycles('p2p', 2:5)
```
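A quick check confirms that the retained cycles have been relabeled a through d:
```{r check_relabelled_cycles}
unique(workloop_dat$Cycle)
```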
Now let's plot some work loops!
```{r analyze_workloop}
# Let's start with a single cycle using colour to indicate time
workloop_dat %>%
filter(Cycle == 'a') %>%
ggplot(aes(x = Position, y = Force)) +
geom_path(aes(colour = Time)) +
labs(y = "Force (mN)", x = "Position (mm)", colour = "Time (sec)") +
ggtitle("Single work loop") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
# Now let's see how the work loop changes across cycles
# We can use arrows to indicate direction through time
workloop_dat %>%
ggplot(aes(x = Position, y = Force)) +
geom_path(aes(colour = Cycle), arrow=arrow()) +
labs(y = "Force (mN)", x = "Position (mm)", colour = "Cycle index") +
ggtitle("Work loops by cycle index") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
### Working with multiple files
Working with multiple files is a little trickier, as the data are stored in separate `data.frame`s organized into a list. The easiest way to deal with this is to add a column specifying the file id and concatenate the data together. Refer to the "Batch processing" vignette for more information on working with multiple files.
```{r multifile}
multi_workloop_dat<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_ddf_dir(phase_from_peak = TRUE) %>%
map(fix_GR, 2) %>%
map(select_cycles,'p2p', 4) %>%
map(analyze_workloop)
# summarize_wl_trials() provides a quick way to pull out most experimental parameters, etc.
multi_workloop_dat %>%
summarize_wl_trials %>%
ggplot(aes(Stimulus_Pulses, Mean_Power)) +
geom_point() +
labs(y = "Mean Power (W)", x = "Stim Duration (pulses)") +
ggtitle("Mean power over trial\nby stimulus duration") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
# Accessing the time course data requires more manipulation
multi_workloop_dat %>%
map(~ mutate(.x$cycle_a, stim_pulses = attr(.x, "stimulus_pulses"))) %>%
bind_rows %>%
ggplot(aes(Percent_of_Cycle, Inst_Power)) +
geom_path(aes(colour = as.factor(stim_pulses)))+
labs(y = "Power (W)", x = "Percent of Cycle", colour = "Stim Duration") +
ggtitle("Time course of instantaneous\npower by stimulus duration") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
## Plotting isometric objects
### Working with single files
One useful visualization with isometric data is annotating peak force and other timing points. With a single file and multiple set points, some manipulation is useful to make annotating a little cleaner.
```{r isometric_annotation}
twitch_dat<-
system.file(
"extdata",
"twitch.ddf",
package = 'workloopR') %>%
read_ddf() %>%
fix_GR(2)
# We now need to reshape the single row into three columns: a label for the point,
# an x value for the label (time), and a y value (force). See the `tidyr` package
# and associated vignettes for tips on reshaping data
label_dat<-
twitch_dat %>%
isometric_timing(c(10,90),50) %>%
gather(label, value) %>%
filter(label != 'file_id') %>%
separate(label, c("type", "identifier"), "_", extra="merge") %>%
spread(type,value)
label_dat$time<-as.numeric(label_dat$time)
label_dat$force<-as.numeric(label_dat$force)
ggplot() +
geom_line(aes(Time, Force), data = twitch_dat) +
geom_point(aes(time, force), data = label_dat) +
geom_text(aes(time, force, label = identifier), hjust=-0.15, data = label_dat) +
labs(y = "Force (mN)", x = "Time (sec)") +
ggtitle("Force development in a twitch trial") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
### Working with multiple files
We can also overlay data from multiple isometric trials to see how force evolves across trials. Please see the "Batch processing" vignette for more details on how to work with multiple files.
```{r iso_multi}
multi_twitch_dat<-
system.file(
"extdata/twitch_csv",
package = 'workloopR') %>%
list.files(full.names = TRUE) %>%
map(read.csv) %>%
map2(c("2mA","3mA","4mA","5mA"), ~as_muscle_stim(.x, type = 'twitch', file_id = .y))
# Next we want another data.frame of label data
multi_label_dat<-
multi_twitch_dat %>%
map_dfr(isometric_timing) %>%
select(file_id, ends_with("peak")) %>%
mutate(label = paste0(round(force_peak),"mN"))
# Once again we want the data in a single data.frame with a column for which trial it came from
multi_twitch_dat %>%
map_dfr(~mutate(.x, file_id = attr(.x, "file_id"))) %>%
ggplot(aes(x = Time, y = Force, colour = file_id)) +
geom_line() +
geom_text(aes(time_peak, force_peak, label = label), hjust=-0.7, data = multi_label_dat) +
labs(y = "Force (mN)", x = "Time (sec)", colour = "Stimulation Current") +
ggtitle("Force development across twitch trials") +
theme_bw() +
theme(legend.position = "bottom", legend.direction = "horizontal")
```
Please note that these twitch trials have differing values of initial force, so actual force developments are not identical to peak forces.
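If you want the actual force development for each trial, a quick sketch is to subtract the baseline force at stimulation from the peak force, using the columns returned by `isometric_timing()`:
```{r force_development}
multi_twitch_dat %>%
  map_dfr(isometric_timing) %>%
  mutate(force_development = force_peak - force_stim) %>%
  select(file_id, force_peak, force_development)
```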
/scratch/gouwar.j/cran-all/cranData/workloopR/vignettes/Plotting-workloopR.Rmd
---
title: "Batch processing"
author: "Shreeram Senthivasan"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Batch processing}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
Many of the functions in the `workloopR` package are built to facilitate batch processing of workloop and related data files. This vignette will start with an overview of how the functions were intended to be used for batch processing and then provide specific examples.
## Conceptual overview
We generally expect a single file to store data from a single experimental trial, whereas directories hold data from all the trials of a single experiment. Accordingly, the `muscle_stim` objects created and used by most of the `workloopR` functions are intended to hold data from a single trial of a workloop or related experiment. Lists are then used to package together trials from a single experiment. This also lends itself to using recursion to transform and analyze all data from a single experiment.
In broad strokes, there are three ways that batch processing has been worked into `workloopR` functions. First, some functions, like the `*_dir()` family of import functions and `summarize_wl_trials()`, specifically generate or require lists of `muscle_stim` objects. Second, the first argument of every other function is the object being manipulated, which helps clean up recursion using the `purrr::map()` family of functions. Finally, some functions return summarized data as single rows of a data.frame that can easily be bound together to generate a summary table.
## Load packages and data
This vignette will rely heavily on the `purrr::map()` family of functions for recursion, though it should be mentioned that the `base::apply()` family of functions would work as well.
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(purrr)
```
## Necessarily-multi-trial functions
### `*_dir()` functions
Both `read_ddf()` and `read_analyze_wl()` have alternatives suffixed by `_dir()` to read in multiple files from a directory. Both take a path to the directory and an optional regular expression to filter files by and return a list of `muscle_stim` objects or `analyzed_workloop` objects, respectively.
```{r}
workloop_trials_list<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_ddf_dir(phase_from_peak = TRUE)
workloop_trials_list[1:2]
```
The `sort_by` argument can be used to rearrange this list by any attribute of the read-in objects. By default, the objects are sorted by their modification time. Other arguments of `read_ddf()` and `read_analyze_wl()` can also be passed to their `*_dir()` alternatives as named arguments.
```{r}
analyzed_wl_list<-
system.file(
"extdata/wl_duration_trials",
package = 'workloopR') %>%
read_analyze_wl_dir(sort_by = 'file_id',
phase_from_peak = TRUE,
cycle_def = 'lo',
keep_cycles = 3)
analyzed_wl_list[1:2]
```
### Summarizing workloop trials
In a series of workloop trials, it can be useful to see how mean power and work change as you vary different experimental parameters. To facilitate this, `summarize_wl_trials()` specifically takes a list of `analyzed_workloop` objects and returns a `data.frame` of this information. We will explore ways of generating lists of analyzed workloops without using `read_analyze_wl_dir()` in the following section.
```{r}
analyzed_wl_list %>%
summarize_wl_trials
```
## Manual recursion examples
### Batch import for non-ddf data
One of the more realistic use cases for manual recursion is importing data from multiple related trials that are not stored in ddf format. As with importing individual non-ddf data sources, we start by reading the data into a data.frame, only now we want a list of data.frames. In this example, we will read in csv files and stitch them into a list using `purrr::map()`.
```{r}
non_ddf_list<-
# Generate a vector of file names
system.file(
"extdata/twitch_csv",
package = 'workloopR') %>%
list.files(full.names = TRUE) %>%
# Read into a list of data.frames
map(read.csv) %>%
# Coerce into muscle_stim objects (twitch trials)
map(as_muscle_stim, type = "twitch")
```
### Data transformation and analysis
Applying a constant transformation to a list of `muscle_stim` objects is fairly straightforward using `purrr::map()`.
```{r}
non_ddf_list<-
non_ddf_list %>%
map(~{
attr(.x,"stimulus_width")<-0.2
attr(.x,"stimulus_offset")<-0.1
return(.x)
}) %>%
map(fix_GR,2)
```
Applying a non-constant transformation like setting a unique file ID can be done using `purrr::map2()`.
```{r}
file_ids<-paste0("0",1:4,"-",2:5,"mA-twitch.csv")
non_ddf_list<-
non_ddf_list %>%
map2(file_ids, ~{
attr(.x,"file_id")<-.y
return(.x)
})
non_ddf_list
```
Analysis can similarly be run recursively. `isometric_timing()` in particular returns a single row of a data.frame with timings and forces for key points in an isometric dataset. Here we can use `purrr::map_dfr()` to bind the rows together for neatness.
```{r}
non_ddf_list %>%
map_dfr(isometric_timing)
```
/scratch/gouwar.j/cran-all/cranData/workloopR/vignettes/batch-processing.Rmd
---
title: "Importing data from non .ddf sources"
author: "Vikram B. Baliga"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Importing data from non .ddf sources}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
`workloopR`'s data import functions, such as `read_ddf()`, are generally geared towards importing data from .ddf files (e.g. those generated by Aurora Scientific's Dynamic Muscle Control and Analysis Software).
Should your data be stored in another file format, you can use the `as_muscle_stim()` function to generate your own `muscle_stim` objects. These `muscle_stim` objects are used by nearly all other `workloopR` functions and are formatted in a very specific way. This helps ensure that other functions can interpret data & metadata correctly and also perform internal checks.
## Load packages
Before running through anything, we'll ensure we have the packages we need.
```{r package_loading, message=FALSE, warning=FALSE}
library(workloopR)
library(magrittr)
library(ggplot2)
```
## Data
Because it is somewhat difficult to simulate muscle physiology data, we'll use one of our workloop files, deconstruct it, and then re-assemble the data via `as_muscle_stim()`.
```{r get_data}
## Load in the work loop example data from workloopR
workloop_dat <-
system.file(
"extdata",
"workloop.ddf",
package = 'workloopR') %>%
read_ddf(phase_from_peak = TRUE) %>%
fix_GR(GR = 2)
## First we'll extract Time
Time <- workloop_dat$Time
## Now Position
Position <- workloop_dat$Position
## Force
Force <- workloop_dat$Force
## Stimulation
Stim <- workloop_dat$Stim
## Put it all together as a data.frame
my_data <- data.frame(Time = Time,
Position = Position,
Force = Force,
Stim = Stim)
head(my_data)
```
## Assemble via `as_muscle_stim()`
It is absolutely crucial that the columns be named "Time", "Position", "Force", and "Stim" (all case-sensitive). Otherwise, `as_muscle_stim()` will not interpret data correctly.
At minimum, this `data.frame`, the type of experiment, and the frequency at which data were recorded (`sample_frequency`, as a numeric) are necessary for `as_muscle_stim()`.
```{r as_mus_basic}
## Put it together
my_muscle_stim <- as_muscle_stim(x = my_data,
type = "workloop",
sample_frequency = 10000)
## Data are stored in columns and basically behave as data.frames
head(my_muscle_stim)
ggplot(my_muscle_stim, aes(x = Time, y = Position)) +
geom_line() +
labs(y = "Position (mm)", x = "Time (secs)") +
ggtitle("Time course of length change") +
theme_bw()
```
### Attributes
By default, a couple attributes are auto-filled based on the available information, but it's pretty bare-bones:
```{r attributes}
str(attributes(my_muscle_stim))
```
We highly encourage you to add in as many of these details as possible by passing them in via the `...` argument. For example:
```{r add_file_id}
## This time, add the file's name via "file_id"
my_muscle_stim <- as_muscle_stim(x = my_data,
type = "workloop",
sample_frequency = 10000,
file_id = "workloop123")
## For simplicity, we'll just target the file_id attribute directly instead of
## printing all attributes again
attr(my_muscle_stim, "file_id")
```
### Possible attributes
Here is a list of all possible attributes that can be filled.
```{r}
names(attributes(workloop_dat))
```
To see how each should be formatted (e.g. which ones take numeric values vs. character vectors, etc.):
```{r}
str(attributes(workloop_dat))
```
## Thanks for reading!
Please feel free to contact either Vikram or Shree with suggestions or code development requests. We are especially interested in expanding our data import functions to accommodate file types other than .ddf in future versions of `workloopR`.
/scratch/gouwar.j/cran-all/cranData/workloopR/vignettes/non-ddf-sources.Rmd
#' Big 5 Euro League Season Stats
#'
#' Returns data frame of selected statistics for seasons of the big 5 Euro leagues, for either
#' whole team or individual players.
#' Multiple seasons can be passed to the function, but only one `stat_type` can be selected
#'
#' @param season_end_year the year(s) the season concludes
#' @param stat_type the type of team statistics the user requires
#' @param team_or_player result either summarised for each team, or individual players
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"standard"}, \emph{"shooting"}, \emph{"passing"}, \emph{"passing_types"},
#' \emph{"gca"}, \emph{"defense"}, \emph{"possession"}, \emph{"playing_time"},
#' \emph{"misc"}, \emph{"keepers"}, \emph{"keepers_adv"}
#'
#' @return returns a dataframe of a selected team or player statistic type for a selected season(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_big5_advanced_season_stats(season_end_year=2021,stat_type="possession",team_or_player="player")
#' })
#' }
fb_big5_advanced_season_stats <- function(season_end_year, stat_type, team_or_player, time_pause=3) {
stat_types <- c("standard", "shooting", "passing", "passing_types", "gca", "defense", "possession", "playing_time", "misc", "keepers", "keepers_adv")
if(!stat_type %in% stat_types) stop("check stat type")
# .pkg_message("Scraping {team_or_player} season '{stat_type}' stats")
main_url <- "https://fbref.com"
season_end_year_num <- season_end_year
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
seasons_urls <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Big 5 European Leagues")) %>%
dplyr::filter(season_end_year %in% season_end_year_num) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(seasons_urls) %>% unique()
time_wait <- time_pause
get_each_big5_stats_type <- function(season_url, time_pause=time_wait) {
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
pb$tick()
if(stat_type == "standard") {
stat_type <- "stats"
}
# fixes the change fbref made with the name of playing time
if(stat_type == "playing_time") {
stat_type <- "playingtime"
}
# fixes the change fbref made with the name of advanced keepers
if(stat_type == "keepers_adv") {
stat_type <- "keepersadv"
}
start_part <- sub('/[^/]*$', '', season_url)
end_part <- gsub(".*/", "", season_url)
if(team_or_player == "team") {
player_or_team_part <- "squads"
} else {
player_or_team_part <- "players"
}
stat_urls <- paste0(start_part, "/", stat_type, "/", player_or_team_part, "/", end_part)
team_page <- stat_urls %>%
.load_page()
if(team_or_player == "player") {
stat_df <- team_page %>%
rvest::html_nodes(".table_container") %>%
rvest::html_nodes("table") %>%
rvest::html_table() %>%
data.frame()
} else {
stat_df <- team_page %>%
rvest::html_nodes(".table_container") %>%
rvest::html_nodes("table")
stat_df_for <- stat_df[1] %>%
rvest::html_table() %>%
data.frame()
stat_df_against <- stat_df[2] %>%
rvest::html_table() %>%
data.frame()
stat_df <- rbind(stat_df_for, stat_df_against)
}
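# the first row of the scraped table holds the grouped header labels;
# combine these with the existing column names to build unique variable names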
var_names <- stat_df[1,] %>% as.character()
new_names <- paste(var_names, names(stat_df), sep = "_")
if(stat_type == "playingtime") {
new_names <- new_names %>%
gsub("\\..[[:digit:]]+", "", .) %>%
gsub("\\.[[:digit:]]+", "", .) %>%
gsub("_Var", "", .) %>%
gsub("# Pl", "Num_Players", .) %>%
gsub("%", "_percent", .) %>%
gsub("_Performance", "", .) %>%
# gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .) %>%
gsub("\\+", "plus", .)
} else {
new_names <- new_names %>%
gsub("\\..*", "", .) %>%
gsub("_Var", "", .) %>%
gsub("# Pl", "Num_Players", .) %>%
gsub("%", "_percent", .) %>%
gsub("_Performance", "", .) %>%
# gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .)
}
names(stat_df) <- new_names
stat_df <- stat_df[-1,]
urls <- team_page %>%
rvest::html_nodes(".table_container") %>%
rvest::html_nodes("table") %>%
rvest::html_nodes("tbody") %>%
rvest::html_nodes("tr") %>% rvest::html_node("td a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
# important here to change the order of when URLs are applied: if player, bind before filtering, otherwise bind after filtering
# to remove the NAs for the sub heading rows
if(team_or_player == "player") {
stat_df <- dplyr::bind_cols(stat_df, Url=urls)
stat_df <- stat_df %>%
dplyr::filter(.data[["Rk"]] != "Rk") %>%
dplyr::select(-.data[["Rk"]])
stat_df <- stat_df %>%
dplyr::select(-.data[["Matches"]])
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Squad"]], -.data[["Player"]], -.data[["Nation"]], -.data[["Pos"]], -.data[["Comp"]], -.data[["Age"]], -.data[["Url"]]) %>% names()
chr_vars_to_transform <- stat_df %>%
dplyr::select(.data[["Nation"]], .data[["Comp"]]) %>% names()
} else {
stat_df <- stat_df %>%
dplyr::filter(.data[["Rk"]] != "Rk") %>%
dplyr::select(-.data[["Rk"]])
stat_df <- dplyr::bind_cols(stat_df, Url=urls)
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Squad"]], -.data[["Comp"]], -.data[["Url"]]) %>% names()
chr_vars_to_transform <- stat_df %>%
dplyr::select(.data[["Comp"]]) %>% names()
}
stat_df <- stat_df %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric) %>%
dplyr::mutate_at(.vars = chr_vars_to_transform, .funs = function(x) {gsub("^\\S* ", "", x)})
stat_df <- stat_df %>%
dplyr::mutate(Team_or_Opponent = ifelse(!stringr::str_detect(.data[["Squad"]], "vs "), "team", "opponent")) %>%
dplyr::filter(.data[["Team_or_Opponent"]] == "team") %>%
dplyr::bind_rows(
stat_df %>%
dplyr::mutate(Team_or_Opponent = ifelse(!stringr::str_detect(.data[["Squad"]], "vs "), "team", "opponent")) %>%
dplyr::filter(.data[["Team_or_Opponent"]] == "opponent")
) %>%
dplyr::mutate(season_url = season_url,
Squad = gsub("vs ", "", .data[["Squad"]])) %>%
dplyr::select(.data[["season_url"]], .data[["Squad"]], .data[["Comp"]], .data[["Team_or_Opponent"]], dplyr::everything())
stat_df <- seasons %>%
dplyr::select(Season_End_Year=.data[["season_end_year"]], .data[["seasons_urls"]]) %>%
dplyr::left_join(stat_df, by = c("seasons_urls" = "season_url")) %>%
dplyr::select(-.data[["seasons_urls"]]) %>%
dplyr::filter(!is.na(.data[["Squad"]])) %>%
dplyr::arrange(.data[["Season_End_Year"]], .data[["Squad"]], dplyr::desc(.data[["Team_or_Opponent"]]))
if(team_or_player == "player") {
stat_df$Team_or_Opponent <- NULL
}
return(stat_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(seasons_urls))
all_stats_df <- seasons_urls %>%
purrr::map_df(get_each_big5_stats_type)
return(all_stats_df)
}
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_big5_advanced_season.R
#' Get player goal logs
#'
#' Returns the player's career goal and assist logs
#'
#' @param player_urls the URL(s) of the player(s)
#' @param time_pause the wait time (in seconds) between page loads
#' @param goals_or_assists select whether to return data of "goals" (the default), "assists", or "both"
#'
#' @return returns a dataframe of the player's goals and assists
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' # for single players:
#' jwp_url <- "https://fbref.com/en/players/3515d404/"
#' fb_player_goal_logs(player_urls = jwp_url, goals_or_assists = "goals")
#' })
#' }
fb_player_goal_logs <- function(player_urls, time_pause=3, goals_or_assists="goals") {
t <- time_pause
g_a <- goals_or_assists # one of "goals", "assists", or "both"
get_each_player_goal_log <- function(player_url, time_pause=t, goals_or_assists=g_a) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
page <- .load_page(player_url)
page_urls <- page %>% rvest::html_elements("#inner_nav .hoversmooth a")
# if there is only one competition the "Goal Logs" text is a url
# if there is more than one competition it is a drop down menu
# access from the "All Competitions" url instead
# (may be better ways to get the All Competitions url)
index <- ifelse(
!is.na(grep("Goal Logs", rvest::html_text(page_urls))[1]),
grep("Goal Logs", rvest::html_text(page_urls))[1],
grep("All Competitions", rvest::html_text(page_urls))[4]
)
goal_log_url <- page_urls[index] %>% rvest::html_attr("href") %>% unique() %>% paste0("https://fbref.com", .)
goal_log_page <- tryCatch(.load_page(goal_log_url), error = function(e) NA)
if(!is.na(goal_log_page)) {
# get goals, or an empty data frame
tab_box_goals <-
{
if(goals_or_assists %in% c("goals","both"))
tryCatch({goal_log_page %>% rvest::html_nodes(".stats_table") %>% .[1] %>% rvest::html_table()},
error=function(e) {tab_box_for <- NA})
else data.frame()
}
# get assists, or an empty data frame
tab_box_assists <-
{
if(goals_or_assists %in% c("assists","both"))
tryCatch({goal_log_page %>% rvest::html_nodes(".stats_table") %>% .[2] %>% rvest::html_table()},
error=function(e) {tab_box_for <- NA})
else data.frame()
}
# new column names
new_names <- c(Body_Part="Body.Part",Type1="Type",Type2="Type.1")
# turn html tables into data frames, force minute to be a character, and label the goals and assists
goals <-
tab_box_goals %>%
data.frame() %>%
dplyr::mutate(dplyr::across(tidyselect::contains("Minute"), as.character)) %>%
dplyr::mutate(Goal_or_Assist="goal")
assists <-
tab_box_assists %>%
data.frame() %>%
dplyr::mutate(dplyr::across(tidyselect::contains("Minute"), as.character)) %>%
dplyr::mutate(Goal_or_Assist="assist")
# bind the tables together
goals_and_assists <-
dplyr::bind_rows(goals,assists) %>%
dplyr::rename(tidyselect::any_of(new_names)) %>%
dplyr::filter(!is.na(.data[["Rk"]])) %>%
dplyr::arrange(dplyr::desc(.data[["Goal_or_Assist"]]),.data[["Rk"]])
return(goals_and_assists)
}
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(player_urls))
goals_map <- player_urls %>%
purrr::map_df(get_each_player_goal_log)
return(goals_map)
}
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_player_goal_logs.R
#' Get fbref Player Match Logs
#'
#' Returns all match logs for a selected player, season and stat type
#'
#' @param player_url the URL of the player (can come from fb_player_urls())
#' @param season_end_year the year the season(s) concludes
#' @param stat_type the type of statistic required
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"summary"}, \emph{"keepers"}, \emph{"passing"}, \emph{"passing_types"},
#' \emph{"gca"}, \emph{"defense"}, \emph{"possession"}, \emph{"misc"}
#'
#' @return returns a dataframe of a player's match logs for a season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' fb_player_match_logs("https://fbref.com/en/players/3bb7b8b4/Ederson",
#' season_end_year = 2021, stat_type = 'summary')
#' })
#' }
fb_player_match_logs <- function(player_url, season_end_year, stat_type, time_pause=3) {
stat_types <- c("summary", "keepers", "passing", "passing_types", "gca", "defense", "possession", "misc")
if(!stat_type %in% stat_types) stop("check stat type")
main_url <- "https://fbref.com"
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
player_page <- .load_page(player_url)
player_name <- player_page %>% rvest::html_node("h1") %>% rvest::html_text() %>% stringr::str_squish()
main_cats <- player_page %>% rvest::html_nodes("#inner_nav") %>% rvest::html_nodes(".full.hasmore")
span_names <- main_cats %>% rvest::html_nodes("span") %>% rvest::html_text()
main_cats <- main_cats[grep("Match Logs", span_names)]
match_log_names <- main_cats %>% rvest::html_nodes("p") %>% rvest::html_text()
match_log_names <- match_log_names %>% gsub("Match Logs \\(", "", .) %>% gsub("\\)", "", .)
log_level1 <- main_cats %>% rvest::html_nodes("ul")
# create a dataframe of all stat types, seasons and their URLs
# this is to combat the way the html is laid out on the site
all_logs <- data.frame()
for(i in 1:length(log_level1)) {
log_name <- match_log_names[i]
log_urls <- log_level1[i] %>% rvest::html_nodes("li") %>% rvest::html_nodes("a") %>% rvest::html_attr("href")
season <- log_level1[i] %>% rvest::html_nodes("li") %>% rvest::html_nodes("a") %>% rvest::html_text()
each_log_df <- cbind(log_name=log_name, log_urls=log_urls, season=season) %>% data.frame()
all_logs <- rbind(all_logs, each_log_df)
}
all_logs <- all_logs %>%
dplyr::mutate(season_end = gsub(".*-", "", .data[["season"]]),
stat = dplyr::case_when(
log_name == "Summary" ~ "summary",
log_name == "Goalkeeping" ~ "keepers",
log_name == "Passing" ~ "passing",
log_name == "Pass Types" ~ "passing_types",
log_name == "Goal and Shot Creation" ~ "gca",
log_name == "Defensive Actions" ~ "defense",
log_name == "Possession" ~ "possession",
log_name == "Miscellaneous Stats" ~ "misc",
TRUE ~ NA_character_
))
log_url <- all_logs %>%
dplyr::filter(.data[["stat"]] == stat_type,
.data[["season_end"]] == season_end_year) %>%
dplyr::pull(log_urls)
if(length(log_url) == 0) stop(glue::glue("check that stat_type '{stat_type}' and season ending {season_end_year} exist for this player"))
season <- all_logs %>%
dplyr::filter(.data[["stat"]] == stat_type,
.data[["season_end"]] == season_end_year) %>%
dplyr::pull(season)
# Get match logs for stat -------------------------------------------------
Sys.sleep(1)
stat_page <- .load_page(paste0(main_url, log_url))
tab <- stat_page %>% rvest::html_nodes(".table_container") %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame()
tab <- .clean_table_names(tab)
tab <- tab %>%
dplyr::filter(.data[["Date"]] != "") %>%
dplyr::mutate(Squad = sub("^.*?([A-Z])", "\\1", .data[["Squad"]]),
Opponent = sub("^.*?([A-Z])", "\\1", .data[["Opponent"]]),
Player = player_name,
Season = season) %>%
dplyr::select(.data[["Player"]], .data[["Season"]], dplyr::everything(), -.data[["Match Report"]])
non_num_vars <- c("Player", "Season", "Date", "Day", "Comp", "Round", "Venue", "Result", "Squad", "Opponent", "Start", "Pos")
cols_to_transform <- names(tab)[!names(tab) %in% non_num_vars]
suppressWarnings(
tab <- tab %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
)
return(tab)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_player_match_logs.R
|
#' Get fbref Full Player Scouting Report
#'
#' Returns the scouting report for a selected player
#'
#' @param player_url the URL of the player (can come from fb_player_urls())
#' @param pos_versus either "primary" or "secondary" as fbref offer comparisons against multiple positions
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe of a player's full scouting information for all seasons available on FBref
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_player_scouting_report(player_url = "https://fbref.com/en/players/d70ce98e/Lionel-Messi",
#' pos_versus = "primary")
#'
#' # to filter for the last 365 days:
#' fb_player_scouting_report(player_url = "https://fbref.com/en/players/d70ce98e/Lionel-Messi",
#' pos_versus = "primary") %>% dplyr::filter(scouting_period == "Last 365 Days")
#'
#' # to get secondary positions
#' fb_player_scouting_report(player_url = "https://fbref.com/en/players/d70ce98e/Lionel-Messi",
#' pos_versus = "secondary")
#'
#' # for the 2020-2021 La Liga season
#' fb_player_scouting_report(player_url = "https://fbref.com/en/players/d70ce98e/Lionel-Messi",
#' pos_versus = "secondary") %>% dplyr::filter(scouting_period == "2020-2021 La Liga")
#' })
#' }
fb_player_scouting_report <- function(player_url, pos_versus, time_pause=3) {
main_url <- "https://fbref.com"
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
player_page <- xml2::read_html(player_url)
player_name <- player_page %>% rvest::html_node("h1") %>% rvest::html_text() %>% stringr::str_squish()
main_cats <- player_page %>% rvest::html_nodes("#inner_nav") %>% rvest::html_nodes(".full.hasmore")
span_names <- main_cats %>% rvest::html_nodes("span") %>% rvest::html_text()
main_cats <- main_cats[grep("Scouting Report", span_names)]
# scout_level1_url <- main_cats %>% rvest::html_nodes("ul li") %>% .[1] %>%
# rvest::html_nodes("a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
scout_level1_url <- main_cats %>% rvest::html_nodes("ul li") %>%
rvest::html_nodes("a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
all_scout_pos <- data.frame()
for(each_scout_url in scout_level1_url) {
Sys.sleep(time_pause)
scout_pg <- xml2::read_html(each_scout_url)
period <- scout_pg %>% rvest::html_nodes("#all_scout") %>% rvest::html_nodes(".section_heading_text") %>% rvest::html_text() %>%
unique() %>% stringr::str_squish()
# .pkg_message("Scraping full scouting report for {player_name} for period: {period}")
outer <- scout_pg %>% rvest::html_nodes("#all_scout") %>% rvest::html_nodes(".filter.switcher") %>% rvest::html_nodes("div")
# if(pos_versus == "primary") {
# pos_versus <- 1
# tryCatch({versus <- outer %>% rvest::html_text() %>%
# stringr::str_squish() %>% .[1]}, error = function(e) {versus <- data.frame()})
# } else if (pos_versus == "secondary") {
# pos_versus <- 2
# tryCatch({versus <- outer %>% rvest::html_text() %>%
# stringr::str_squish() %>% .[2]}, error = function(e) {versus <- data.frame()})
# }
# else {
# stop(glue::glue("Select a correct 'pos_versus' value from either 'primary' or 'secondary'"))
# }
if(length(outer) == 1) {
pos_versus_idx <- 1
versus <- outer %>% rvest::html_text() %>%
stringr::str_squish()
} else if (length(outer) > 1) {
if(pos_versus != "primary") {
pos_versus_idx <- 2
} else {
pos_versus_idx <- 1
}
versus <- outer %>% rvest::html_text() %>%
stringr::str_squish() %>% .[pos_versus_idx]
} else {
stop(glue::glue("Full scouting report not available for {player_name}"))
}
scouting_all <- scout_pg %>% rvest::html_nodes("#all_scout") %>% rvest::html_nodes(".table_container")
scout_pos <- scouting_all[pos_versus_idx] %>%
rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame()
missing_idx <- scout_pos[,1] != ""
scout_pos <- scout_pos[missing_idx,]
names(scout_pos) <- scout_pos[1,]
scout_pos <- scout_pos %>%
dplyr::rename(Per90=.data$`Per 90`)
df <- data.frame(Statistic="Standard", Per90="Standard", Percentile="Standard")
scout_pos <- rbind(df, scout_pos)
scout_pos$stat_group <- NA_character_
stat_names <- scout_pos$Statistic
idx <- grep("Statistic", stat_names)
stat_vct <- c()
for(i in 1:length(idx)) {
id <- idx[i]-1
st <- stat_names[id]
tryCatch(scout_pos[c(id:(idx[i+1]-2)), "stat_group"] <- st, error = function(e) scout_pos[c(id:nrow(scout_pos)), "stat_group"] <- st)
}
scout_pos$stat_group[is.na(scout_pos$stat_group)] <- st
scout_pos <- scout_pos[-c(idx, idx-1), ]
scout_pos <- scout_pos %>%
dplyr::mutate(Player=player_name,
Versus=gsub("vs. ", "", versus),
Per90 = gsub("\\+", "", .data[["Per90"]]) %>% gsub("\\%", "", .) %>% gsub("\\,", "", .) %>% as.numeric(),
Percentile = as.numeric(.data[["Percentile"]])) %>%
dplyr::select(.data[["Player"]], .data[["Versus"]], StatGroup=.data[["stat_group"]], dplyr::everything())
mins_played <- scout_pg %>% rvest::html_nodes(".footer") %>% rvest::html_nodes("strong") %>%
rvest::html_text() %>% gsub(" minutes", "", .) %>% as.numeric() %>% unique()
scout_pos <- scout_pos %>%
dplyr::mutate(BasedOnMinutes = mins_played)
scout_pos <- scout_pos %>%
dplyr::mutate(scouting_period = period)
all_scout_pos <- dplyr::bind_rows(all_scout_pos, scout_pos)
}
return(all_scout_pos)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_player_scouting.R
|
#' Get team goal logs
#'
#' Returns the team's season goal logs
#'
#' @param team_urls the URL(s) of the team(s) (can come from fb_teams_urls())
#' @param time_pause the wait time (in seconds) between page loads
#' @param for_or_against select whether to return data of goals "for" (the default), goals "against", or "both"
#'
#' @return returns a dataframe of the team's goals scored and conceded in the season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' # for single teams:
#' man_city_url <- "https://fbref.com/en/squads/b8fd03ef/Manchester-City-Stats"
#' fb_team_goal_logs(team_urls = man_city_url, for_or_against = "for")
#' })
#' }
fb_team_goal_logs <- function(team_urls, time_pause=3, for_or_against="for") {
t <- time_pause
f_a <- for_or_against # one of "for", "against", or "both"
get_each_team_goal_log <- function(team_url, time_pause=t, for_or_against=f_a) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
page <- .load_page(team_url)
page_urls <- page %>% rvest::html_elements("#inner_nav .hoversmooth a")
# if there is only one competition the "Goal Logs" text is a url
# if there is more than one competitions it is a drop down menu
# access from the "All Competitions" url instead
# (may be better ways to get the All Competitions url)
index <- ifelse(
!is.na(grep("Goal Logs", rvest::html_text(page_urls))[1]),
grep("Goal Logs", rvest::html_text(page_urls))[1],
grep("All Competitions", rvest::html_text(page_urls))[4]
)
goal_log_url <- page_urls[index] %>% rvest::html_attr("href") %>% unique() %>% paste0("https://fbref.com", .)
goal_log_page <- tryCatch(.load_page(goal_log_url), error = function(e) NA)
if(!all(is.na(goal_log_page))) {
# get goals for, or an empty data frame
tab_box_goals_for <-
{
if(for_or_against %in% c("for","both"))
tryCatch({goal_log_page %>% rvest::html_nodes(".stats_table") %>% .[1] %>% rvest::html_table()},
error=function(e) {tab_box_for <- NA})
else data.frame()
}
# get goals against, or an empty data frame
tab_box_goals_against <-
{
if(for_or_against %in% c("against","both"))
tryCatch({goal_log_page %>% rvest::html_nodes(".stats_table") %>% .[2] %>% rvest::html_table()},
error=function(e) {tab_box_for <- NA})
else data.frame()
}
# new column names
new_names <- c(Body_Part="Body.Part",Type1="Type",Type2="Type.1")
# turn html tables into data frames, force minute to be a character, and label the goals
goals_for <-
tab_box_goals_for %>%
data.frame() %>%
dplyr::mutate(dplyr::across(tidyselect::contains(c("Minute","Goalkeeper","Assist","Notes")), as.character)) %>%
dplyr::mutate(For_or_Against="for")
goals_against <-
tab_box_goals_against %>%
data.frame() %>%
dplyr::mutate(dplyr::across(tidyselect::contains(c("Minute","Goalkeeper","Assist","Notes")), as.character)) %>%
dplyr::mutate(For_or_Against="against")
# bind the tables together
goals <-
dplyr::bind_rows(goals_for,goals_against) %>%
dplyr::rename(tidyselect::any_of(new_names)) %>%
dplyr::filter(!is.na(.data[["Rk"]])) %>%
dplyr::arrange(dplyr::desc(.data[["For_or_Against"]]),.data[["Date"]],.data[["Minute"]])
return(goals)
}
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_urls))
goals_map <- team_urls %>%
purrr::map_df(get_each_team_goal_log)
return(goals_map)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_team_goal_logs.R
|
#' Get team match log stats
#'
#' Returns all match statistics for a team(s) in a given season
#'
#' @param team_urls the URL(s) of the teams(s) (can come from fb_teams_urls())
#' @param stat_type the type of statistic required
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"shooting"}, \emph{"keeper"}, \emph{"passing"},
#' \emph{"passing_types"}, \emph{"gca"}, \emph{"defense"},
#' \emph{"misc"}
#'
#' @return returns a dataframe with the selected stat outputs of all games played by the selected team(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' # for single teams:
#' man_city_url <- "https://fbref.com/en/squads/b8fd03ef/Manchester-City-Stats"
#' fb_team_match_log_stats(team_urls = man_city_url, stat_type = "passing")
#' })
#' }
fb_team_match_log_stats <- function(team_urls, stat_type, time_pause=3) {
# .pkg_message("Scraping team match logs...")
main_url <- "https://fbref.com"
stat_types <- c("shooting", "keeper", "passing", "passing_types", "gca", "defense", "possession", "misc")
if(!stat_type %in% stat_types) stop("check stat type")
time_wait <- time_pause
get_each_team_log <- function(team_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
team_page <- .load_page(team_url)
team_name <- sub('.*\\/', '', team_url) %>% gsub("-Stats", "", .) %>% gsub("-", " ", .)
stat_urls <- team_page %>% rvest::html_nodes("#content .filter")
stat_urls <- stat_urls[grep("Match Log Types", rvest::html_text(stat_urls))]
stat_urls <- stat_urls %>% rvest::html_nodes("a") %>% rvest::html_attr("href") %>% .[!is.na(.)]
selected_url <- stat_urls[grep(paste0(stat_type, "/"), stat_urls)]
if(length(selected_url) == 0) print(glue::glue("{stat_type} stat not available for {team_name}"))
selected_url <- paste0(main_url, selected_url)
Sys.sleep(time_pause)
stat_page <- .load_page(selected_url)
for_against <- stat_page %>%
rvest::html_nodes("#all_matchlogs #switcher_matchlogs .table_container")
df_for <- tryCatch(for_against[1] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame(), error = function(e) data.frame())
if(nrow(df_for) > 0) {
colnames(df_for)[grep("For.", colnames(df_for))] <- ""
df_for <- .clean_table_names(df_for)
df_for <- df_for %>%
dplyr::filter(.data[["Date"]] != "")
df_for$ForAgainst <- "For"
# because the opponent names also contain some country abbreviations in the table output, will get the opponent names separately
# to do this, the most reliable way is to find the index of the `Opponent` column and then grab that - ensures if some
# leagues/teams have different col numbers, this will be dynamic
tab_names <- for_against[1] %>% rvest::html_nodes("thead tr") %>% .[2] %>% rvest::html_nodes("th") %>% rvest::html_text()
opp_idx <- grep("opponent", tolower(tab_names))
opponent_names <- for_against[1] %>% rvest::html_nodes(paste0(".left:nth-child(", opp_idx, ") a")) %>% rvest::html_text()
df_for$Opponent <- opponent_names
}
# now do for against:
df_against <- tryCatch(for_against[2] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame(), error = function(e) data.frame())
if(nrow(df_against) > 0) {
colnames(df_against)[grep("Against.", colnames(df_against))] <- ""
df_against <- .clean_table_names(df_against)
df_against <- df_against %>%
dplyr::filter(.data[["Date"]] != "")
df_against$ForAgainst <- "Against"
df_against$Opponent <- opponent_names
}
team_log <- tryCatch(dplyr::bind_rows(df_for, df_against), error = function(e) data.frame())
if(nrow(team_log) > 0) {
team_log <- team_log %>%
dplyr::mutate(Team_Url = team_url,
Team = team_name) %>%
dplyr::select(.data[["Team_Url"]], .data[["Team"]], .data[["ForAgainst"]], dplyr::everything(), -.data[["Match Report"]])
cols_to_transform <- team_log %>%
dplyr::select(-.data[["Team_Url"]], -.data[["Team"]], -.data[["ForAgainst"]], -.data[["Date"]], -.data[["Time"]], -.data[["Comp"]], -.data[["Round"]], -.data[["Day"]],
-.data[["Venue"]], -.data[["Result"]], -.data[["GF"]], -.data[["GA"]], -.data[["Opponent"]]) %>% names()
team_log <- team_log %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
}
return(team_log)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_urls))
all_team_logs <- team_urls %>%
purrr::map_df(get_each_team_log)
return(all_team_logs)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_team_match_log_stats.R
|
#' Get fbref Team's Player Season Statistics
#'
#' Returns the team's players season stats for a selected team(s) and stat type
#'
#' @param team_urls the URL(s) of the teams(s) (can come from fb_teams_urls())
#' @param stat_type the type of statistic required
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"standard"}, \emph{"shooting"}, \emph{"passing"},
#' \emph{"passing_types"}, \emph{"gca"}, \emph{"defense"}, \emph{"possession"}
#' \emph{"playing_time"}, \emph{"misc"}, \emph{"keeper"}, \emph{"keeper_adv"}
#'
#' @return returns a dataframe of all players of a team's season stats
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_team_player_stats("https://fbref.com/en/squads/d6a369a2/Fleetwood-Town-Stats",
#' stat_type = 'standard')
#'
#' league_url <- fb_league_urls(country = "ENG", gender = "M",
#' season_end_year = 2022, tier = "3rd")
#' team_urls <- fb_teams_urls(league_url)
#' multiple_playing_time <- fb_team_player_stats(team_urls,
#' stat_type = "playing_time")
#' })
#' }
fb_team_player_stats <- function(team_urls, stat_type, time_pause=3) {
stat_types <- c("standard", "shooting", "passing", "passing_types", "gca", "defense", "possession", "playing_time", "misc", "keeper", "keeper_adv")
if(!stat_type %in% stat_types) stop("check stat type")
# .pkg_message("Scraping {team_or_player} season '{stat_type}' stats")
main_url <- "https://fbref.com"
time_wait <- time_pause
get_each_team_players <- function(team_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
page <- .load_page(team_url)
team_season <- page %>% rvest::html_nodes("h1") %>% rvest::html_nodes("span") %>% .[1] %>% rvest::html_text()
season <- team_season %>% stringr::str_extract(., "(\\d+)-(\\d+)")
Squad <- team_season %>% gsub("(\\d+)-(\\d+)", "", .) %>% gsub("\\sStats", "", .) %>% stringr::str_squish()
league <- page %>% rvest::html_nodes("h1") %>% rvest::html_nodes("span") %>% .[2] %>% rvest::html_text() %>% gsub("\\(", "", .) %>% gsub("\\)", "", .)
tabs <- page %>% rvest::html_nodes(".table_container")
tab_names <- page %>% rvest::html_nodes("#content") %>% rvest::html_nodes(".table_wrapper") %>% rvest::html_attr("id")
tab_idx <- grep(paste0("all_stats_", stat_type, "$"), tab_names)
if(length(tab_idx) == 0) {
stop(glue::glue("Stat: {stat_type} not available for {Squad}"))
tab <- data.frame()
} else {
tab <- tabs[tab_idx] %>% rvest::html_node("table") %>% rvest::html_table() %>% data.frame()
var_names <- tab[1,] %>% as.character()
new_names <- paste(var_names, names(tab), sep = "_")
new_names <- new_names %>%
gsub("\\..[0-9]", "", .) %>%
gsub("\\.[0-9]", "", .) %>%
gsub("\\.", "_", .) %>%
gsub("_Var", "", .) %>%
gsub("#", "Player_Num", .) %>%
gsub("%", "_percent", .) %>%
gsub("_Performance", "", .) %>%
gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("\\+/-", "Plus_Minus", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .) %>%
gsub("__", "_", .)
names(tab) <- new_names
tab <- tab[-1,]
remove_rows <- min(grep("Squad ", tab$Player)):nrow(tab)
tab <- tab[-remove_rows, ]
tab$Matches <- NULL
if(any(grepl("Nation", colnames(tab)))) {
tab$Nation <- gsub(".*? ", "", tab$Nation)
}
non_num_vars <- c("Player", "Nation", "Pos", "Age")
cols_to_transform <- names(tab)[!names(tab) %in% non_num_vars]
tab <- tab %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
# this is a new fix and more robust to occasions when some players don't have a URL
tab_player_urls <- tabs[tab_idx] %>% rvest::html_node("table")
url_rows <- tab_player_urls %>% rvest::html_elements("tbody tr")
player_urls <- c()
for(i in url_rows) {
each_url <- i %>% rvest::html_elements("th a") %>% rvest::html_attr("href") %>%
paste0(main_url, .)
# the below will catch when there is no player URL and it returns NA
if(each_url == "https://fbref.com") {
each_url <- NA_character_
}
player_urls <- c(player_urls, each_url)
}
tab <- tab %>%
dplyr::mutate(Season = season,
Squad=Squad,
Comp=league,
PlayerURL = player_urls) %>%
dplyr::select(.data[["Season"]], .data[["Squad"]], .data[["Comp"]], dplyr::everything())
}
return(tab)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_urls))
all_stats_df <- team_urls %>%
purrr::map_df(get_each_team_players)
return(all_stats_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fb_team_player_stats.R
|
#' Get team player wages
#'
#' Returns all player wages from FBref via Capology
#'
#' @param team_urls the URL(s) of the teams(s) (can come from fb_teams_urls())
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with all available estimated player wages for the selected team(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' # for single teams:
#' man_city_url <- "https://fbref.com/en/squads/b8fd03ef/Manchester-City-Stats"
#' fb_squad_wages(team_urls = man_city_url)
#' })
#' }
fb_squad_wages <- function(team_urls, time_pause=3) {
time_wait <- time_pause
main_url <- "https://fbref.com"
each_fb_squad_wages <- function(team_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
pg <- .load_page(team_url)
team_name <- sub('.*\\/', '', team_url) %>% gsub("-Stats", "", .) %>% gsub("-", " ", .)
league <- pg %>% rvest::html_elements(".prevnext+ p a") %>% rvest::html_text()
season <- pg %>% rvest::html_nodes("h1") %>% rvest::html_text() %>% stringr::str_squish() %>% sub(" .*", "", .)
# because current season team urls don't contain the season, need to do this additional page load
# there would be too many flakey assumptions to stitch this together
# might look at storing data on all team URLs for the leagues, which would then allow this to be read in and only one page load
wage_url <- pg %>% rvest::html_elements("#inner_nav .hoversmooth a")
idx <- grep("Wages", rvest::html_text(wage_url))
wage_url <- wage_url[idx] %>% rvest::html_attr("href") %>% unique() %>% paste0(main_url, .)
Sys.sleep(time_pause)
wage_pg <- tryCatch(.load_page(wage_url), error = function(e) NA)
if(!all(is.na(wage_pg))) {
tab_box <- wage_pg %>% rvest::html_nodes(".stats_table")
tab_element <- tab_box %>% rvest::html_nodes("tbody tr")
# get player urls, row by row
player_urls <- c()
for(i in tab_element) {
each_url <- i %>% rvest::html_elements("th a") %>% rvest::html_attr("href") %>%
paste0(main_url, .)
# the below will catch when there is no player URL and it returns NA
if(each_url == "https://fbref.com") {
each_url <- NA_character_
}
player_urls <- c(player_urls, each_url)
}
# first, want a raw df which cleans the Nation col, separates the columns into each currency and appends player urls
dat_raw <- tab_box %>%
rvest::html_table() %>% data.frame() %>%
dplyr::filter(!grepl("Players", .data[["Player"]])) %>%
dplyr::mutate(Nation = gsub(".*? ", "", .data[["Nation"]])) %>%
tidyr::separate(col = .data[["Weekly.Wages"]], into = c("WeeklyWage1", "WeeklyWage2"), sep = "\\(") %>%
tidyr::separate(col = .data[["WeeklyWage2"]], into = c("WeeklyWage2", "WeeklyWage3"), sep = ", ") %>%
tidyr::separate(col = .data[["Annual.Wages"]], into = c("AnnualWage1", "AnualWage2"), sep = "\\(") %>%
tidyr::separate(col = .data[["AnualWage2"]], into = c("AnnualWage2", "AnnualWage3"), sep = "\\, ") %>% suppressWarnings() %>%
dplyr::mutate(Url = player_urls)
#---- Codes for currency -----#
# # EUR: "(\u20AC)"
#
# # GBP: "(\u00A3)"
#
# # USD: "\\$"
# now make a long df, rename columns based on currency and empty records where there is no wage provided
dat_long <- dat_raw %>%
tidyr::pivot_longer(cols = tidyselect::contains("Wage"), names_to = "pay_type") %>%
dplyr::mutate(value = gsub(" ", "", .data[["value"]]) %>% gsub(",", "", .) %>% gsub("\\)", "", .)) %>%
dplyr::mutate(currency = dplyr::case_when(
grepl("(\u20AC)", .data[["value"]]) ~ "EUR",
grepl("(\u00A3)", .data[["value"]]) ~ "GBP",
grepl("\\$", .data[["value"]]) ~ "USD",
TRUE ~ NA_character_
)) %>%
dplyr::mutate(pay_type = gsub("\\d+", "", .data[["pay_type"]])) %>%
dplyr::mutate(pay_type = dplyr::case_when(
is.na(.data[["currency"]]) ~ NA_character_,
!is.na(.data[["currency"]]) ~ paste0(.data[["pay_type"]], .data[["currency"]])
)) %>%
dplyr::mutate(value = gsub("[^\x20-\x7E]", "", .data[["value"]]) %>% gsub("\\$", "", .) %>% as.numeric())
# finally convert it back to a wide df, doing this only for players who have a value in wages
dat_wide <- dat_long %>%
dplyr::filter(!is.na(.data[["value"]])) %>%
tidyr::pivot_wider(id_cols = c(.data[["Player"]], .data[["Nation"]], .data[["Pos"]], .data[["Age"]], .data[["Notes"]], .data[["Url"]]),
names_from = .data[["pay_type"]], values_from = .data[["value"]]) %>%
# then we join back the empty wage players so as not to stuff up our pivot_wider
dplyr::bind_rows(
dat_long %>%
dplyr::filter(is.na(.data[["value"]])) %>%
dplyr::select(.data[["Player"]], .data[["Nation"]], .data[["Pos"]], .data[["Age"]], .data[["Notes"]], .data[["Url"]]) %>% dplyr::distinct()
)
# now add metadata
dat_wide <- dat_wide %>%
dplyr::mutate(Team = team_name,
Comp = league,
Season = season) %>%
dplyr::select(.data[["Team"]], .data[["Comp"]], .data[["Season"]], .data[["Player"]], .data[["Nation"]], .data[["Pos"]], .data[["Age"]], tidyselect::contains("Wage"), .data[["Notes"]], .data[["Url"]])
} else {
print(glue::glue("NOTE: Wage data is not available for {team_url}. Check {team_url} to see if it exists."))
dat_wide <- data.frame()
}
return(dat_wide)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_urls))
all_team_wages <- team_urls %>%
purrr::map_df(each_fb_squad_wages)
return(all_team_wages)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fbref_wages.R
|
#' @importFrom rvest read_html html_elements html_text
#' @importFrom purrr keep
#' @importFrom stringr str_detect
.fotmob_extract_meta <- function() {
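# read the fotmob homepage and keep only the <script> payloads containing the
# embedded "props" JSON (used downstream to recover the Next.js build id)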
page <- "https://www.fotmob.com/" %>%
rvest::read_html()
page %>%
rvest::html_elements("script") %>%
rvest::html_text(trim = TRUE) %>%
purrr::keep(stringr::str_detect, "props")
}
#' @importFrom stringr str_extract
#' @importFrom purrr pluck
.fotmob_get_build_id <- function() {
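# extract the Next.js "buildId" from the homepage metadata; this id is needed
# to construct the /_next/data/<buildId>/... fallback endpoints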
meta <- .fotmob_extract_meta()
meta %>%
stringr::str_extract('(?<=\\"buildId\\"[:]).*(?=\\,\\"isFallback\\")') %>%
safely_from_json() %>%
pluck("result")
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fotmob_helpers.R
|
## TODO: Cache this.
.fotmob_load_csv <- function(suffix) {
read.csv(
sprintf("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/%s", suffix),
stringsAsFactors = FALSE
)
}
#' @importFrom purrr transpose map_dfr
#' @importFrom dplyr filter
.fotmob_get_league_urls <- function(
leagues,
league_id = NULL,
country = NULL,
league_name = NULL
) {
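# resolve rows of the `leagues` lookup table either by fotmob `league_id` or by
# paired `country` + `league_name` values, warning if the number of matches
# differs from the number of parameters supplied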
has_country <- !is.null(country)
has_league_name <- !is.null(league_name)
has_league_id <- !is.null(league_id)
if(!has_league_id & !(has_country & has_league_name)) {
stop(
"Must provide `league_id` or both of `country` and `league_name`."
)
}
has_country_and_league_name <- has_country & has_league_name
urls <- if(has_country_and_league_name) {
n_country <- length(country)
n_league_name <- length(league_name)
if(n_country != n_league_name) {
stop(
sprintf(
"If providing `country` and `league_name`, length of each must be the same (%s != %s).",
n_country,
n_league_name
)
)
}
pairs <- list(
country = country,
league_name = league_name
) %>%
purrr::transpose()
purrr::map_dfr(
pairs,
~dplyr::filter(
leagues,
.data[["ccode"]] == .x$country, .data[["name"]] == .x$league_name
)
)
} else {
leagues %>%
dplyr::filter(.data[["id"]] %in% league_id)
}
n_urls <- nrow(urls)
if(n_urls == 0) {
stop(
"Could not find any leagues matching specified parameters."
)
}
n_params <- ifelse(
has_country_and_league_name,
n_country,
length(league_id)
)
if(n_urls < n_params) {
warning(
sprintf(
"Found less leagues than specified (%s < %s).",
n_urls,
n_params
)
)
} else if (n_urls > n_params) {
warning(
sprintf(
"Found more leagues than specified (%s > %s).",
n_urls,
n_params
)
)
}
urls
}
#' Get fotmob league ids
#'
#' Returns a dataframe of the league ids available on fotmob
#'
#' @param cached Whether to load the dataframe from the \href{https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/fotmob-leagues/all_leagues.csv}{data CSV}. This is faster and most likely what you want to do, unless you identify a league that's being tracked by fotmob that's not in this pre-saved CSV.
#'
#' @importFrom httr POST content
#' @importFrom purrr map_dfr
#' @importFrom tibble enframe
#' @importFrom dplyr rename
#' @importFrom tidyr unnest_wider unnest_longer
#' @importFrom janitor clean_names
#' @export
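#'
#' @return returns a dataframe of the league ids available on fotmob
#'
#' @examples
#' \donttest{
#' try({
#' ## minimal usage sketch: read the cached league-id lookup table and
#' ## inspect the first few rows (set cached = FALSE to query fotmob directly)
#' leagues <- fotmob_get_league_ids(cached = TRUE)
#' head(leagues)
#' })
#' }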
fotmob_get_league_ids <- function(cached = TRUE) {
if(cached) {
return(.fotmob_load_csv("fotmob-leagues/all_leagues.csv"))
}
resp <- httr::POST("https://www.fotmob.com/api/allLeagues")
cont <- resp %>% httr::content()
.extract_leagues <- function(x) {
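# unnest one section of the allLeagues response (e.g. "international" or
# "countries") into a tidy data frame with one row per league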
cont[[x]] %>%
tibble::enframe() %>%
dplyr::select(.data[["value"]]) %>%
tidyr::unnest_wider(.data[["value"]]) %>%
tidyr::unnest_longer(.data[["leagues"]]) %>%
dplyr::rename(
country = .data[["name"]]
) %>%
tidyr::unnest_wider(.data[["leagues"]]) %>%
janitor::clean_names()
}
purrr::map_dfr(
c(
"international",
"countries"
),
.extract_leagues
)
}
.fotmob_get_league_ids <- function(cached = TRUE, ...) {
leagues <- fotmob_get_league_ids(cached = cached)
.fotmob_get_league_urls(
leagues = leagues,
...
)
}
#' @importFrom stringr str_replace
.fotmob_get_league_resp_from_build_id <- function(page_url, stats = FALSE) {
build_id <- .fotmob_get_build_id()
url <- sprintf("https://www.fotmob.com/_next/data/%s%s.json", build_id, page_url)
if(stats) {
url <- stringr::str_replace(url, "overview", "stats")
}
safely_from_json(url)
}
#' @importFrom purrr safely
.fotmob_get_league_resp <- function(league_id, page_url, fallback = TRUE) {
url <- sprintf("https://www.fotmob.com/api/leagues?id=%s", league_id)
resp <- safely_from_json(url)
if(!is.null(resp$result)) {
return(resp$result)
}
first_url <- url
if(fallback) {
resp <- .fotmob_get_league_resp_from_build_id(page_url)
if(!is.null(resp$result)) {
return(resp$result)
}
}
stop(
sprintf("Could not identify the league endpoint at either %s or %s. Stopping with the following error from jsonlite::fromJSON:\n", first_url, url, res$error)
)
}
#' Get fotmob match results by league
#'
#' Returns match results for all matches played on the selected date from fotmob.com.
#'
#' @param country Three character country code. Can be one or multiple. If provided, `league_name` must also be provided (of the same length)
#' @param league_name League names. If provided, `country` must also be provided (of the same length).
#' @param league_id Fotmob ID for the league. Only used if `country` and `league_name` are not specified.
#' @inheritParams fotmob_get_league_ids
#'
#' @return returns a dataframe of league matches
#'
#' @importFrom purrr possibly map2_dfr
#' @importFrom tibble tibble
#' @importFrom rlang maybe_missing
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#'
#' # one league
#' fotmob_get_league_matches(
#' country = "ENG",
#' league_name = "Premier League"
#' )
#'
#' # one league, by id
#' fotmob_get_league_matches(
#' league_id = 47
#' )
#'
#' # multiple leagues (could also use ids)
#' league_matches <- fotmob_get_league_matches(
#' country = c("ENG", "ESP" ),
#' league_name = c("Premier League", "LaLiga")
#' )
#'
#' # probably the data that you care about
#' league_matches %>%
#' dplyr::select(match_id = id, home, away) %>%
#' tidyr::unnest_wider(c(home, away), names_sep = "_")
#' })
#' }
fotmob_get_league_matches <- function(country, league_name, league_id, cached = TRUE) {
urls <- .fotmob_get_league_ids(
cached = cached,
country = rlang::maybe_missing(country, NULL),
league_name = rlang::maybe_missing(league_name, NULL),
league_id = rlang::maybe_missing(league_id, NULL)
)
fp <- purrr::possibly(
.fotmob_get_league_matches,
quiet = FALSE,
otherwise = tibble::tibble()
)
purrr::map2_dfr(
urls$id, urls$page_url,
fp
)
}
#' @importFrom janitor clean_names
#' @importFrom tibble as_tibble
#' @importFrom purrr map_dfr
#' @importFrom dplyr bind_rows
.fotmob_get_league_matches <- function(league_id, page_url) {
resp <- .fotmob_get_league_resp(league_id, page_url)
rounds <- resp$matches$data$matchesCombinedByRound
rounds %>%
purrr::map_dfr(
~purrr::map_dfr(.x, dplyr::bind_rows) %>%
dplyr::mutate(across(.data[["roundName"]], as.character))
) %>%
janitor::clean_names() %>%
tibble::as_tibble()
}
#' Get standings from fotmob
#'
#' Returns league standings from fotmob.com. 3 types are returned: all, home, away
#'
#' @inheritParams fotmob_get_league_matches
#'
#' @return returns a dataframe of league standings
#'
#' @importFrom purrr possibly map2_dfr
#' @importFrom tibble tibble
#' @importFrom rlang maybe_missing
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#'
#' # one league
#' fotmob_get_league_tables(
#' country = "ENG",
#' league_name = "Premier League"
#' )
#'
#' # one league, by id
#' fotmob_get_league_tables(
#' league_id = 47
#' )
#'
#' # multiple leagues (could also use ids)
#' league_tables <- fotmob_get_league_tables(
#' country = c("ENG", "ESP" ),
#' league_name = c("Premier League", "LaLiga")
#' )
#'
#' # look at tables if only away matches are considered
#' league_tables %>%
#' dplyr::filter(table_type == "away")
#' })
#' }
fotmob_get_league_tables <- function(country, league_name, league_id, cached = TRUE) {
urls <- .fotmob_get_league_ids(
cached = cached,
country = rlang::maybe_missing(country, NULL),
league_name = rlang::maybe_missing(league_name, NULL),
league_id = rlang::maybe_missing(league_id, NULL)
)
fp <- purrr::possibly(
.fotmob_get_league_tables,
quiet = FALSE,
otherwise = tibble::tibble()
)
purrr::map2_dfr(
urls$id, urls$page_url,
fp
)
}
#' @importFrom janitor clean_names
#' @importFrom tibble as_tibble
#' @importFrom rlang .data
#' @importFrom dplyr select all_of bind_rows rename mutate
#' @importFrom tidyr pivot_longer unnest_longer unnest
.fotmob_get_league_tables <- function(league_id, page_url) {
resp <- .fotmob_get_league_resp(league_id, page_url)
table_init <- resp$table$data
cols <- c("all", "home", "away")
table <- if("table" %in% names(table_init)) {
table_init$table %>% dplyr::select(dplyr::all_of(cols))
} else if("tables" %in% names(table_init)) {
tables <- dplyr::bind_rows(table_init$tables)
tables$all <- tables$table$all
tables$home <- tables$table$home
tables$away <- tables$table$away
tables %>%
dplyr::rename(
group_id = .data[["leagueId"]],
group_page_url = .data[["pageUrl"]],
group_name = .data[["leagueName"]]
) %>%
dplyr::select(
-c(.data[["table"]], .data[["legend"]])
)
} else {
stop(
"Expected to find `table` or `tables` element but did not."
)
}
table <- table %>%
janitor::clean_names() %>%
tibble::as_tibble()
res <- table %>%
tidyr::pivot_longer(
dplyr::all_of(cols),
names_to = "table_type",
values_to = "table"
) %>%
tidyr::unnest_longer(
.data[["table"]]
) %>%
tidyr::unnest(
.data[["table"]],
names_sep = "_"
) %>%
janitor::clean_names() %>%
tibble::as_tibble()
res %>%
dplyr::mutate(
league_id = !!league_id,
page_url = !!page_url,
.before = 1
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fotmob_leagues.R
|
.extract_fotmob_match_general <- function(url) {
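# parse the matchDetails response and split out the scalar match/league fields
# and the home/away team fields; returns the raw response alongside both frames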
resp <- safely_from_json(url)$result
general <- resp$general
scalars <- data.frame(
stringsAsFactors = FALSE,
match_id = general$matchId, ## don't technically need this since `.fotmob_get_single_match_details` is wrapped with `.wrap_fotmob_match_id_f`
match_round = ifelse(is.null(general$matchRound), "", general$matchRound),
league_id = general$leagueId,
league_name = general$leagueName,
league_round_name = general$leagueRoundName,
parent_league_id = general$parentLeagueId,
parent_league_season = general$parentLeagueSeason,
match_time_utc = general$matchTimeUTC
)
teams <- data.frame(
stringsAsFactors = FALSE,
home_team_id = unlist(general$homeTeam$id),
home_team = unlist(general$homeTeam$name),
home_team_color = unlist(general$teamColors$home),
away_team_id = unlist(general$awayTeam$id),
away_team = unlist(general$awayTeam$name),
away_team_color = unlist(general$teamColors$away)
)
list(
resp = resp,
scalars = scalars,
teams = teams
)
}
#' Get fotmob match results by date
#'
#' Returns match results for all matches played on the selected date from fotmob.com
#'
#' @param dates a vector of string-formatted dates in "Ymd" format, e.g. "20210926". An attempt is
#' made to coerce the input to the necessary format if a date is passed in.
#'
#' @return returns a dataframe of match results
#'
#' @importFrom purrr map_dfr
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#'
#' results <- fotmob_get_matches_by_date(dates = c("20210925", "20210926"))
#' results %>%
#' dplyr::select(primary_id, ccode, league_name = name, match_id)
#' })
#' }
#'
fotmob_get_matches_by_date <- function(dates) {
purrr::map_dfr(dates, .fotmob_get_matches_by_single_date)
}
#' @importFrom lubridate is.Date ymd
#' @importFrom stringr str_remove_all
#' @importFrom glue glue
#' @importFrom janitor clean_names
#' @importFrom purrr possibly
#' @importFrom tibble as_tibble tibble
#' @importFrom tidyr unnest
#' @importFrom dplyr rename select
#' @importFrom tidyselect vars_select_helpers
#' @importFrom magrittr %>%
.fotmob_get_matches_by_single_date <- function(date) {
# CRAN feedback was to remove this from the existing functions so I have for now
# print(glue::glue('Scraping match results data from fotmob for "{date}".'))
main_url <- "https://www.fotmob.com/api/"
is_date <- lubridate::is.Date(date)
if(is_date) {
date <- lubridate::ymd(date)
}
date <- stringr::str_remove_all(as.character(date), "-")
url <- paste0(main_url, "matches?date=", date)
f <- function(url) {
resp <- safely_from_json(url)$result
res <- resp$leagues %>%
janitor::clean_names() %>%
tibble::as_tibble()
if(nrow(res) == 0) {
return(res)
}
res %>%
dplyr::rename(match = .data[["matches"]]) %>%
tidyr::unnest(.data[["match"]], names_sep = "_") %>%
dplyr::rename(home = .data[["match_home"]], away = .data[["match_away"]]) %>%
tidyr::unnest(c(.data[["home"]], .data[["away"]], .data[["match_status"]]), names_sep = "_") %>%
dplyr::select(-tidyselect::vars_select_helpers$where(is.list)) %>%
janitor::clean_names()
}
fp <- purrr::possibly(f, otherwise = tibble::tibble())
fp(url)
}
#' Get fotmob match details by match id
#'
#' Returns match details from fotmob.com
#'
#' @param match_ids a vector of strings or numbers representing matches
#'
#' @return returns a dataframe of match shots
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#' results <- fotmob_get_matches_by_date(date = "20210926")
#' match_ids <- results %>%
#' dplyr::select(primary_id, ccode, league_name = name, match_id) %>%
#' dplyr::filter(league_name == "Premier League", ccode == "ENG") %>%
#' dplyr::pull(match_id)
#' match_ids # 3609987 3609979
#' details <- fotmob_get_match_details(match_ids)
#' })
#' }
#' @export
fotmob_get_match_details <- function(match_ids) {
.wrap_fotmob_match_f(match_ids, .fotmob_get_single_match_details)
}
#' @importFrom glue glue
#' @importFrom purrr possibly
#' @importFrom dplyr bind_cols mutate case_when
#' @importFrom janitor clean_names
#' @importFrom rlang .data
#' @importFrom tibble as_tibble tibble
#' @importFrom tidyr unnest unnest_wider
.fotmob_get_single_match_details <- function(match_id) {
# CRAN feedback was to remove this from the existing functions so I have for now
# print(glue::glue("Scraping match data from fotmob for match {match_id}."))
main_url <- "https://www.fotmob.com/api/"
url <- paste0(main_url, "matchDetails?matchId=", match_id)
f <- function(url) {
general <- .extract_fotmob_match_general(url)
df <- dplyr::bind_cols(
general$scalars,
general$teams
)
shots <- general$resp$content$shotmap$shots %>% janitor::clean_names()
has_shots <- length(shots) > 0
df <- tibble::as_tibble(df)
if(isTRUE(has_shots)) {
df$shots <- list(shots)
df <- df %>%
tidyr::unnest(.data[["shots"]]) %>%
tidyr::unnest_wider(.data[["on_goal_shot"]], names_sep = "_") %>%
janitor::clean_names()
} else {
df$shots <- NULL
}
df
}
fp <- purrr::possibly(f, otherwise = tibble::tibble())
fp(url)
}
#' Get fotmob match top team stats by match id
#'
#' Returns match top team stats from fotmob.com
#'
#' @param match_ids a vector of strings or numbers representing matches
#'
#' @return returns a dataframe of match top team stats
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#' results <- fotmob_get_matches_by_date(date = "20210926")
#' match_ids <- results %>%
#' dplyr::select(primary_id, ccode, league_name = name, match_id) %>%
#' dplyr::filter(league_name == "Premier League", ccode == "ENG") %>%
#' dplyr::pull(match_id)
#' match_ids # 3609987 3609979
#' details <- fotmob_get_match_team_stats(match_ids)
#' })
#' }
#' @export
fotmob_get_match_team_stats <- function(match_ids) {
.wrap_fotmob_match_f(match_ids, .fotmob_get_single_match_team_stats)
}
#' @importFrom purrr possibly
#' @importFrom dplyr bind_cols mutate across rename
#' @importFrom janitor clean_names
#' @importFrom rlang .data
#' @importFrom tibble as_tibble
#' @importFrom tidyr unnest unnest_wider hoist
.fotmob_get_single_match_team_stats <- function(match_id) {
main_url <- "https://www.fotmob.com/api/"
url <- paste0(main_url, "matchDetails?matchId=", match_id)
f <- function(url) {
general <- .extract_fotmob_match_general(url)
df <- dplyr::bind_cols(
general$scalars,
general$teams
)
stats <- general$resp$content$stats$stats %>% janitor::clean_names()
has_stats <- length(stats) > 0
df <- tibble::as_tibble(df)
if(isTRUE(has_stats)) {
wide_stats <- stats %>%
tidyr::unnest_wider(stats, names_sep = '_') %>%
tidyr::unnest(vars_select_helpers$where(is.list))
clean_stats <- wide_stats %>%
tidyr::hoist(
.data[["stats_stats"]],
"home_value" = 1
) %>%
dplyr::rename("away_value" = .data[["stats_stats"]]) %>%
dplyr::mutate(
dplyr::across(c(.data[["home_value"]], .data[["away_value"]]), as.character)
) %>%
tidyr::unnest(c(.data[["home_value"]], .data[["away_value"]]))
df <- dplyr::bind_cols(df, clean_stats)
}
df
}
fp <- purrr::possibly(f, otherwise = tibble::tibble())
fp(url)
}
#' Get fotmob match info by match id
#'
#' Returns match info from fotmob.com
#'
#' @param match_ids a vector of strings or numbers representing matches
#'
#' @return returns a dataframe of match info
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#' results <- fotmob_get_matches_by_date(date = "20210926")
#' match_ids <- results %>%
#' dplyr::select(primary_id, ccode, league_name = name, match_id) %>%
#' dplyr::filter(league_name == "Premier League", ccode == "ENG") %>%
#' dplyr::pull(match_id)
#' match_ids # 3609987 3609979
#' details <- fotmob_get_match_info(match_ids)
#' })
#' }
#' @export
fotmob_get_match_info <- function(match_ids) {
.wrap_fotmob_match_f(match_ids, .fotmob_get_single_match_info)
}
#' @importFrom purrr possibly
#' @importFrom dplyr bind_cols
#' @importFrom janitor clean_names
#' @importFrom rlang .data
#' @importFrom tibble tibble enframe
#' @importFrom tidyr pivot_wider unnest unnest_wider
.fotmob_get_single_match_info <- function(match_id) {
main_url <- "https://www.fotmob.com/api/"
url <- paste0(main_url, "matchDetails?matchId=", match_id)
f <- function(url) {
general <- .extract_fotmob_match_general(url)
df <- dplyr::bind_cols(
general$scalars,
general$teams
)
info <- general$resp$content$matchFacts$infoBox
unnested_info <- info %>%
tibble::enframe() %>%
tidyr::pivot_wider(
names_from = .data[["name"]],
values_from = .data[["value"]]
) %>%
janitor::clean_names() %>%
tidyr::unnest_wider(
c(tidyselect::vars_select_helpers$where(is.list), -tidyselect::vars_select_helpers$any_of("attendance")),
names_sep = "_"
) %>%
tidyr::unnest(
tidyselect::vars_select_helpers$any_of("attendance")
) %>%
janitor::clean_names()
if (nrow(df) != 1) {
return(tibble::tibble())
}
dplyr::bind_cols(df, unnested_info)
}
fp <- purrr::possibly(f, otherwise = tibble::tibble())
fp(url)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fotmob_matches.R
|
#' @importFrom purrr map_dfr
#' @importFrom dplyr mutate across
#' @importFrom rlang .data
#' @importFrom stats setNames
.wrap_fotmob_match_f <- function(match_ids, f) {
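# apply a single-match scraper `f` to each match id and row-bind the results,
# carrying the id through as an integer `match_id` column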
purrr::map_dfr(
stats::setNames(match_ids, match_ids),
f,
.id = "match_id"
) %>%
dplyr::mutate(
dplyr::across(.data[["match_id"]], as.integer)
)
}
#' Get fotmob match player details by match id
#'
#' Returns match details from fotmob.com
#'
#' @param match_ids a vector of strings or numbers representing matches
#'
#' @return returns a dataframe of match players
#'
#' @examples
#' \donttest{
#' try({
#' library(dplyr)
#' library(tidyr)
#' ## single match
#' players <- fotmob_get_match_players(3610132)
#' salah_id <- "292462"
#' players %>%
#' dplyr::filter(id == salah_id) %>%
#' dplyr::select(player_id = id, stats) %>%
#' tidyr::unnest(stats)
#'
#' ## multiple matches
#' fotmob_get_match_players(c(3609987, 3609979))
#' })
#' }
#' @export
fotmob_get_match_players <- function(match_ids) {
.wrap_fotmob_match_f(match_ids, .fotmob_get_single_match_players)
}
#' @importFrom glue glue
#' @importFrom tibble as_tibble tibble
#' @importFrom purrr pluck map_dfr map2_dfr possibly
#' @importFrom dplyr bind_cols select filter distinct any_of
#' @importFrom tidyr pivot_longer pivot_wider unnest_wider
#' @importFrom rlang .data
#' @importFrom janitor make_clean_names
#' @importFrom tibble as_tibble tibble
#' @importFrom stringr str_detect
.fotmob_get_single_match_players <- function(match_id) {
# CRAN feedback was to remove this from the existing functions so I have for now
# print(glue::glue("Scraping match data from fotmob for match {match_id}."))
main_url <- "https://www.fotmob.com/api/"
url <- paste0(main_url, "matchDetails?matchId=", match_id)
f <- function(url) {
resp <- safely_from_json(url)$result
table <- resp$content$table
lineup <- resp$content$lineup$lineup
starters <- lineup$players
bench <- lineup$bench
stopifnot(length(starters) == 2) ## 2 teams
stopifnot(length(bench) == 2)
.clean_positions <- function(p) {
## index represents F/M/D/G (1/2/3/4), row index represents player (usually up to 4)
player_ids <- as.character(p[["id"]])
n <- length(player_ids)
## for some reason jsonlite tries to combine stuff row-wise when it really should just bind things column-wise
ps <- p[["stats"]]
.clean_stats <- function(x) {
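# flatten one player's nested stats list into a single-row data frame
# (one column per stat), returning a dummy frame when no stats are present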
if(is.null(x)) {
return(data.frame(dummy = 1))
}
xt <- t(x)
xt[xt == "NULL"] <- NA_real_
rn <- rownames(xt)
suppressWarnings(
xt2 <- matrix(
as.character(xt),
ncol = ncol(xt)
) %>%
tibble::as_tibble()
)
xt2 %>%
dplyr::bind_cols(
col = janitor::make_clean_names(rn)
) %>%
tidyr::pivot_longer(
-.data[["col"]]
) %>%
dplyr::select(
-.data[["name"]]
) %>%
dplyr::filter(stringr::str_detect(.data[["col"]], "^stats_"), !is.na(.data[["value"]])) %>%
dplyr::distinct(.data[["col"]], .data[["value"]]) %>%
tidyr::pivot_wider(
names_from = "col",
values_from = "value"
) %>%
as.data.frame()
}
stats <- if(is.null(ps)) {
NULL
} else {
pss <- ps %>% purrr::map_dfr(.clean_stats)
if(any(colnames(pss) == "dummy")) {
pss <- pss %>% dplyr::select(-dplyr::any_of("dummy"))
}
pss
}
pp <- function(..., .na, .f) {
res <- purrr::pluck(p, ...)
if(is.null(res) | length(res) == 0) {
return(rep(.na, n))
}
dots <- list(...)
.f(res)
}
ppc <- function(...) pp(..., .na = NA_character_, .f = as.character)
ppi <- function(...) pp(..., .na = NA_integer_, .f = as.integer)
ppl <- function(...) pp(..., .na = NA, .f = as.logical)
pp2 <- function(...) {
res <- purrr::pluck(p, ...)
if(is.null(res) | length(res) == 0) {
return(NULL)
}
res
}
rows <- tibble::tibble(
"id" = player_ids,
"using_opta_id" = ppl("usingOptaId"),
"first_name" = ppc("name", "firstName"),
"last_name" = ppc("name", "lastName"),
"image_url" = ppc("imageUrl"),
"page_url" = ppc("pageUrl"),
"shirt" = ppc("shirt") ,
"is_home_team" = ppl("isHomeTeam"),
"time_subbed_on" = ppi("timeSubbedOn"),
"time_subbed_off" = ppi("timeSubbedOff"),
"usual_position" = ppi("usualPosition"),
"position_row" = ppi("positionRow"),
"role" = ppc("role"),
"is_captain" = ppl("isCaptain"),
"subbed_out" = ppi("events", "subbedOut"),
"g" = ppi("events", "g"),
"rating_num" = ppc("rating", "num"),
"rating_bgcolor" = ppc("rating", "bgcolor"),
"is_top_rating" = ppl("rating", "isTop", "isTopRating"),
"is_match_finished" = ppl("rating", "isTop", "isMatchFinished"),
"fantasy_score_num" = ppc("fantasyScore", "num"),
"fantasy_score_bgcolor" = ppc("fantasyScore", "bgcolor"),
"home_team_id" = ppi("teamData", "home", "id"),
"home_team_color" = ppc("teamData", "home", "color"),
"away_team_id" = ppi("teamData", "away", "id"),
"away_team_color" = ppc("teamData", "away", "color")
)
rows$stats <- stats
rows$shotmap <- if(!is.null(pp2("shotmap", 1))) pp2("shotmap") else NULL
rows
}
add_team_info <- function(p, i) {
res <- .clean_positions(p)
res$team_id <- lineup$teamId[i]
res$team_name <- lineup$teamName[i]
res %>%
dplyr::relocate(
.data[["team_id"]],
.data[["team_name"]],
.before = 1
)
}
res <- dplyr::bind_rows(
purrr::map2_dfr(
starters, seq_along(starters),
~purrr::map2_dfr(
.x, .y,
add_team_info
)
) %>%
dplyr::mutate(
is_starter = TRUE
),
purrr::map2_dfr(
bench, seq_along(bench),
add_team_info
) %>%
dplyr::mutate(
is_starter = FALSE
)
) %>%
tibble::as_tibble()
## Overwrite the existing variables since the away team value is "bad".
## See https://github.com/JaseZiv/worldfootballR/issues/93
## For non-domestic leagues, the table element will not have a teams element
## See https://github.com/JaseZiv/worldfootballR/issues/111
coerce_team_id <- function(df, side) {
idx <- ifelse(side == "home", 1, 2)
team_col <- sprintf("%s_team_id", side)
df[[team_col]] <- ifelse(
is.logical(table) | !("teams" %in% names(table)),
df[[team_col]],
table$teams[idx]
)
df
}
res <- coerce_team_id(res, "home")
res <- coerce_team_id(res, "away")
res <- res %>% tidyr::unnest_wider(.data[["stats"]])
res
}
fp <- purrr::possibly(f, otherwise = tibble::tibble())
fp(url)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/fotmob_players.R
|
#' @importFrom tibble tibble as_tibble
#' @importFrom dplyr select
#' @importFrom tidyr unnest
#' @importFrom janitor clean_names
#' @importFrom stringr str_detect str_replace_all
.fotmob_get_single_season_stats <- function(league_id, season_id, stat) {
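# grouped season ids carry a dash (e.g. a Liga MX season id suffixed with its
# group name); the stats endpoint expects a slash in the URL path instead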
if (stringr::str_detect(season_id, "-")) {
season_id <- stringr::str_replace_all(season_id, "-", "\\/")
}
url <- sprintf(
"https://data.fotmob.com/stats/%s/season/%s/%s.json",
league_id,
season_id,
stat
)
resp <- safely_from_json(url)
if(!is.null(resp$error)) {
warning(
sprintf(
'Issue with data at `url = "%s".\n%s', url, resp$error
)
)
return(tibble::tibble())
}
resp$result %>%
tibble::as_tibble() %>%
dplyr::select(.data[["TopLists"]]) %>%
tidyr::unnest(.data[["TopLists"]]) %>%
dplyr::select(.data[["StatList"]]) %>%
tidyr::unnest(.data[["StatList"]]) %>%
janitor::clean_names()
}
.upper1 <- function(x) {
x <- tolower(x)
substr(x, 1, 1) <- toupper(substr(x, 1, 1))
x
}
#' @importFrom rvest html_text2 html_attr
#' @importFrom stringr str_detect str_remove_all
.extract_seasons_and_stats_from_options <- function(options) {
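# parse the <option> elements from a fotmob stats page into a tibble with one
# row per option, flagging each as either a "season" or a "stat" choice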
labels <- rvest::html_text2(options)
rgx <- "(^.*)(\\s)(20[012].*$)"
league_name <- labels %>% stringr::str_replace(rgx, "\\1")
season_or_stat_name <- labels %>% stringr::str_replace(rgx, "\\3")
league_name <- ifelse(season_or_stat_name == league_name, NA_character_, league_name)
is_season <- season_or_stat_name %>% stringr::str_detect("^2")
ids <- options %>% rvest::html_attr("value")
has_dash <- any(stringr::str_detect(ids[is_season], "-"))
if (has_dash) {
season_or_stat_name <- c(
paste0(season_or_stat_name[is_season], "-", stringr::str_remove_all(ids[is_season], "^.*-")),
season_or_stat_name[!is_season]
)
}
tibble::tibble(
league_name = league_name,
option_type = ifelse(is_season, "season", "stat"),
name = season_or_stat_name,
id = ids
)
}
#' @importFrom tibble tibble as_tibble
#' @importFrom rvest read_html html_elements html_attr
#' @importFrom stringr str_replace str_replace str_detect
#' @importFrom rlang maybe_missing
#' @importFrom dplyr distinct arrange bind_rows select filter
#' @importFrom purrr keep map_dfr
.fotmob_get_stat_and_season_options <- function(
country,
league_name,
league_id,
team_or_player,
page_url,
cached
) {
tables <- fotmob_get_league_tables(
cached = cached,
country = rlang::maybe_missing(country, NULL),
league_name = rlang::maybe_missing(league_name, NULL),
league_id = rlang::maybe_missing(league_id, NULL)
)
url <- sprintf(
"https://www.fotmob.com%s/%ss",
stringr::str_replace(tables$page_url[1], "overview", "stats"),
team_or_player
)
page <- url %>% rvest::read_html()
see_all_button <- page %>% rvest::html_elements(".SeeAllButton")
has_see_all_button <- length(see_all_button) > 0
options <- page %>% rvest::html_elements("option")
has_options <- length(options) > 0
if (has_see_all_button) {
hrefs <- see_all_button %>% rvest::html_attr("href")
next_url <- sprintf(
"https://www.fotmob.com%s",
hrefs[1]
)
next_page <- next_url %>% rvest::read_html()
options <- next_page %>% rvest::html_elements("option")
.extract_seasons_and_stats_from_options(options)
} else if (has_options) {
values <- options %>% rvest::html_attr("value")
parts <- values %>%
purrr::keep(~stringr::str_detect(.x, "season")) %>%
stringr::str_split("/")
if(length(parts) == 0) {
rlang::abort(glue::glue("Could not parse season ids from {url}."))
}
if(length(parts[[1]]) < 5) {
rlang::abort(glue::glue("Season ids not stored in expected format at {url}."))
}
season_ids <- parts %>%
purrr::map_chr(~purrr::pluck(.x, 4))
## protect against the current season being in the offseason, unless there is no other season.
season_id <- ifelse(length(season_ids) > 1, season_ids[2], season_ids[1])
next_url <- sprintf(
"https://www.fotmob.com/leagues/%s/stats/season/%s/%ss/saves_team",
tables$league_id[1],
season_id,
team_or_player
)
next_page <- next_url %>% rvest::read_html()
next_options <- next_page %>% rvest::html_elements("option")
.extract_seasons_and_stats_from_options(next_options)
} else {
resp <- .fotmob_get_league_resp_from_build_id(page_url, stats = TRUE)
if(is.null(resp$result)) {
stop(
sprintf("Can't find season stats data. Failed with the following error:\n", resp$error)
)
}
stats <- resp$result$pageProps$stats
seasons <- stats$seasonStatLinks$Name
label <- sprintf("%ss", .upper1(team_or_player))
valid_seasons <- setNames(seasons, seasons) %>%
purrr::keep(
~.x %in% stats$seasonStatLinks$Name
) %>%
names()
extract_options <- function(season) {
link <- stats$seasonStatLinks %>%
filter(.data[["Name"]] == !!season)
topstats_url <- sprintf("https://data.fotmob.com/%s", link$RelativePath)
topstats <- purrr::map_dfr(topstats_url, safely_from_json) ## Liga MX will have two rows
toplists <- topstats$result$TopLists %>%
dplyr::distinct(header = .data[["Title"]], name = .data[["StatName"]])
negate <- ifelse(team_or_player == "team", FALSE, TRUE)
toplists <- toplists %>%
dplyr::filter(stringr::str_detect(.data[["name"]], "team", negate = !!negate))
season_name <- season
if (any(colnames(link) == "Group")) {
season_name <- sprintf("%s-%s", season_name, link$Group)
}
season_id <- as.character(link$TournamentId)
if (any(colnames(link) == "Group")) {
season_id <- sprintf("%s-%s", season_id, link$Group)
}
dplyr::bind_rows(
tibble::tibble(
league_name = NA_character_,
option_type = "season",
name = season_name,
id = season_id
),
tibble::tibble(
league_name = NA_character_,
option_type = "stat",
name = toplists$header,
id = toplists$name
)
)
}
possibly_extract_options <- purrr::possibly(extract_options, otherwise = tibble::tibble(), quiet = TRUE)
valid_seasons %>%
purrr::map_dfr(possibly_extract_options) %>%
dplyr::distinct() %>%
dplyr::arrange(.data[["option_type"]], .data[["name"]])
}
}
#' Get season statistics from fotmob
#'
#' Returns team or player season-long statistics standings from fotmob.com.
#'
#' @inheritParams fotmob_get_league_matches
#' @param season_name Season names in the format `"2021/2022"`. Multiple allowed. If multiple leagues are specified, season stats are retrieved for each league.
#' @param stat_league_name Same format as `league_name`. If not provided explicitly, then it takes on the same value as `league_name`. If provided explicitly, should be of the same length as `league_name` (or `league_id` if `league_name` is not provided).
#'
#' Note that Fotmob currently only goes back as far as `"2016/2017"`. Some leagues may not have data for that far back.
#'
#' @param team_or_player return statistics for either \code{"team"} or \code{"player"}. Can only be one or the other.
#' @param stat_name the type of statistic. Can be more than one.
#' `stat_name` must be one of the following for \code{"player"}:
#'
#' \itemize{
#' \item{Accurate long balls per 90}
#' \item{Accurate passes per 90}
#' \item{Assists}
#' \item{Big chances created}
#' \item{Big chances missed}
#' \item{Blocks per 90}
#' \item{Chances created}
#' \item{Clean sheets}
#' \item{Clearances per 90}
#' \item{Expected assist (xA)}
#' \item{Expected assist (xA) per 90}
#' \item{Expected goals (xG)}
#' \item{Expected goals (xG) per 90}
#' \item{Expected goals on target (xGOT)}
#' \item{FotMob rating}
#' \item{Fouls committed per 90}
#' \item{Goals + Assists}
#' \item{Goals conceded per 90}
#' \item{Goals per 90}
#' \item{Goals prevented}
#' \item{Interceptions per 90}
#' \item{Penalties conceded}
#' \item{Penalties won}
#' \item{Possession won final 3rd per 90}
#' \item{Red cards}
#' \item{Save percentage}
#' \item{Saves per 90}
#' \item{Shots on target per 90}
#' \item{Shots per 90}
#' \item{Successful dribbles per 90}
#' \item{Successful tackles per 90}
#' \item{Top scorer}
#' \item{xG + xA per 90}
#' \item{Yellow cards}
#' }
#'
#' For \code{"team"}, `stat_name` must be one of the following:
#' \itemize{
#' \item{Accurate crosses per match}
#' \item{Accurate long balls per match}
#' \item{Accurate passes per match}
#' \item{Average possession}
#' \item{Big chances created}
#' \item{Big chances missed}
#' \item{Clean sheets}
#' \item{Clearances per match}
#' \item{Expected goals}
#' \item{FotMob rating}
#' \item{Fouls per match}
#' \item{Goals conceded per match}
#' \item{Goals per match}
#' \item{Interceptions per match}
#' \item{Penalties awarded}
#' \item{Penalties conceded}
#' \item{Possession won final 3rd per match}
#' \item{Red cards}
#' \item{Saves per match}
#' \item{Shots on target per match}
#' \item{Successful tackles per match}
#' \item{xG conceded}
#' \item{Yellow cards}
#' }
#'
#' Fotmob has changed these stat names over time, so this list may be outdated. If you try an invalid stat name, the error message will indicate which ones are currently available.
#'
#' @return returns a dataframe of team or player stats
#'
#' @importFrom purrr map_dfr map2_dfr pmap_dfr possibly
#' @importFrom rlang maybe_missing .data
#' @importFrom dplyr filter select
#' @importFrom tibble tibble
#' @importFrom glue glue_collapse
#'
#' @export
#' @examples
#' \donttest{
#' try({
#' epl_team_xg_2021 <- fotmob_get_season_stats(
#' country = "ENG",
#' league_name = "Premier League",
#' season = "2020/2021",
#' stat_name = "Expected goals",
#' team_or_player = "team"
#' )
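#'
#' ## a second, illustrative call for player-level stats; "Expected goals (xG)"
#' ## (from the player list above) is assumed to still be offered by fotmob
#' epl_player_xg_2021 <- fotmob_get_season_stats(
#'   country = "ENG",
#'   league_name = "Premier League",
#'   season_name = "2020/2021",
#'   stat_name = "Expected goals (xG)",
#'   team_or_player = "player"
#' )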
#' })
#' }
fotmob_get_season_stats <- function(
country,
league_name,
league_id,
season_name,
team_or_player = c("team", "player"),
stat_name,
stat_league_name = league_name,
cached = TRUE
) {
match.arg(team_or_player, several.ok = FALSE)
stopifnot("`season_name` cannot be NULL.`" = !is.null(season_name))
urls <- .fotmob_get_league_ids(
cached = cached,
country = rlang::maybe_missing(country, NULL),
league_name = rlang::maybe_missing(league_name, NULL),
league_id = rlang::maybe_missing(league_id, NULL)
)
stat_league_name <- rlang::maybe_missing(stat_league_name, urls$name)
n_league_name <- length(urls$name)
n_stat_league_name <- length(stat_league_name)
## this check only comes into play if the user explicitly specifies `stat_league_name`
if(n_league_name != n_stat_league_name) {
stop(
sprintf(
"`league_name` and `stat_league_name` must have the same length (%s != %s)", n_league_name, n_stat_league_name
)
)
}
urls$stat_league_name <- stat_league_name
fp <- purrr::possibly(
.fotmob_get_single_league_single_season_stats,
otherwise = tibble::tibble(),
quiet = FALSE
)
## Note that this is written in this awkward fashion (instead of expanding on stat, season_name, AND league_id)
## so that we can re-use the season options for a given league without having to re-scrape it every time
## (if we have multiple stats or seasons for a given league).
g <- function(stat_name, season_name, league_id) {
url <- urls %>% dplyr::filter(.data[["id"]] == !!league_id)
country <- url$ccode
league_name <- url$name
page_url <- url$page_url
options <- .fotmob_get_stat_and_season_options(
cached = cached,
country = country,
league_name = league_name,
league_id = league_id,
page_url = page_url,
team_or_player = team_or_player
)
stat_options <- options %>%
dplyr::filter(.data[["option_type"]] == "stat") %>%
dplyr::select(stat_name = .data[["name"]], stat = .data[["id"]])
filt_stat_options <- stat_options %>%
dplyr::filter(.data[["stat_name"]] == !!stat_name)
if(nrow(filt_stat_options) == 0) {
stop(
glue::glue('`"{stat_name}"` is not a valid `stat_name` for `league_name = "{league_name}"` (`league_id = {league_id}`). Try one of the following:\n{glue::glue_collapse(stat_options$stat_name, "\n")}')
)
}
season_options <- options %>%
dplyr::filter(.data[["option_type"]] == "season") %>%
dplyr::select(.data[["league_name"]], season_name = .data[["name"]], season_id = .data[["id"]])
if(nrow(season_options) == 0) {
stop(
glue::glue(
"No seasons with stats found for league. Try one of the following:\n{glue::glue_collapse(stat_options$stat_name, '\n')}"
)
)
}
season_options$league_name <- ifelse(
is.na(season_options$league_name),
league_name,
season_options$league_name
)
purrr::map_dfr(
season_name,
~fp(
country = country,
league_name = league_name,
league_id = league_id,
stat_name = stat_name,
stat = filt_stat_options %>%
dplyr::filter(.data[["stat_name"]] == !!stat_name) %>%
dplyr::select(.data[["stat"]]) %>%
unique(),
stat_league_name = url$stat_league_name,
season_name = .x,
season_options = season_options
)
)
}
params <- expand.grid(
stat_name = stat_name,
season_name = season_name,
stringsAsFactors = FALSE
)
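## e.g. stat_name = c("Expected goals", "Clean sheets") with a single season_name
## gives a 2-row grid; g() is then called once per row for each league id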
purrr::map2_dfr(
params$stat_name,
params$season_name,
~purrr::pmap_dfr(
list(
.x,
.y,
urls$id
),
~g(..1, ..2, ..3)
)
)
}
#' @importFrom glue glue glue_collapse
#' @importFrom dplyr filter mutate
#' @importFrom tibble tibble
#' @importFrom rlang .data
.fotmob_get_single_league_single_season_stats <- function(
country,
league_name,
league_id,
stat_name,
stat,
stat_league_name,
season_name,
season_options
) {
filt_season_options <- season_options %>%
dplyr::filter(
.data[["league_name"]] == !!stat_league_name,
.data[["season_name"]] == !!season_name
)
n_season_options <- nrow(filt_season_options)
print_season_league_name_error <- function(stem) {
glue::glue(
'`season_name` = "{season_name}", `stat_league_name` = "{stat_league_name}" {stem}. Try one of the following `stat_league_name`, `season_name` pairs:\n{glue::glue_collapse(sprintf("%s, %s", season_options$league_name, season_options$season_name), "\n")}'
)
}
if(n_season_options == 0) {
stop(
print_season_league_name_error("not found")
)
}
if(n_season_options > 1) {
stop(
print_season_league_name_error("match more than 1 result")
)
}
fp <- purrr::possibly(
.fotmob_get_single_season_stats,
quiet = FALSE,
otherwise = tibble::tibble()
)
res <- fp(
league_id = league_id,
season_id = filt_season_options$season_id,
stat = stat
)
res %>%
dplyr::mutate(
country = country,
league_name = league_name,
league_id = league_id,
season_name = season_name,
season_id = filt_season_options$season_id,
stat_league_name = stat_league_name,
stat_name = stat_name,
stat = stat,
.before = 1
)
}
## ----- end of file: R/fotmob_stats.R -----
#' Get FBref advanced match stats
#'
#' Returns data frame of selected statistics for each match, for either whole team or individual players.
#' Multiple URLs can be passed to the function, but only one `stat_type` can be selected.
#' Replaces the deprecated function get_advanced_match_stats()
#'
#' @param match_url the fbref.com URL(s) for the required match(es)
#' @param stat_type the type of team statistics the user requires
#' @param team_or_player result either summarised for each team, or individual players
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"summary"}, \emph{"passing"}, \emph{"passing"_types},
#' \emph{"defense" }, \emph{"possession"}, \emph{"misc"}, \emph{"keeper"}
#'
#' @return returns a dataframe of a selected team statistic type for a selected match(es)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' urls <- fb_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")
#'
#' df <- fb_advanced_match_stats(match_url=urls,stat_type="possession",team_or_player="player")
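#'
#' ## an illustrative team-level request for the same matches; "summary" is one
#' ## of the stat_type options listed above
#' team_summary <- fb_advanced_match_stats(
#'   match_url = urls,
#'   stat_type = "summary",
#'   team_or_player = "team"
#' )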
#' })
#' }
fb_advanced_match_stats <- function(match_url, stat_type, team_or_player, time_pause=3) {
main_url <- "https://fbref.com"
time_wait <- time_pause
get_each_match_statistic <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- tryCatch(.load_page(match_url), error = function(e) NA)
if(!is.na(match_page)) {
match_report <- .get_match_report_page(match_page = match_page)
league <- match_page %>%
rvest::html_nodes("#content") %>%
rvest::html_node("a") %>% rvest::html_text()
all_tables <- match_page %>%
rvest::html_nodes(".table_container")
if(stat_type == "summary") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "summary$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "passing") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "passing$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "passing_types") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "passing_types"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "defense") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "defense$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "possession") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "possession$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "misc") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "misc$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "keeper") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "keeper_stats"))] %>%
rvest::html_nodes("table")
}
# shots stat type is not yet built in to the function
# else if(stat_type == "shots") {
# stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "all_shots"))] %>% .[1] %>%
# rvest::html_table()
#
# }
if(length(stat_df) != 0) {
if(!stat_type %in% c("shots")) {
Team <- match_page %>%
rvest::html_nodes("div+ strong a") %>%
rvest::html_text() %>% .[1]
home_stat <- stat_df[1] %>%
rvest::html_table() %>% data.frame() %>%
.clean_match_advanced_stats_data()
Home_Away <- "Home"
home_stat <- cbind(Team, Home_Away, home_stat)
Team <- match_page %>%
rvest::html_nodes("div+ strong a") %>%
rvest::html_text() %>% .[2]
away_stat <- stat_df[2] %>%
rvest::html_table() %>% data.frame() %>%
.clean_match_advanced_stats_data()
Home_Away <- "Away"
away_stat <- cbind(Team, Home_Away, away_stat)
stat_df_output <- dplyr::bind_rows(home_stat, away_stat)
if(any(grepl("Nation", colnames(stat_df_output)))) {
if(!stat_type %in% c("keeper", "shots")) {
if(team_or_player == "team") {
stat_df_output <- stat_df_output %>%
dplyr::filter(stringr::str_detect(.data[["Player"]], " Players")) %>%
dplyr::select(-.data[["Player"]], -.data[["Player_Num"]], -.data[["Nation"]], -.data[["Pos"]], -.data[["Age"]])
} else {
stat_df_output <- stat_df_output %>%
dplyr::filter(!stringr::str_detect(.data[["Player"]], " Players"))
}
}
} else {
if(!stat_type %in% c("keeper", "shots")) {
if(team_or_player == "team") {
stat_df_output <- stat_df_output %>%
dplyr::filter(stringr::str_detect(.data[["Player"]], " Players")) %>%
dplyr::select(-.data[["Player"]], -.data[["Player_Num"]], -.data[["Pos"]], -.data[["Age"]])
} else {
stat_df_output <- stat_df_output %>%
dplyr::filter(!stringr::str_detect(.data[["Player"]], " Players"))
}
}
}
stat_df_output <- cbind(match_report, stat_df_output)
} else if(stat_type == "shots") {
}
} else {
print(glue::glue("NOTE: Stat Type '{stat_type}' is not found for this match. Check {match_url} to see if it exists."))
stat_df_output <- data.frame()
}
} else {
print(glue::glue("Stats data not available for {match_url}"))
stat_df_output <- data.frame()
}
return(stat_df_output)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
suppressWarnings(
final_df <- match_url %>%
purrr::map_df(get_each_match_statistic) )
return(final_df)
}
#' Get advanced match stats
#'
#' Returns data frame of selected statistics for each match, for either whole team or individual players.
#' Multiple URLs can be passed to the function, but only one `stat_type` can be selected
#'
#' @param match_url the fbref.com URL(s) for the required match(es)
#' @param stat_type the type of team statistics the user requires
#' @param team_or_player result either summarised for each team, or individual players
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"summary"}, \emph{"passing"}, \emph{"passing"_types},
#' \emph{"defense" }, \emph{"possession"}, \emph{"misc"}, \emph{"keeper"}
#'
#' @return returns a dataframe of a selected team statistic type for a selected match(es)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' urls <- get_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")
#'
#' df <- get_advanced_match_stats(match_url=urls,stat_type="possession",team_or_player="player")
#' })
#' }
get_advanced_match_stats <- function(match_url, stat_type, team_or_player, time_pause=3) {
.Deprecated("fb_advanced_match_stats")
main_url <- "https://fbref.com"
time_wait <- time_pause
get_each_match_statistic <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- tryCatch(xml2::read_html(match_url), error = function(e) NA)
if(!is.na(match_page)) {
match_report <- .get_match_report_page(match_page = match_page)
league_url <- match_page %>%
rvest::html_nodes("#content") %>%
rvest::html_node("a") %>%
rvest::html_attr("href") %>% paste0(main_url, .)
all_tables <- match_page %>%
rvest::html_nodes(".table_container")
if(stat_type == "summary") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "summary$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "passing") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "passing$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "passing_types") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "passing_types"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "defense") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "defense$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "possession") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "possession$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "misc") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "misc$"))] %>%
rvest::html_nodes("table")
} else if(stat_type == "keeper") {
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "keeper_stats"))] %>%
rvest::html_nodes("table")
}
# shots stat type is not yet built in to the function
# else if(stat_type == "shots") {
# stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "all_shots"))] %>% .[1] %>%
# rvest::html_table()
#
# }
if(length(stat_df) != 0) {
if(!stat_type %in% c("shots")) {
Team <- match_page %>%
rvest::html_nodes("div:nth-child(1) div strong a") %>%
rvest::html_text() %>% .[1]
home_stat <- stat_df[1] %>%
rvest::html_table() %>% data.frame() %>%
.clean_match_advanced_stats_data()
Home_Away <- "Home"
home_stat <- cbind(Team, Home_Away, home_stat)
Team <- match_page %>%
rvest::html_nodes("div:nth-child(2) div strong a") %>%
rvest::html_text() %>% .[2]
away_stat <- stat_df[2] %>%
rvest::html_table() %>% data.frame() %>%
.clean_match_advanced_stats_data()
Home_Away <- "Away"
away_stat <- cbind(Team, Home_Away, away_stat)
stat_df_output <- dplyr::bind_rows(home_stat, away_stat)
if(any(grepl("Nation", colnames(stat_df_output)))) {
if(!stat_type %in% c("keeper", "shots")) {
if(team_or_player == "team") {
stat_df_output <- stat_df_output %>%
dplyr::filter(stringr::str_detect(.data[["Player"]], " Players")) %>%
dplyr::select(-.data[["Player"]], -.data[["Player_Num"]], -.data[["Nation"]], -.data[["Pos"]], -.data[["Age"]])
} else {
stat_df_output <- stat_df_output %>%
dplyr::filter(!stringr::str_detect(.data[["Player"]], " Players"))
}
}
} else {
if(!stat_type %in% c("keeper", "shots")) {
if(team_or_player == "team") {
stat_df_output <- stat_df_output %>%
dplyr::filter(stringr::str_detect(.data[["Player"]], " Players")) %>%
dplyr::select(-.data[["Player"]], -.data[["Player_Num"]], -.data[["Pos"]], -.data[["Age"]])
} else {
stat_df_output <- stat_df_output %>%
dplyr::filter(!stringr::str_detect(.data[["Player"]], " Players"))
}
}
}
stat_df_output <- cbind(match_report, stat_df_output)
} else if(stat_type == "shots") {
}
} else {
print(glue::glue("NOTE: Stat Type '{stat_type}' is not found for this match. Check {match_url} to see if it exists."))
stat_df_output <- data.frame()
}
} else {
print(glue::glue("Stats data not available for {match_url}"))
stat_df_output <- data.frame()
}
return(stat_df_output)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
suppressWarnings(
final_df <- match_url %>%
purrr::map_df(get_each_match_statistic) )
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
seasons <- seasons %>%
dplyr::filter(.data[["seasons_urls"]] %in% final_df$League_URL) %>%
dplyr::select(League=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season=.data[["seasons"]], League_URL=.data[["seasons_urls"]])
final_df <- seasons %>%
dplyr::left_join(final_df, by = "League_URL") %>%
dplyr::select(-.data[["League_URL"]]) %>% dplyr::distinct(.keep_all = T)
return(final_df)
}
## ----- end of file: R/get_advanced_match_stats.R -----
#' Get FBref match lineups
#'
#' Returns lineups for home and away teams for a selected match
#' Replaces the deprecated function get_match_lineups
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the team lineups for a selected match
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- fb_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1]
#' df <- fb_match_lineups(match_url = match)
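#'
#' ## a minimal post-processing sketch: the Starting column returned by the
#' ## function is either "Pitch" or "Bench", so starters can be filtered with
#' starters <- df[df$Starting == "Pitch", ]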
#' })
#' }
fb_match_lineups <- function(match_url, time_pause=3) {
# .pkg_message("Scraping lineups")
main_url <- "https://fbref.com"
time_wait <- time_pause
get_each_match_lineup <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- tryCatch(.load_page(match_url), error = function(e) NA)
if(!is.na(match_page)) {
match_date <- match_page %>% rvest::html_nodes(".venuetime") %>% rvest::html_attr("data-venue-date")
lineups <- match_page %>% rvest::html_nodes(".lineup") %>% rvest::html_nodes("table")
home <- 1
away <- 2
get_each_lineup <- function(home_away) {
lineup <- lineups[home_away] %>% rvest::html_table() %>% data.frame()
player_urls <- lineups[home_away] %>% rvest::html_nodes("a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
formation <- names(lineup)[1]
is_diamond <- grepl("\\..$", formation)
# on Windows, the diamond is coming through as utf-8, while on MacOS coming through as ".."
if(grepl("u", formation, ignore.case = T)) {
formation <- formation %>% gsub("u\\..*", "", ., ignore.case = T) %>%stringr::str_extract_all(., "[[:digit:]]") %>% unlist() %>% paste(collapse = "-")
} else {
formation <- formation %>% stringr::str_extract_all(., "[[:digit:]]") %>% unlist() %>% paste(collapse = "-")
}
if(is_diamond) {
formation <- paste0(formation, "-diamond")
}
tryCatch( {team <- match_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[home_away]}, error = function(e) {team <- NA})
bench_index <- which(lineup[,1] == "Bench")
suppressMessages(lineup <- lineup[1:(bench_index-1),] %>% dplyr::mutate(Starting = "Pitch") %>%
dplyr::bind_rows(
lineup[(bench_index+1):nrow(lineup),] %>% dplyr::mutate(Starting = "Bench")
) )
lineup <- lineup %>%
dplyr::mutate(Matchday = match_date,
Team = team,
Formation = formation,
PlayerURL = player_urls)
names(lineup) <- c("Player_Num", "Player_Name", "Starting", "Matchday", "Team", "Formation", "PlayerURL")
all_tables <- match_page %>%
rvest::html_nodes(".table_container")
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "summary$"))] %>%
rvest::html_nodes("table")
if(home_away == 1) {
home_or_away <- "Home"
} else {
home_or_away <- "Away"
}
additional_info <- stat_df[home_away]%>% rvest::html_table() %>% data.frame()
additional_info <- additional_info %>%
.clean_match_advanced_stats_data() %>%
dplyr::filter(!is.na(.data[["Player_Num"]])) %>%
dplyr::bind_cols(Team=team, Home_Away=home_or_away, .) %>%
dplyr::mutate(Player_Num = as.character(.data[["Player_Num"]]))
if(any(grepl("Nation", colnames(additional_info)))) {
additional_info <- additional_info %>%
dplyr::select(.data[["Team"]], .data[["Home_Away"]], .data[["Player"]], .data[["Player_Num"]], .data[["Nation"]], .data[["Pos"]], .data[["Age"]], .data[["Min"]], .data[["Gls"]], .data[["Ast"]], .data[["CrdY"]], .data[["CrdR"]])
} else {
additional_info <- additional_info %>%
dplyr::select(.data[["Team"]], .data[["Home_Away"]], .data[["Player"]], .data[["Player_Num"]], .data[["Pos"]], .data[["Age"]], .data[["Min"]], .data[["Gls"]], .data[["Ast"]], .data[["CrdY"]], .data[["CrdR"]])
}
lineup <- lineup %>%
dplyr::mutate(Player_Num = as.character(.data[["Player_Num"]])) %>%
dplyr::left_join(additional_info, by = c("Team", "Player_Name" = "Player", "Player_Num")) %>%
dplyr::mutate(Home_Away = ifelse(is.na(.data[["Home_Away"]]), home_or_away, .data[["Home_Away"]])) %>%
dplyr::select(.data[["Matchday"]], .data[["Team"]], .data[["Home_Away"]], .data[["Formation"]], .data[["Player_Num"]], .data[["Player_Name"]], .data[["Starting"]], dplyr::everything()) %>%
dplyr::mutate(Matchday = lubridate::ymd(.data[["Matchday"]])) %>%
dplyr::mutate(MatchURL = match_url)
return(lineup)
}
all_lineup <- tryCatch(c(home, away) %>%
purrr::map_df(get_each_lineup), error = function(e) data.frame())
if(nrow(all_lineup) == 0) {
print(glue::glue("Lineups not available for {match_url}"))
}
} else {
print(glue::glue("Lineups not available for {match_url}"))
all_lineup <- data.frame()
}
return(all_lineup)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_lineups <- match_url %>%
purrr::map_df(get_each_match_lineup)
return(all_lineups)
}
#' Get match lineups
#'
#' Returns lineups for home and away teams for a selected match
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the team lineups for a selected match
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- get_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1]
#' df <- get_match_lineups(match_url = match)
#' })
#' }
get_match_lineups <- function(match_url, time_pause=3) {
# .pkg_message("Scraping lineups")
.Deprecated("fb_match_lineups")
main_url <- "https://fbref.com"
time_wait <- time_pause
get_each_match_lineup <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- tryCatch(xml2::read_html(match_url), error = function(e) NA)
if(!is.na(match_page)) {
match_date <- match_page %>% rvest::html_nodes(".venuetime") %>% rvest::html_attr("data-venue-date")
lineups <- match_page %>% rvest::html_nodes(".lineup") %>% rvest::html_nodes("table")
home <- 1
away <- 2
get_each_lineup <- function(home_away) {
lineup <- lineups[home_away] %>% rvest::html_table() %>% data.frame()
player_urls <- lineups[home_away] %>% rvest::html_nodes("a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
formation <- names(lineup)[1]
is_diamond <- grepl("\\..$", formation)
# on Windows, the diamond is coming through as utf-8, while on MacOS coming through as ".."
if(grepl("u", formation, ignore.case = T)) {
formation <- formation %>% gsub("u\\..*", "", ., ignore.case = T) %>%stringr::str_extract_all(., "[[:digit:]]") %>% unlist() %>% paste(collapse = "-")
} else {
formation <- formation %>% stringr::str_extract_all(., "[[:digit:]]") %>% unlist() %>% paste(collapse = "-")
}
if(is_diamond) {
formation <- paste0(formation, "-diamond")
}
tryCatch( {team <- match_page %>% rvest::html_nodes("div:nth-child(2) div strong a") %>% rvest::html_text() %>% .[home_away]}, error = function(e) {team <- NA})
bench_index <- which(lineup[,1] == "Bench")
suppressMessages(lineup <- lineup[1:(bench_index-1),] %>% dplyr::mutate(Starting = "Pitch") %>%
dplyr::bind_rows(
lineup[(bench_index+1):nrow(lineup),] %>% dplyr::mutate(Starting = "Bench")
) )
lineup <- lineup %>%
dplyr::mutate(Matchday = match_date,
Team = team,
Formation = formation,
PlayerURL = player_urls)
names(lineup) <- c("Player_Num", "Player_Name", "Starting", "Matchday", "Team", "Formation", "PlayerURL")
all_tables <- match_page %>%
rvest::html_nodes(".table_container")
stat_df <- all_tables[which(stringr::str_detect(all_tables %>% rvest::html_attr("id"), "summary$"))] %>%
rvest::html_nodes("table")
if(home_away == 1) {
home_or_away <- "Home"
} else {
home_or_away <- "Away"
}
additional_info <- stat_df[home_away]%>% rvest::html_table() %>% data.frame()
additional_info <- additional_info %>%
.clean_match_advanced_stats_data() %>%
dplyr::filter(!is.na(.data[["Player_Num"]])) %>%
dplyr::bind_cols(Team=team, Home_Away=home_or_away, .) %>%
dplyr::mutate(Player_Num = as.character(.data[["Player_Num"]]))
if(any(grepl("Nation", colnames(additional_info)))) {
additional_info <- additional_info %>%
dplyr::select(.data[["Team"]], .data[["Home_Away"]], .data[["Player"]], .data[["Player_Num"]], .data[["Nation"]], .data[["Pos"]], .data[["Age"]], .data[["Min"]], .data[["Gls"]], .data[["Ast"]], .data[["CrdY"]], .data[["CrdR"]])
} else {
additional_info <- additional_info %>%
dplyr::select(.data[["Team"]], .data[["Home_Away"]], .data[["Player"]], .data[["Player_Num"]], .data[["Pos"]], .data[["Age"]], .data[["Min"]], .data[["Gls"]], .data[["Ast"]], .data[["CrdY"]], .data[["CrdR"]])
}
lineup <- lineup %>%
dplyr::mutate(Player_Num = as.character(.data[["Player_Num"]])) %>%
dplyr::left_join(additional_info, by = c("Team", "Player_Name" = "Player", "Player_Num")) %>%
dplyr::mutate(Home_Away = ifelse(is.na(.data[["Home_Away"]]), home_or_away, .data[["Home_Away"]])) %>%
dplyr::select(.data[["Matchday"]], .data[["Team"]], .data[["Home_Away"]], .data[["Formation"]], .data[["Player_Num"]], .data[["Player_Name"]], .data[["Starting"]], dplyr::everything()) %>%
dplyr::mutate(Matchday = lubridate::ymd(.data[["Matchday"]])) %>%
dplyr::mutate(MatchURL = match_url)
return(lineup)
}
all_lineup <- tryCatch(c(home, away) %>%
purrr::map_df(get_each_lineup), error = function(e) data.frame())
if(nrow(all_lineup) == 0) {
print(glue::glue("Lineups not available for {match_url}"))
}
} else {
print(glue::glue("Lineups not available for {match_url}"))
all_lineup <- data.frame()
}
return(all_lineup)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_lineups <- match_url %>%
purrr::map_df(get_each_match_lineup)
return(all_lineups)
}
## ----- end of file: R/get_match_lineups.R -----
#' Get match report for webpage
#'
#' Internal function to return match report details for a selected match, with input being a webpage, not url
#'
#' @param match_page the parsed fbref.com page for the required match (not the URL)
#'
#' @return returns a dataframe with the match details for a selected match
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @noRd
#'
.get_match_report_page <- function(match_page) {
# seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
Game_URL <- match_page %>% rvest::html_nodes(".langs") %>% rvest::html_node(".en") %>% rvest::html_attr("href")
each_game_page <- tryCatch(match_page, error = function(e) NA)
# tryCatch( {each_game_page <- xml2::read_html(match_url)}, error = function(e) {each_game_page <- NA})
if(!is.na(each_game_page)) {
game <- each_game_page %>% rvest::html_nodes("h1") %>% rvest::html_text()
# tryCatch( {League <- each_game_page %>% rvest::html_nodes("h1+ div a") %>% rvest::html_text()}, error = function(e) {League <- NA})
tryCatch( {League <- each_game_page %>% rvest::html_nodes("h1+ div a:nth-child(1)") %>% rvest::html_text()}, error = function(e) {League <- NA})
tryCatch( {Match_Date <- each_game_page %>% rvest::html_nodes(".venuetime") %>% rvest::html_attr("data-venue-date")}, error = function(e) {Match_Date <- NA})
tryCatch( {Matchweek <- each_game_page %>% rvest::html_nodes("h1+ div") %>% rvest::html_text()}, error = function(e) {Matchweek <- NA})
tryCatch( {Home_Team <- each_game_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[1]}, error = function(e) {Home_Team <- NA})
tryCatch( {Home_Formation <- each_game_page %>% rvest::html_nodes(".lineup#a") %>% rvest::html_nodes("th") %>% rvest::html_text() %>% .[1] %>% gsub(".*\\(", "", .) %>% gsub("\\)", "", .)}, error = function(e) {Home_Formation <- NA})
tryCatch( {Home_Score <- each_game_page %>% rvest::html_nodes(".scores") %>% rvest::html_nodes(".score") %>% rvest::html_text() %>% .[1]}, error = function(e) {Home_Score <- NA})
tryCatch( {Home_xG <- each_game_page %>% rvest::html_nodes(".scores") %>% rvest::html_nodes(".score_xg") %>% rvest::html_text() %>% .[1]}, error = function(e) {Home_xG <- NA})
tryCatch( {Home_Goals <- each_game_page %>% rvest::html_nodes("#a") %>% .[[1]] %>% rvest::html_text() %>%
stringr::str_squish()}, error = function(e) {Home_Goals <- NA})
tryCatch( {Home_Yellow_Cards <- each_game_page %>% rvest::html_nodes(".cards") %>% .[1] %>% rvest::html_nodes("span.yellow_card, span.yellow_red_card") %>% length()}, error = function(e) {Home_Yellow_Cards <- 0})
tryCatch( {Home_Red_Cards <- each_game_page %>% rvest::html_nodes(".cards") %>% .[1] %>% rvest::html_nodes("span.red_card, span.yellow_red_card") %>% length()}, error = function(e) {Home_Red_Cards <- 0})
tryCatch( {Away_Team <- each_game_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[2]}, error = function(e) {Away_Team <- NA})
tryCatch( {Away_Formation <- each_game_page %>% rvest::html_nodes(".lineup#b") %>% rvest::html_nodes("th") %>% rvest::html_text() %>% .[1] %>% gsub(".*\\(", "", .) %>% gsub("\\)", "", .)}, error = function(e) {Away_Formation <- NA})
tryCatch( {Away_Score <- each_game_page %>% rvest::html_nodes(".scores") %>% rvest::html_nodes(".score") %>% rvest::html_text() %>% .[2]}, error = function(e) {Away_Score <- NA})
tryCatch( {Away_xG <- each_game_page %>% rvest::html_nodes(".scores") %>% rvest::html_nodes(".score_xg") %>% rvest::html_text() %>% .[2]}, error = function(e) {Away_xG <- NA})
tryCatch( {Away_Goals <- each_game_page %>% rvest::html_nodes("#b") %>% .[[1]] %>% rvest::html_text() %>% stringr::str_squish()}, error = function(e) {Away_Goals <- NA})
tryCatch( {Away_Yellow_Cards <- each_game_page %>% rvest::html_nodes(".cards") %>% .[2] %>% rvest::html_nodes("span.yellow_card, span.yellow_red_card") %>% length()}, error = function(e) {Away_Yellow_Cards <- 0})
tryCatch( {Away_Red_Cards <- each_game_page %>% rvest::html_nodes(".cards") %>% .[2] %>% rvest::html_nodes("span.red_card, span.yellow_red_card") %>% length()}, error = function(e) {Away_Red_Cards <- 0})
suppressWarnings(each_game <- cbind(League, Match_Date, Matchweek, Home_Team, Home_Formation, Home_Score, Home_xG, Home_Goals, Home_Yellow_Cards, Home_Red_Cards, Away_Team, Away_Formation, Away_Score, Away_xG, Away_Goals, Away_Yellow_Cards, Away_Red_Cards, Game_URL) %>%
dplyr::as_tibble() %>%
dplyr::mutate(Home_Score = as.numeric(.data[["Home_Score"]]),
Home_xG = as.numeric(.data[["Home_xG"]]),
Away_Score = as.numeric(.data[["Away_Score"]]),
Away_xG = as.numeric(.data[["Away_xG"]]))
)
} else {
print(glue::glue("{Game_URL} is not available"))
each_game <- data.frame()
}
return(each_game)
}
#' Get FBref match report
#'
#' Returns match report details for selected matches.
#' Replaces the deprecated function get_match_report
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the match details for a selected match
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- fb_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1]
#' df <- fb_match_report(match_url = match)
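#'
#' ## match_url can also be a vector of URLs; results are row-bound into one
#' ## data frame (a sketch assuming at least two match URLs are available)
#' matches <- fb_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1:2]
#' df_many <- fb_match_report(match_url = matches)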
#' })
#' }
fb_match_report <- function(match_url, time_pause=3) {
time_wait <- time_pause
each_match_report <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- tryCatch(.load_page(match_url), error = function(e) NA)
if(!is.na(match_page)) {
each_game <- .get_match_report_page(match_page)
} else {
print(glue::glue("{match_url} is not available"))
each_game <- data.frame()
}
return(each_game)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_games <- match_url %>%
purrr::map_df(each_match_report)
return(all_games)
}
#' Get match report
#'
#' Returns match report details for selected matches
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the match details for a selected match
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- get_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1]
#' df <- get_match_report(match_url = match)
#' })
#' }
get_match_report <- function(match_url, time_pause=3) {
.Deprecated("fb_match_report")
time_wait <- time_pause
each_match_report <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- tryCatch(xml2::read_html(match_url), error = function(e) NA)
if(!is.na(match_page)) {
each_game <- .get_match_report_page(match_page)
} else {
print(glue::glue("{match_url} is not available"))
each_game <- data.frame()
}
return(each_game)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_games <- match_url %>%
purrr::map_df(each_match_report)
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
seasons <- seasons %>%
dplyr::filter(.data[["seasons_urls"]] %in% all_games$League_URL) %>%
dplyr::select(League=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season=.data[["seasons"]], League_URL=.data[["seasons_urls"]])
all_games <- seasons %>%
dplyr::left_join(all_games, by = "League_URL") %>%
dplyr::select(-.data[["League_URL"]]) %>% dplyr::distinct(.keep_all = T)
return(all_games)
}
## ----- end of file: R/get_match_report.R -----
#' Get single match results
#'
#' Internal function used in get_match_results
#'
#' @param fixture_url URL of the league season's fixtures and results page
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a raw dataframe with the results of the league season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#' @noRd
#'
#' @examples
#' \dontrun{
#' try({
#' df <- .get_each_season_results(fixture_url =
#' "https://fbref.com/en/comps/9/schedule/Premier-League-Scores-and-Fixtures"
#' )
#'
#' })
#' }
.get_each_season_results <- function(fixture_url, time_pause=3) {
main_url <- "https://fbref.com"
# pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
fixtures_page <- .load_page(fixture_url)
season_name <- fixtures_page %>% rvest::html_nodes("h2 span") %>% rvest::html_text() %>% .[1]
tab_holder <- fixtures_page %>%
rvest::html_node(".stats_table tbody") %>% rvest::html_nodes("tr")
spacer_idx <- !grepl("spacer partial", xml2::xml_attrs(tab_holder))
tab_holder <- tab_holder[spacer_idx]
idx_rm <- grep("thead", xml2::xml_attrs(tab_holder))
if(length(idx_rm) != 0) {
tab_holder <- tab_holder[-idx_rm]
}
season_summary <- tryCatch(fixtures_page %>%
rvest::html_table() %>% .[1] %>% data.frame(), error = function(e) data.frame())
# error handling - the first available Iranian season has malformed HTML, meaning the code above won't work
# (https://fbref.com/en/comps/64/741/schedule/2014-2015-Persian-Gulf-Pro-League-Scores-and-Fixtures)
if(nrow(season_summary) == 0) {
season_summary <- fixtures_page %>% rvest::html_node(".stats_table") %>%
rvest::html_table() %>% data.frame()
}
season_summary <- season_summary[spacer_idx,]
if(length(idx_rm) != 0) {
season_summary <- season_summary[-idx_rm,]
}
get_url <- function(tab_element) {
a <- tab_element %>% rvest::html_node(xpath='.//*[@data-stat="match_report"]//a') %>% rvest::html_attr("href")
if(is.na(a) || length(a) == 0) {
a <- NA_character_
} else {
a <- paste0(main_url, a)
}
return(a)
}
match_urls <- purrr::map_chr(tab_holder, get_url)
# match_urls <- match_urls[!duplicated(match_urls, incomparables = NA)]
suppressWarnings(
season_summary <- season_summary %>%
dplyr::filter(is.na(.data[["Time"]]) | .data[["Time"]] != "Time") %>%
dplyr::mutate(Score = iconv(.data[["Score"]], 'utf-8', 'ascii', sub=' ') %>% stringr::str_squish()) %>%
tidyr::separate(.data[["Score"]], into = c("HomeGoals", "AwayGoals"), sep = " ") %>%
dplyr::mutate(HomeGoals = as.numeric(.data[["HomeGoals"]]),
AwayGoals = as.numeric(.data[["AwayGoals"]]),
Attendance = as.numeric(gsub(",", "", .data[["Attendance"]])))
)
season_summary <- season_summary %>%
dplyr::mutate(HomeGoals = ifelse(is.na(.data[["HomeGoals"]]) & !is.na(.data[["AwayGoals"]]), .data[["AwayGoals"]], .data[["HomeGoals"]]),
AwayGoals = ifelse(is.na(.data[["AwayGoals"]]) & !is.na(.data[["HomeGoals"]]), .data[["HomeGoals"]], .data[["AwayGoals"]]))
season_summary <- cbind(fixture_url, season_summary)
if(!any(stringr::str_detect(names(season_summary), "Round"))) {
Round <- rep(NA, nrow(season_summary))
season_summary <- cbind(Round, season_summary)
}
if(!any(stringr::str_detect(names(season_summary), "Wk"))) {
Wk <- rep(NA, nrow(season_summary))
season_summary <- cbind(Wk, season_summary)
}
if(any(stringr::str_detect(names(season_summary), "xG"))) {
season_summary <- season_summary %>%
dplyr::select(.data[["fixture_url"]], Round, .data[["Wk"]], .data[["Day"]], .data[["Date"]], .data[["Time"]], .data[["Home"]], .data[["HomeGoals"]], Home_xG=.data[["xG"]], .data[["Away"]], .data[["AwayGoals"]], Away_xG=.data[["xG.1"]], .data[["Attendance"]], .data[["Venue"]], .data[["Referee"]], .data[["Notes"]]) %>%
dplyr::mutate(Home_xG = as.numeric(.data[["Home_xG"]]),
Away_xG = as.numeric(.data[["Away_xG"]]))
} else {
season_summary <- season_summary %>%
dplyr::select(.data[["fixture_url"]], Round, .data[["Wk"]], .data[["Day"]], .data[["Date"]], .data[["Time"]], .data[["Home"]], .data[["HomeGoals"]], .data[["Away"]], .data[["AwayGoals"]], .data[["Attendance"]], .data[["Venue"]], .data[["Referee"]], .data[["Notes"]])
}
season_summary <- season_summary %>%
dplyr::mutate(Wk = as.character(.data[["Wk"]])) %>%
dplyr::mutate(MatchURL = match_urls)
return(season_summary)
}
#' Get FBref match results
#'
#' Returns the game results for a given league season(s)
#' Replaces the deprecated function get_match_results
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M" or "F"
#' @param season_end_year the year(s) the season concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#' @param non_dom_league_url the URL for Cups and Competitions found at https://fbref.com/en/comps/
#'
#' @return returns a dataframe with the results of the competition, season and gender
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' df <- fb_match_results(country = c("ITA"), gender = "M", season_end_year = 2021)
#' # for results from English Championship:
#' df <- fb_match_results(country = "ENG", gender = "M", season_end_year = 2021, tier = "2nd")
#' # for international friendlies:
#'
#' })
#' }
fb_match_results <- function(country, gender, season_end_year, tier = "1st", non_dom_league_url = NA) {
main_url <- "https://fbref.com"
# .pkg_message("Scraping match results")
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
cups_url <- non_dom_league_url
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
if(is.na(cups_url)) {
fixtures_urls <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Leagues")) %>%
dplyr::filter(country %in% country_abbr,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
tier %in% comp_tier,
!is.na(.data[["fixtures_url"]])) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(.data[["fixtures_url"]]) %>% unique()
} else {
fixtures_urls <- seasons %>%
dplyr::filter(.data[["comp_url"]] %in% cups_url,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
!is.na(.data[["fixtures_url"]])) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(.data[["fixtures_url"]]) %>% unique()
}
stopifnot("Data not available for the season(s) selected" = length(fixtures_urls) > 0)
# create the progress bar with a progress function.
# pb <- progress::progress_bar$new(total = length(fixtures_urls))
all_results <- fixtures_urls %>%
purrr::map_df(.get_each_season_results)
all_results <- seasons %>%
dplyr::select(Competition_Name=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season_End_Year=.data[["season_end_year"]], .data[["seasons_urls"]], .data[["fixtures_url"]]) %>%
dplyr::right_join(all_results, by = c("fixtures_url" = "fixture_url")) %>%
dplyr::select(-.data[["seasons_urls"]], -.data[["fixtures_url"]]) %>%
dplyr::mutate(Date = lubridate::ymd(.data[["Date"]])) %>%
dplyr::arrange(.data[["Country"]], .data[["Competition_Name"]], .data[["Gender"]], .data[["Season_End_Year"]], .data[["Wk"]], .data[["Date"]], .data[["Time"]]) %>% dplyr::distinct(.keep_all = T)
# .pkg_message("Match results finished scraping")
return(all_results)
}
#' Get match results
#'
#' Returns the game results for a given league season(s)
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M" or "F"
#' @param season_end_year the year(s) the season concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#' @param non_dom_league_url the URL for Cups and Competitions found at https://fbref.com/en/comps/
#'
#' @return returns a dataframe with the results of the competition, season and gender
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' df <- get_match_results(country = c("ITA"), gender = "M", season_end_year = 2021)
#' # for results from English Championship:
#' df <- get_match_results(country = "ENG", gender = "M", season_end_year = 2021, tier = "2nd")
#' # for international friendlies:
#'
#' })
#' }
get_match_results <- function(country, gender, season_end_year, tier = "1st", non_dom_league_url = NA) {
.Deprecated("fb_match_results")
main_url <- "https://fbref.com"
# .pkg_message("Scraping match results")
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
cups_url <- non_dom_league_url
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
if(is.na(cups_url)) {
fixtures_urls <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Leagues")) %>%
dplyr::filter(country %in% country_abbr,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
tier %in% comp_tier,
!is.na(.data[["fixtures_url"]])) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(.data[["fixtures_url"]]) %>% unique()
} else {
fixtures_urls <- seasons %>%
dplyr::filter(.data[["comp_url"]] %in% cups_url,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
!is.na(.data[["fixtures_url"]])) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(.data[["fixtures_url"]]) %>% unique()
}
stopifnot("Data not available for the season(s) selected" = length(fixtures_urls) > 0)
# create the progress bar with a progress function.
# pb <- progress::progress_bar$new(total = length(fixtures_urls))
all_results <- fixtures_urls %>%
purrr::map_df(.get_each_season_results)
all_results <- seasons %>%
dplyr::select(Competition_Name=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season_End_Year=.data[["season_end_year"]], .data[["seasons_urls"]], .data[["fixtures_url"]]) %>%
dplyr::right_join(all_results, by = c("fixtures_url" = "fixture_url")) %>%
dplyr::select(-.data[["seasons_urls"]], -.data[["fixtures_url"]]) %>%
dplyr::mutate(Date = lubridate::ymd(.data[["Date"]])) %>%
dplyr::arrange(.data[["Country"]], .data[["Competition_Name"]], .data[["Gender"]], .data[["Season_End_Year"]], .data[["Wk"]], .data[["Date"]], .data[["Time"]]) %>% dplyr::distinct(.keep_all = T)
# .pkg_message("Match results finished scraping")
return(all_results)
}
## ----- end of file: R/get_match_results.R -----
#' Get FBref match shooting event data
#'
#' Returns detailed player shooting data for home and away teams for a selected match(es)
#' Replaces the deprecated function get_match_shooting
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- "https://fbref.com/en/matches/bf52349b/Fulham-Arsenal-September-12-2020-Premier-League"
#' df <- fb_match_shooting(match_url = match)
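#'
#' ## the returned Match_Half column codes regulation halves as 1-2, extra time
#' ## as 3-4 and anything else (e.g. shootouts) as 5, so first-half shots are
#' first_half <- df[df$Match_Half == 1, ]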
#' })
#' }
#'
fb_match_shooting <- function(match_url, time_pause=3) {
# .pkg_message("Scraping detailed shot and shot creation data...")
time_wait <- time_pause
get_each_match_shooting_data <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- .load_page(match_url)
tryCatch( {home_team <- match_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[1]}, error = function(e) {home_team <- NA})
tryCatch( {away_team <- match_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[2]}, error = function(e) {away_team <- NA})
# home_away_df <- data.frame(Team=home_team, Home_Away="Home") %>%
# rbind(data.frame(Team=away_team, Home_Away = "Away"))
match_date <- match_page %>% rvest::html_nodes(".venuetime") %>% rvest::html_attr("data-venue-date")
all_shots <- match_page %>% rvest::html_nodes("#switcher_shots") %>% rvest::html_nodes("div")
if(length(all_shots) > 0) {
# all_shots <- match_page %>% rvest::html_nodes("#shots_all")
# shot_df <- all_shots %>% rvest::html_table() %>% data.frame()
# function to clean home and away df
prep_shot_df <- function(shot_df) {
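## the first scraped row holds the real column headers; they are pasted onto
## the default names, cleaned, and the header row itself is then dropped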
var_names <- shot_df[1,] %>% as.character()
new_names <- paste(var_names, names(shot_df), sep = "_")
new_names <- gsub("_Var.[0-9]", "", new_names) %>% gsub(".1.1", ".1", .) %>% gsub(".2.1", ".2", .) %>% gsub("\\.", "_", .)
names(shot_df) <- new_names
shot_df <- shot_df[-1,]
shot_df <- shot_df %>%
dplyr::mutate(Match_Half = dplyr::case_when(
as.numeric(gsub("\\+.*", "", .data[["Minute"]])) <= 45 ~ 1,
dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Minute"]])), 46, 90) ~ 2,
dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Minute"]])), 91, 105) ~ 3,
dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Minute"]])), 106, 120) ~ 4,
TRUE ~ 5))
shot_df <- shot_df %>%
dplyr::filter(.data[["Minute"]] != "")
shot_df <- dplyr::bind_cols(Date=match_date, shot_df)
return(shot_df)
}
home_shot_df <- tryCatch(all_shots[2] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame(), error = function(e) data.frame())
if(nrow(home_shot_df) > 0) {
home_shot_df <- prep_shot_df(home_shot_df) %>%
dplyr::mutate(Home_Away = "Home")
}
away_shot_df <- tryCatch(all_shots[3] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame(), error = function(e) data.frame())
if(nrow(away_shot_df) > 0) {
away_shot_df <- prep_shot_df(away_shot_df) %>%
dplyr::mutate(Home_Away = "Away")
}
all_shot_df <- home_shot_df %>%
rbind(away_shot_df)
all_shot_df <- all_shot_df %>%
dplyr::select(.data[["Date"]], .data[["Squad"]], .data[["Home_Away"]], .data[["Match_Half"]], dplyr::everything())
} else {
print(glue::glue("Detailed shot data unavailable for {match_url}"))
all_shot_df <- data.frame()
}
return(all_shot_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_shooting <- match_url %>%
purrr::map_df(get_each_match_shooting_data)
return(all_shooting)
}
#' Get match shooting event data
#'
#' Returns detailed player shooting data for home and away teams for a selected match(es)
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- "https://fbref.com/en/matches/bf52349b/Fulham-Arsenal-September-12-2020-Premier-League"
#' df <- get_match_shooting(match_url = match)
#' })
#' }
#'
get_match_shooting <- function(match_url, time_pause=3) {
# .pkg_message("Scraping detailed shot and shot creation data...")
.Deprecated("fb_match_shooting")
time_wait <- time_pause
get_each_match_shooting_data <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_page <- xml2::read_html(match_url)
tryCatch( {home_team <- match_page %>% rvest::html_nodes("div:nth-child(1) div strong a") %>% rvest::html_text() %>% .[1]}, error = function(e) {home_team <- NA})
tryCatch( {away_team <- match_page %>% rvest::html_nodes("div:nth-child(1) div strong a") %>% rvest::html_text() %>% .[2]}, error = function(e) {away_team <- NA})
# home_away_df <- data.frame(Team=home_team, Home_Away="Home") %>%
# rbind(data.frame(Team=away_team, Home_Away = "Away"))
match_date <- match_page %>% rvest::html_nodes(".venuetime") %>% rvest::html_attr("data-venue-date")
all_shots <- match_page %>% rvest::html_nodes("#switcher_shots") %>% rvest::html_nodes("div")
if(length(all_shots) > 0) {
# all_shots <- match_page %>% rvest::html_nodes("#shots_all")
# shot_df <- all_shots %>% rvest::html_table() %>% data.frame()
# function to clean home and away df
prep_shot_df <- function(shot_df) {
names(shot_df) <- c("Minute", "Shooting_Player", "Squad", "Outcome", "Distance", "Body_Part", "Shot_Notes", "SCA1_Player", "SCA1_Event", "SCA2_Player", "SCA2_Event")
shot_df <- shot_df[-1, ]
shot_df <- shot_df %>%
dplyr::mutate(Match_Half = dplyr::case_when(
as.numeric(gsub("\\+.*", "", .data[["Minute"]])) <= 45 ~ 1,
dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Minute"]])), 46, 90) ~ 2,
dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Minute"]])), 91, 105) ~ 3,
dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Minute"]])), 106, 120) ~ 4,
TRUE ~ 5))
shot_df <- shot_df %>%
dplyr::filter(.data[["Minute"]] != "")
shot_df <- dplyr::bind_cols(Date=match_date, shot_df)
return(shot_df)
}
home_shot_df <- tryCatch(all_shots[2] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame(), error = function(e) data.frame())
if(nrow(home_shot_df) > 0) {
home_shot_df <- prep_shot_df(home_shot_df)
}
away_shot_df <- tryCatch(all_shots[3] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame(), error = function(e) data.frame())
if(nrow(away_shot_df) > 0) {
away_shot_df <- prep_shot_df(away_shot_df)
}
all_shot_df <- home_shot_df %>%
rbind(away_shot_df)
all_shot_df <- all_shot_df %>%
dplyr::mutate(Home_Away = ifelse(.data[["Squad"]] == home_team, "Home", "Away")) %>%
dplyr::select(.data[["Date"]], .data[["Squad"]], .data[["Home_Away"]], .data[["Match_Half"]], dplyr::everything())
} else {
print(glue::glue("Detailed shot data unavailable for {match_url}"))
all_shot_df <- data.frame()
}
return(all_shot_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_shooting <- match_url %>%
purrr::map_df(get_each_match_shooting_data)
return(all_shooting)
}
## ----- end of file: R/get_match_shooting.R -----
#' Get FBref match summary
#'
#' Returns match summary data for selected match URLs, including goals, subs and cards
#' Replaces the deprecated function get_match_summary
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the match events (goals, cards, subs) for selected matches
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- fb_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1]
#' df <- fb_match_summary(match_url = match)
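#'
#' ## follow-up sketch: rows flagged Event_Type == "Penalty Shootout" carry the
#' ## shootout attempts (their Event_Time is set to 121 so they sort last)
#' shootout <- df[df$Event_Type == "Penalty Shootout", ]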
#' })
#' }
#'
fb_match_summary <- function(match_url, time_pause=3) {
time_wait <- time_pause
get_each_match_summary <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
each_game_page <- tryCatch(.load_page(match_url), error = function(e) NA)
if(!is.na(each_game_page)) {
match_report <- .get_match_report_page(match_page = each_game_page)
Home_Team <- tryCatch(each_game_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[1], error = function(e) NA)
Away_Team <- tryCatch(each_game_page %>% rvest::html_nodes("div+ strong a") %>% rvest::html_text() %>% .[2], error = function(e) NA)
events <- each_game_page %>% rvest::html_nodes("#events_wrap")
events_home <- events %>% rvest::html_nodes(".a") %>% rvest::html_text() %>% stringr::str_squish()
events_away <- events %>% rvest::html_nodes(".b") %>% rvest::html_text() %>% stringr::str_squish()
home_events <- tryCatch(data.frame(Team=Home_Team, Home_Away="Home", events_string=events_home), error = function(e) data.frame())
away_events <- tryCatch(data.frame(Team=Away_Team, Home_Away="Away", events_string=events_away), error = function(e) data.frame())
events_df <- dplyr::bind_rows(home_events, away_events)
if(nrow(events_df) > 0) {
events_df <- events_df %>%
dplyr::mutate(Event_Time = gsub("&rsquor.*", "", .data[["events_string"]])) %>%
dplyr::mutate(Is_Pens = stringr::str_detect(.data[["Event_Time"]], "[A-Z]"))
suppressWarnings(
events_df <- events_df %>%
dplyr::mutate(Event_Half = dplyr::case_when(
!.data[["Is_Pens"]] & as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])) <= 45 ~ 1,
!.data[["Is_Pens"]] & dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])), 46, 90) ~ 2,
!.data[["Is_Pens"]] & dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])), 91, 105) ~ 3,
!.data[["Is_Pens"]] & dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])), 106, 120) ~ 4,
TRUE ~ 5
))
)
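## stoppage-time stamps such as "45+2" are summed to a single minute (47),
## and penalty-shootout rows are pushed to minute 121 so they sort last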
events_df <- events_df %>%
dplyr::mutate(Event_Time = gsub("&rsquor.*", "", .data[["events_string"]]) %>% ifelse(stringr::str_detect(., "\\+"), (as.numeric(gsub("\\+.*", "", .)) + as.numeric(gsub(".*\\+", "", .))), .),
Event_Time = ifelse(.data[["Is_Pens"]], 121, .data[["Event_Time"]]),
Event_Time = as.numeric(.data[["Event_Time"]]),
Event_Type = ifelse(stringr::str_detect(tolower(.data[["events_string"]]), "penalty"), "Penalty",
ifelse(stringr::str_detect(tolower(.data[["events_string"]]), "own goal"), "Own Goal",
gsub(".* [^\x20-\x7E] ", "", .data[["events_string"]]) %>% gsub("[[:digit:]]:[[:digit:]]", "", .))) %>% stringr::str_squish(),
Event_Players = gsub(".*\\;", "", .data[["events_string"]]) %>% gsub(" [^\x20-\x7E] .*", "", .),
Score_Progression = stringr::str_extract(.data[["Event_Players"]], "[[:digit:]]:[[:digit:]]"),
Event_Players = gsub("[[:digit:]]:[[:digit:]]", "", .data[["Event_Players"]]) %>% stringr::str_squish(),
Penalty_Number = dplyr::case_when(
.data[["Is_Pens"]] ~ gsub("([0-9]+).*$", "\\1", .data[["Event_Players"]]),
TRUE ~ NA_character_
),
Penalty_Number = as.numeric(.data[["Penalty_Number"]]),
Event_Players = gsub("[[:digit:]]+\\s", "", .data[["Event_Players"]]),
Event_Type = ifelse(.data[["Is_Pens"]], "Penalty Shootout", .data[["Event_Type"]])) %>%
dplyr::select(-.data[["events_string"]]) %>%
dplyr::arrange(.data[["Event_Half"]], .data[["Event_Time"]])
events_df <- cbind(match_report, events_df)
} else {
events_df <- data.frame()
}
} else {
print(glue::glue("Match Summary not available for {match_url}"))
events_df <- data.frame()
}
return(events_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_events_df <- match_url %>%
purrr::map_df(get_each_match_summary)
return(all_events_df)
}
#' Get match summary
#'
#' Returns match summary data for selected match URLs, including goals, subs and cards
#'
#' @param match_url the fbref.com URL for the required match
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the match events (goals, cards, subs) for selected matches
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' match <- get_match_urls(country = "AUS", gender = "F", season_end_year = 2021, tier = "1st")[1]
#' df <- get_match_summary(match_url = match)
#' })
#' }
#'
get_match_summary <- function(match_url, time_pause=3) {
.Deprecated("fb_match_summary")
time_wait <- time_pause
get_each_match_summary <- function(match_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
each_game_page <- tryCatch(xml2::read_html(match_url), error = function(e) NA)
if(!is.na(each_game_page)) {
match_report <- .get_match_report_page(match_page = each_game_page)
Home_Team <- tryCatch(each_game_page %>% rvest::html_nodes("div:nth-child(1) div strong a") %>% rvest::html_text() %>% .[1], error = function(e) NA)
Away_Team <- tryCatch(each_game_page %>% rvest::html_nodes("div:nth-child(1) div strong a") %>% rvest::html_text() %>% .[2], error = function(e) NA)
events <- each_game_page %>% rvest::html_nodes("#events_wrap")
events_home <- events %>% rvest::html_nodes(".a") %>% rvest::html_text() %>% stringr::str_squish()
events_away <- events %>% rvest::html_nodes(".b") %>% rvest::html_text() %>% stringr::str_squish()
home_events <- tryCatch(data.frame(Team=Home_Team, Home_Away="Home", events_string=events_home), error = function(e) data.frame())
away_events <- tryCatch(data.frame(Team=Away_Team, Home_Away="Away", events_string=events_away), error = function(e) data.frame())
events_df <- dplyr::bind_rows(home_events, away_events)
if(nrow(events_df) > 0) {
events_df <- events_df %>%
dplyr::mutate(Event_Time = gsub("&rsquor.*", "", .data[["events_string"]])) %>%
dplyr::mutate(Is_Pens = stringr::str_detect(.data[["Event_Time"]], "[A-Z]"))
suppressWarnings(
events_df <- events_df %>%
dplyr::mutate(Event_Half = dplyr::case_when(
!.data[["Is_Pens"]] & as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])) <= 45 ~ 1,
!.data[["Is_Pens"]] & dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])), 46, 90) ~ 2,
!.data[["Is_Pens"]] & dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])), 91, 105) ~ 3,
!.data[["Is_Pens"]] & dplyr::between(as.numeric(gsub("\\+.*", "", .data[["Event_Time"]])), 106, 120) ~ 4,
TRUE ~ 5
))
)
events_df <- events_df %>%
dplyr::mutate(Event_Time = gsub("&rsquor.*", "", .data[["events_string"]]) %>% ifelse(stringr::str_detect(., "\\+"), (as.numeric(gsub("\\+.*", "", .)) + as.numeric(gsub(".*\\+", "", .))), .),
Event_Time = ifelse(.data[["Is_Pens"]], 121, .data[["Event_Time"]]),
Event_Time = as.numeric(.data[["Event_Time"]]),
Event_Type = ifelse(stringr::str_detect(tolower(.data[["events_string"]]), "penalty"), "Penalty",
ifelse(stringr::str_detect(tolower(.data[["events_string"]]), "own goal"), "Own Goal",
gsub(".* [^\x20-\x7E] ", "", .data[["events_string"]]) %>% gsub("[[:digit:]]:[[:digit:]]", "", .))) %>% stringr::str_squish(),
Event_Players = gsub(".*\\;", "", .data[["events_string"]]) %>% gsub(" [^\x20-\x7E] .*", "", .),
Score_Progression = stringr::str_extract(.data[["Event_Players"]], "[[:digit:]]:[[:digit:]]"),
Event_Players = gsub("[[:digit:]]:[[:digit:]]", "", .data[["Event_Players"]]) %>% stringr::str_squish(),
Penalty_Number = dplyr::case_when(
.data[["Is_Pens"]] ~ gsub("([0-9]+).*$", "\\1", .data[["Event_Players"]]),
TRUE ~ NA_character_
),
Penalty_Number = as.numeric(.data[["Penalty_Number"]]),
Event_Players = gsub("[[:digit:]]+\\s", "", .data[["Event_Players"]]),
Event_Type = ifelse(.data[["Is_Pens"]], "Penalty Shootout", .data[["Event_Type"]])) %>%
dplyr::select(-.data[["events_string"]]) %>%
dplyr::arrange(.data[["Event_Half"]], .data[["Event_Time"]])
events_df <- cbind(match_report, events_df)
} else {
events_df <- data.frame()
}
} else {
print(glue::glue("Match Summary not available for {match_url}"))
events_df <- data.frame()
}
return(events_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(match_url))
all_events_df <- match_url %>%
purrr::map_df(get_each_match_summary)
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
seasons <- seasons %>%
dplyr::filter(.data[["seasons_urls"]] %in% all_events_df$League_URL) %>%
dplyr::select(League=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season=.data[["seasons"]], League_URL=.data[["seasons_urls"]])
all_events_df <- seasons %>%
dplyr::left_join(all_events_df, by = "League_URL") %>%
dplyr::select(-.data[["League_URL"]]) %>% dplyr::distinct(.keep_all = T)
return(all_events_df)
}
# ---- end of R/get_match_summary.R ----
#' Get FBref match URLs
#'
#' Returns the URL for each match played for a given league season
#' Replaces the deprecated get_match_urls
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M" or "F", or both
#' @param season_end_year the year the season(s) concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#' @param non_dom_league_url the URL for Cups and Competitions found at https://fbref.com/en/comps/
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a character vector of all fbref match URLs for selected competition, season and gender
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_match_urls(country = "ENG", gender = "M", season_end_year = c(2019:2021), tier = "1st")
#' non_dom <- "https://fbref.com/en/comps/218/history/Friendlies-M-Seasons"
#' fb_match_urls(country = "", gender = "M", season_end_year = 2021, non_dom_league_url = non_dom)
#' })
#' }
fb_match_urls <- function(country, gender, season_end_year, tier = "1st", non_dom_league_url = NA, time_pause=3) {
main_url <- "https://fbref.com"
# .pkg_message("Scraping match URLs")
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
cups_url <- non_dom_league_url
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
if(is.na(cups_url)) {
fixtures_url <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Leagues")) %>%
dplyr::filter(country %in% country_abbr,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
tier %in% comp_tier,
!is.na(fixtures_url)) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(fixtures_url) %>% unique()
} else {
fixtures_url <- seasons %>%
dplyr::filter(.data[["comp_url"]] %in% cups_url,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
!is.na(fixtures_url)) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(fixtures_url) %>% unique()
}
time_wait <- time_pause
get_each_seasons_urls <- function(fixture_url, time_pause=time_wait) {
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
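# The CSS selector below targets the match report links in the season's
# fixtures table; relative hrefs are made absolute by prefixing main_url.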
match_report_urls <- .load_page(fixture_url) %>%
rvest::html_nodes("td.left~ .left+ .left a") %>%
rvest::html_attr("href") %>%
paste0(main_url, .) %>% unique()
return(match_report_urls)
}
all_seasons_match_urls <- fixtures_url %>%
purrr::map(get_each_seasons_urls) %>%
unlist()
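# Head-to-head "-History" pages occasionally appear among the scraped links;
# they are not individual match reports, so drop them if present.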
history_index <- grep("-History", all_seasons_match_urls)
if(length(history_index) != 0) {
all_seasons_match_urls <- all_seasons_match_urls[-history_index]
}
# .pkg_message("Match URLs scrape completed")
return(all_seasons_match_urls)
}
#' Get match URLs
#'
#' Returns the URL for each match played for a given league season
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M" or "F", or both
#' @param season_end_year the year the season(s) concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#' @param non_dom_league_url the URL for Cups and Competitions found at https://fbref.com/en/comps/
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a character vector of all fbref match URLs for selected competition, season and gender
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' get_match_urls(country = "ENG", gender = "M", season_end_year = c(2019:2021), tier = "1st")
#' non_dom <- "https://fbref.com/en/comps/218/history/Friendlies-M-Seasons"
#' get_match_urls(country = "", gender = "M", season_end_year = 2021, non_dom_league_url = non_dom)
#' })
#' }
get_match_urls <- function(country, gender, season_end_year, tier = "1st", non_dom_league_url = NA, time_pause=3) {
.Deprecated("fb_match_urls")
main_url <- "https://fbref.com"
# .pkg_message("Scraping match URLs")
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
cups_url <- non_dom_league_url
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
if(is.na(cups_url)) {
fixtures_url <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Leagues")) %>%
dplyr::filter(country %in% country_abbr,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
tier %in% comp_tier,
!is.na(fixtures_url)) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(fixtures_url) %>% unique()
} else {
fixtures_url <- seasons %>%
dplyr::filter(.data[["comp_url"]] %in% cups_url,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
!is.na(fixtures_url)) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(fixtures_url) %>% unique()
}
time_wait <- time_pause
get_each_seasons_urls <- function(fixture_url, time_pause=time_wait) {
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
match_report_urls <- xml2::read_html(fixture_url) %>%
rvest::html_nodes("td.left~ .left+ .left a") %>%
rvest::html_attr("href") %>%
paste0(main_url, .) %>% unique()
return(match_report_urls)
}
all_seasons_match_urls <- fixtures_url %>%
purrr::map(get_each_seasons_urls) %>%
unlist()
history_index <- grep("-History", all_seasons_match_urls)
if(length(history_index) != 0) {
all_seasons_match_urls <- all_seasons_match_urls[-history_index]
}
# .pkg_message("Match URLs scrape completed")
return(all_seasons_match_urls)
}
# ---- end of R/get_match_urls.R ----
#' Get fbref Player Season Statistics
#'
#' Returns the historical season stats for a selected player(s) and stat type
#'
#' @param player_url the URL(s) of the player(s) (can come from fb_player_urls())
#' @param stat_type the type of statistic required
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"standard"}, \emph{"shooting"}, \emph{"passing"},
#' \emph{"passing_types"}, \emph{"gca"}, \emph{"defense"}, \emph{"possession"},
#' \emph{"playing_time"}, \emph{"misc"}, \emph{"keeper"}, \emph{"keeper_adv"}
#'
#' @return returns a dataframe of a player's historical season stats
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_player_season_stats("https://fbref.com/en/players/3bb7b8b4/Ederson",
#' stat_type = 'standard')
#'
#' multiple_playing_time <- fb_player_season_stats(
#' player_url = c("https://fbref.com/en/players/d70ce98e/Lionel-Messi",
#' "https://fbref.com/en/players/dea698d9/Cristiano-Ronaldo"),
#' stat_type = "playing_time")
#' })
#' }
fb_player_season_stats <- function(player_url, stat_type, time_pause=3) {
main_url <- "https://fbref.com"
time_wait <- time_pause
get_each_player_season <- function(player_url, stat_type, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
stat_types <- c("standard", "shooting", "passing", "passing_types", "gca", "defense", "possession", "playing_time", "misc", "keeper", "keeper_adv")
if(!stat_type %in% stat_types) stop("check stat type")
player_page <- .load_page(player_url)
player_name <- player_page %>% rvest::html_node("h1") %>% rvest::html_text() %>% stringr::str_squish()
comps_filters <- player_page %>%
rvest::html_nodes(".filter") %>%
rvest::html_nodes("a") %>%
rvest::html_attr("href") %>% .[!is.na(.)]
# if(length(comps_filters) >= 2) {
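# If the player's page exposes an "All-Competitions" filter, scrape the expanded
# all-competitions tables from that page; otherwise fall back to the tables on
# the main (domestic-league-only) page further below.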
if(any(grepl("All-Competitions", comps_filters))) {
all_comps_url <- player_page %>%
rvest::html_nodes(".filter") %>%
rvest::html_nodes("a") %>%
rvest::html_attr("href") %>%
.[grep("All-Compe", .)] %>% paste0(main_url, .)
Sys.sleep(time_pause)
all_comps_page <- .load_page(all_comps_url)
expanded_table_elements <- all_comps_page %>% rvest::html_nodes(".table_container") %>% rvest::html_nodes("table")
expanded_table_idx <- c()
for(i in 1:length(expanded_table_elements)) {
idx <- xml2::xml_attrs(expanded_table_elements[[i]])[["id"]]
expanded_table_idx <- c(expanded_table_idx, idx)
}
idx <- grep("_expanded", expanded_table_idx)
if(length(idx) == 0) {
idx <- grep("_dom_", expanded_table_idx)
expanded_table_idx <- expanded_table_idx[idx]
expanded_table_elements <- expanded_table_elements[idx]
stat_df <- tryCatch(
.clean_player_season_stats(expanded_table_elements[which(stringr::str_detect(expanded_table_idx, paste0("stats_", stat_type, "_dom_lg")))]),
error = function(e) data.frame()
)
} else {
expanded_table_idx <- expanded_table_idx[idx]
expanded_table_elements <- expanded_table_elements[idx]
stat_df <- tryCatch(
.clean_player_season_stats(expanded_table_elements[which(stringr::str_detect(expanded_table_idx, paste0("stats_", stat_type, "_expanded")))]),
error = function(e) data.frame()
)
}
# for player pages that offer no All-Competitions/domestic/cups filter, where only domestic league data exists:
} else {
expanded_table_elements <- player_page %>% rvest::html_nodes(".table_container") %>% rvest::html_nodes("table")
if(length(expanded_table_elements) == 0) {
stat_df <- data.frame(player_name = player_name, player_url = player_url, Country = NA_character_)
} else {
expanded_table_idx <- c()
for(i in 1:length(expanded_table_elements)) {
idx <- xml2::xml_attrs(expanded_table_elements[[i]])[["id"]]
expanded_table_idx <- c(expanded_table_idx, idx)
}
idx <- grep("_dom_", expanded_table_idx)
expanded_table_idx <- expanded_table_idx[idx]
expanded_table_elements <- expanded_table_elements[idx]
stat_df <- tryCatch(
.clean_player_season_stats(expanded_table_elements[which(stringr::str_detect(expanded_table_idx, paste0("stats_", stat_type)))]),
error = function(e) data.frame()
)
}
}
if(nrow(stat_df) == 0){
print(glue::glue("{stat_type} data not available for: {player_url}"))
} else {
stat_df <- stat_df %>%
dplyr::mutate(player_name = player_name,
player_url = player_url,
Country = gsub("^.*? ([A-Z])", "\\1", .data[["Country"]])) %>%
dplyr::select(player_name, player_url, dplyr::everything())
}
return(stat_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(player_url))
# run for all players selected
all_players <- purrr::map2_df(player_url, stat_type, get_each_player_season)
return(all_players)
}
# ---- end of R/get_season_player_stats.R ----
#' Get FBref season team stats
#'
#' Returns different team season statistics results for a given league season and stat type
#' Replaces the deprecated function get_season_team_stats
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M", "F" or both
#' @param season_end_year the year the season(s) concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#' @param stat_type the type of team statistics the user requires
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"league_table"}, \emph{"league_table_home_away}", \emph{"standard"},
#' \emph{"keeper"}, \emph{"keeper_adv"}, \emph{"shooting"}, \emph{"passing"},
#' \emph{"passing_types"}, \emph{"goal_shot_creation"}, \emph{"defense" },
#' \emph{"possession"}, \emph{"playing_time"}, \emph{"misc"}
#'
#' @return returns a dataframe of a selected team statistic type for a selected league season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_season_team_stats("ITA", "M", 2021, "1st", "defense")
#' })
#' }
fb_season_team_stats <- function(country, gender, season_end_year, tier, stat_type, time_pause=3) {
# .pkg_message("Scraping season {stat_type} stats")
main_url <- "https://fbref.com"
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
seasons_urls <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Leagues")) %>%
dplyr::filter(country %in% country_abbr,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
tier %in% comp_tier) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(seasons_urls) %>% unique()
time_wait <- time_pause
get_each_stats_type <- function(season_url, time_pause=time_wait) {
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
season_stats_page <- .load_page(season_url)
league_standings <- season_stats_page %>% rvest::html_nodes("table")
all_stat_tabs_holder <- season_stats_page %>% rvest::html_nodes(".stats_table")
all_tables <- c()
for(i in 1:length(all_stat_tabs_holder)) {
all_tables <- c(all_tables, xml2::xml_attrs(all_stat_tabs_holder[[i]])[["id"]])
}
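# all_tables now holds the id of every .stats_table element on the page; these
# ids (e.g. "stats_squads_shooting_for") identify the squad "for" and "against"
# tables matched against the requested stat_type below.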
tryCatch(
if(stat_type == "league_table") {
stats_url <- NA
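# Leagues split into conferences publish separate league tables; read both and
# stack them, labelling each with the conference name taken from the section headings.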
if(any(grepl("Conference", all_tables, ignore.case = TRUE))) {
stat_df1 <- league_standings[1] %>% rvest::html_table() %>% data.frame()
conf1 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[1] %>% rvest::html_text()
stat_df1 <- stat_df1 %>%
dplyr::mutate(Conference = conf1)
stat_df2 <- league_standings[3] %>% rvest::html_table() %>% data.frame()
conf2 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[3] %>% rvest::html_text()
stat_df2 <- stat_df2 %>%
dplyr::mutate(Conference = conf2)
stat_df <- dplyr::bind_rows(stat_df1, stat_df2)
} else {
stat_df <- league_standings[1] %>% rvest::html_table() %>% data.frame()
}
if(any(grepl("Attendance", names(stat_df)))) {
stat_df$Attendance <- gsub(",", "", stat_df$Attendance) %>% as.numeric()
}
} else if(stat_type == "league_table_home_away") {
stats_url <- NA
if(any(grepl("Conference", all_tables, ignore.case = TRUE))) {
stat_df1 <- league_standings[2] %>% rvest::html_table() %>% data.frame()
conf1 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[2] %>% rvest::html_text()
stat_df1 <- stat_df1 %>%
dplyr::mutate(Conference = conf1)
stat_df2 <- league_standings[4] %>% rvest::html_table() %>% data.frame()
conf2 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[4] %>% rvest::html_text()
stat_df2 <- stat_df2 %>%
dplyr::mutate(Conference = conf2)
stat_df <- dplyr::bind_rows(stat_df1, stat_df2)
stat_df$Conference[1] <- "Conference"
} else {
stat_df <- league_standings[2] %>% rvest::html_table() %>% data.frame()
}
var_names <- stat_df[1,] %>% as.character()
new_names <- paste(var_names, names(stat_df), sep = "_")
new_names <- new_names %>%
gsub("\\..*", "", .) %>%
gsub("_Var", "", .) %>%
gsub("/", "_per_", .)
names(stat_df) <- new_names
stat_df <- stat_df[-1,]
stat_df <- stat_df %>%
dplyr::filter(.data[["Rk"]] != "Rk") %>%
dplyr::rename(Conference = .data[["Conference_Conference"]])
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Squad"]], -.data[["Conference"]]) %>% names()
stat_df <- stat_df %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
} else if(stat_type == "standard") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_standard_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_standard_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "keeper") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "keeper_adv") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_adv_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_adv_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "shooting") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_shooting_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_shooting_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "passing") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "passing_types") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_types_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_types_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "goal_shot_creation") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_gca_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_gca_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "defense") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_defense_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_defense_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "possession") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_possession_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_possession_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "playing_time") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_playing_time_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_playing_time_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "misc") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_misc_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_misc_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
}, error = function(e) {stat_df <- data.frame()} )
if(nrow(stat_df) == 0) {
stat_df <- data.frame()
print(glue::glue("NOTE: Stat Type '{stat_type}' is not found for this league season. Check {season_url} to see if it exists."))
} else {
# stopifnot("Data not available, see message above" = length(stats_url) > 0)
names(stat_df) <- gsub("\\+", "_plus_", names(stat_df))
stat_df <- stat_df %>%
dplyr::mutate(Team_or_Opponent = ifelse(!stringr::str_detect(.data[["Squad"]], "vs "), "team", "opponent")) %>%
dplyr::filter(.data[["Team_or_Opponent"]] == "team") %>%
dplyr::bind_rows(
stat_df %>%
dplyr::mutate(Team_or_Opponent = ifelse(!stringr::str_detect(.data[["Squad"]], "vs "), "team", "opponent")) %>%
dplyr::filter(.data[["Team_or_Opponent"]] == "opponent")
) %>%
dplyr::mutate(season_url = season_url) %>%
dplyr::select(.data[["season_url"]], .data[["Squad"]], .data[["Team_or_Opponent"]], dplyr::everything())
stat_df <- seasons %>%
dplyr::select(Competition_Name=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season_End_Year=.data[["season_end_year"]], .data[["seasons_urls"]]) %>%
dplyr::left_join(stat_df, by = c("seasons_urls" = "season_url")) %>%
dplyr::select(-.data[["seasons_urls"]]) %>%
dplyr::filter(!is.na(.data[["Squad"]])) %>%
dplyr::arrange(.data[["Country"]], .data[["Competition_Name"]], .data[["Gender"]], .data[["Season_End_Year"]], dplyr::desc(.data[["Team_or_Opponent"]]), .data[["Squad"]]) %>% dplyr::distinct(.keep_all = T)
}
return(stat_df)
}
all_stats_df <- seasons_urls %>%
purrr::map_df(get_each_stats_type)
return(all_stats_df)
}
#' Get season team stats
#'
#' Returns different team season statistics results for a given league season and stat type
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M", "F" or both
#' @param season_end_year the year the season(s) concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#' @param stat_type the type of team statistics the user requires
#' @param time_pause the wait time (in seconds) between page loads
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"league_table"}, \emph{"league_table_home_away}", \emph{"standard"},
#' \emph{"keeper"}, \emph{"keeper_adv"}, \emph{"shooting"}, \emph{"passing"},
#' \emph{"passing_types"}, \emph{"goal_shot_creation"}, \emph{"defense" },
#' \emph{"possession"}, \emph{"playing_time"}, \emph{"misc"}
#'
#' @return returns a dataframe of a selected team statistic type for a selected league season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' get_season_team_stats("ITA", "M", 2021, "1st", "defense")
#' })
#' }
get_season_team_stats <- function(country, gender, season_end_year, tier, stat_type, time_pause=3) {
.Deprecated("fb_season_team_stats")
# .pkg_message("Scraping season {stat_type} stats")
main_url <- "https://fbref.com"
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
seasons_urls <- seasons %>%
dplyr::filter(stringr::str_detect(.data[["competition_type"]], "Leagues")) %>%
dplyr::filter(country %in% country_abbr,
gender %in% gender_M_F,
season_end_year %in% season_end_year_num,
tier %in% comp_tier) %>%
dplyr::arrange(season_end_year) %>%
dplyr::pull(seasons_urls) %>% unique()
time_wait <- time_pause
get_each_stats_type <- function(season_url, time_pause=time_wait) {
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
season_stats_page <- xml2::read_html(season_url)
league_standings <- season_stats_page %>% rvest::html_nodes("table")
all_stat_tabs_holder <- season_stats_page %>% rvest::html_nodes(".stats_table")
all_tables <- c()
for(i in 1:length(all_stat_tabs_holder)) {
all_tables <- c(all_tables, xml2::xml_attrs(all_stat_tabs_holder[[i]])[["id"]])
}
tryCatch(
if(stat_type == "league_table") {
stats_url <- NA
if(any(grepl("Conference", all_tables, ignore.case = TRUE))) {
stat_df1 <- league_standings[1] %>% rvest::html_table() %>% data.frame()
conf1 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[1] %>% rvest::html_text()
stat_df1 <- stat_df1 %>%
dplyr::mutate(Conference = conf1)
stat_df2 <- league_standings[3] %>% rvest::html_table() %>% data.frame()
conf2 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[3] %>% rvest::html_text()
stat_df2 <- stat_df2 %>%
dplyr::mutate(Conference = conf2)
stat_df <- dplyr::bind_rows(stat_df1, stat_df2)
} else {
stat_df <- league_standings[1] %>% rvest::html_table() %>% data.frame()
}
if(any(grepl("Attendance", names(stat_df)))) {
stat_df$Attendance <- gsub(",", "", stat_df$Attendance) %>% as.numeric()
}
} else if(stat_type == "league_table_home_away") {
stats_url <- NA
if(any(grepl("Conference", all_tables, ignore.case = TRUE))) {
stat_df1 <- league_standings[2] %>% rvest::html_table() %>% data.frame()
conf1 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[2] %>% rvest::html_text()
stat_df1 <- stat_df1 %>%
dplyr::mutate(Conference = conf1)
stat_df2 <- league_standings[4] %>% rvest::html_table() %>% data.frame()
conf2 <- season_stats_page %>% rvest::html_nodes(".section_content") %>% rvest::html_nodes("h2") %>% .[4] %>% rvest::html_text()
stat_df2 <- stat_df2 %>%
dplyr::mutate(Conference = conf2)
stat_df <- dplyr::bind_rows(stat_df1, stat_df2)
stat_df$Conference[1] <- "Conference"
} else {
stat_df <- league_standings[2] %>% rvest::html_table() %>% data.frame()
}
var_names <- stat_df[1,] %>% as.character()
new_names <- paste(var_names, names(stat_df), sep = "_")
new_names <- new_names %>%
gsub("\\..*", "", .) %>%
gsub("_Var", "", .) %>%
gsub("/", "_per_", .)
names(stat_df) <- new_names
stat_df <- stat_df[-1,]
stat_df <- stat_df %>%
dplyr::filter(.data[["Rk"]] != "Rk") %>%
dplyr::rename(Conference = .data[["Conference_Conference"]])
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Squad"]], -.data[["Conference"]]) %>% names()
stat_df <- stat_df %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
} else if(stat_type == "standard") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_standard_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_standard_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "keeper") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "keeper_adv") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_adv_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_keeper_adv_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "shooting") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_shooting_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_shooting_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "passing") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "passing_types") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_types_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_passing_types_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "goal_shot_creation") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_gca_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_gca_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "defense") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_defense_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_defense_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "possession") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_possession_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_possession_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "playing_time") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_playing_time_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_playing_time_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
} else if(stat_type == "misc") {
stat_df_for <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_misc_for"))]), error = function(e) data.frame())
stat_df_against <- tryCatch(.clean_advanced_stat_table(input_table_element = all_stat_tabs_holder[which(stringr::str_detect(all_tables, "stats_squads_misc_against"))]), error = function(e) data.frame())
stat_df <- tryCatch(rbind(stat_df_for, stat_df_against), error = function(e) data.frame())
}, error = function(e) {stat_df <- data.frame()} )
if(nrow(stat_df) == 0) {
stat_df <- data.frame()
print(glue::glue("NOTE: Stat Type '{stat_type}' is not found for this league season. Check {season_url} to see if it exists."))
} else {
# stopifnot("Data not available, see message above" = length(stats_url) > 0)
names(stat_df) <- gsub("\\+", "_plus_", names(stat_df))
stat_df <- stat_df %>%
dplyr::mutate(Team_or_Opponent = ifelse(!stringr::str_detect(.data[["Squad"]], "vs "), "team", "opponent")) %>%
dplyr::filter(.data[["Team_or_Opponent"]] == "team") %>%
dplyr::bind_rows(
stat_df %>%
dplyr::mutate(Team_or_Opponent = ifelse(!stringr::str_detect(.data[["Squad"]], "vs "), "team", "opponent")) %>%
dplyr::filter(.data[["Team_or_Opponent"]] == "opponent")
) %>%
dplyr::mutate(season_url = season_url) %>%
dplyr::select(.data[["season_url"]], .data[["Squad"]], .data[["Team_or_Opponent"]], dplyr::everything())
stat_df <- seasons %>%
dplyr::select(Competition_Name=.data[["competition_name"]], Gender=.data[["gender"]], Country=.data[["country"]], Season_End_Year=.data[["season_end_year"]], .data[["seasons_urls"]]) %>%
dplyr::left_join(stat_df, by = c("seasons_urls" = "season_url")) %>%
dplyr::select(-.data[["seasons_urls"]]) %>%
dplyr::filter(!is.na(.data[["Squad"]])) %>%
dplyr::arrange(.data[["Country"]], .data[["Competition_Name"]], .data[["Gender"]], .data[["Season_End_Year"]], dplyr::desc(.data[["Team_or_Opponent"]]), .data[["Squad"]]) %>% dplyr::distinct(.keep_all = T)
}
return(stat_df)
}
all_stats_df <- seasons_urls %>%
purrr::map_df(get_each_stats_type)
return(all_stats_df)
}
# ---- end of R/get_season_team_stats.R ----
utils::globalVariables(c(".", "an.error.occured", "pb", "res"))
# ---- end of R/globals.R ----
#' Clean advanced statistic tables
#'
#' Returns cleaned dataframe for each of the team statistic tables used by get_season_team_stats()
#'
#' @param input_table_element element of the html table on the league season page
#'
#' @return a data frame for the selected league seasons advanced statistic
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @noRd
#'
.clean_advanced_stat_table <- function(input_table_element) {
stat_df <- input_table_element %>%
rvest::html_table() %>%
data.frame()
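# FBref stat tables use a two-row header, and html_table() leaves the second
# header row as the first data row. Paste those sub-headers onto the top-level
# names, clean the combined names, then drop that header row.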
var_names <- stat_df[1,] %>% as.character()
new_names <- paste(var_names, names(stat_df), sep = "_")
new_names <- new_names %>%
gsub("\\..[0-9]", "", .) %>%
gsub("\\.[0-9]", "", .) %>%
gsub("\\.", "_", .) %>%
gsub("_Var", "", .) %>%
gsub("# Pl", "Num_Players", .) %>%
gsub("%", "_percent", .) %>%
gsub("_Performance", "", .) %>%
# gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("\\+/-", "Plus_Minus", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .) %>%
gsub("__", "_", .)
names(stat_df) <- new_names
stat_df <- stat_df[-1,]
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Squad"]]) %>% names()
stat_df <- stat_df %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
return(stat_df)
}
#' Clean player season statistic tables
#'
#' Returns cleaned dataframe for each of the player statistic tables used by fb_player_season_stats()
#'
#' @param input_table_element element of the html table on the player page
#'
#' @return a data frame for the selected player's advanced statistic
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @noRd
#'
.clean_player_season_stats <- function(input_table_element) {
stat_df <- input_table_element %>%
rvest::html_table() %>%
data.frame()
var_names <- stat_df[1,] %>% as.character()
new_names <- paste(var_names, names(stat_df), sep = "_")
new_names <- new_names %>%
gsub("\\..[0-9]", "", .) %>%
gsub("\\.[0-9]", "", .) %>%
gsub("\\.", "_", .) %>%
gsub("_Var", "", .) %>%
gsub("_Playing", "", .) %>%
gsub("%", "_percent", .) %>%
gsub("_Performance", "", .) %>%
# gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("\\+/-", "Plus_Minus", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .) %>%
gsub("__", "_", .)
names(stat_df) <- new_names
stat_df <- stat_df[-1,]
stat_df <- stat_df %>% dplyr::select(-.data[["Matches"]])
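# Drop the footer/summary rows: everything from the first row whose Season
# value contains the word "Season" through to the end of the table.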
remove_rows <- min(grep("Season", stat_df$Season)):nrow(stat_df)
stat_df <- stat_df[-remove_rows, ]
if(any(grepl("LgRank", names(stat_df)))){
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Season"]], -.data[["Squad"]], -.data[["Country"]], -.data[["Comp"]], -.data[["LgRank"]]) %>% names()
} else {
cols_to_transform <- stat_df %>%
dplyr::select(-.data[["Season"]], -.data[["Squad"]], -.data[["Country"]], -.data[["Comp"]]) %>% names()
}
stat_df <- stat_df %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
return(stat_df)
}
#' Clean each match advanced statistic tables
#'
#' Returns cleaned data frame for each of the team statistic tables for each selected match
#'
#' @param df_in a raw match stats data frame
#'
#' @return a cleaned data frame for the selected match advanced statistic
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @noRd
#'
.clean_match_advanced_stats_data <- function(df_in) {
var_names <- df_in[1,] %>% as.character()
new_names <- paste(var_names, names(df_in), sep = "_")
new_names <- new_names %>%
gsub("\\..[0-9]", "", .) %>%
gsub("\\.[0-9]", "", .) %>%
gsub("\\.", "_", .) %>%
gsub("_Var", "", .) %>%
gsub("#", "Player_Num", .) %>%
gsub("%", "_percent", .) %>%
gsub("_Performance", "", .) %>%
gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("\\+/-", "Plus_Minus", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .) %>%
gsub("__", "_", .)
names(df_in) <- new_names
df_in <- df_in[-1,]
if(any(grepl("Nation", colnames(df_in)))) {
df_in$Nation <- gsub(".*? ", "", df_in$Nation)
}
# cols_to_transform <- df_in %>%
# dplyr::select(-.data[["Player"]], -.data[["Nation"]], -.data[["Pos"]], -.data[["Age"]]) %>% names()
non_num_vars <- c("Player", "Nation", "Pos", "Age")
cols_to_transform <- names(df_in)[!names(df_in) %in% non_num_vars]
df_in <- df_in %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub(",", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = function(x) {gsub("+", "", x)}) %>%
dplyr::mutate_at(.vars = cols_to_transform, .funs = as.numeric)
return(df_in)
}
#' Clean stat table column names
#'
#' Returns cleaned column names for stats tables
#'
#' @param df_in a raw match stats data frame
#'
#' @return a data frame with cleaned names
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @noRd
#'
.clean_table_names <- function(df_in) {
var_names <- df_in[1,] %>% as.character()
new_names <- paste(var_names, names(df_in), sep = "_")
new_names <- new_names %>%
gsub("\\..[0-9]", "", .) %>%
gsub("\\.[0-9]", "", .) %>%
gsub("\\.", "_", .) %>%
gsub("_Var", "", .) %>%
gsub("#", "Num_", .) %>%
gsub("%", "_percent", .) %>%
# gsub("_Performance", "", .) %>%
gsub("_Penalty", "", .) %>%
gsub("1/3", "Final_Third", .) %>%
gsub("\\+/-", "Plus_Minus", .) %>%
gsub("/", "_per_", .) %>%
gsub("-", "_minus_", .) %>%
gsub("90s", "Mins_Per_90", .) %>%
gsub("__", "_", .)
names(df_in) <- new_names
df_in <- df_in[-1,]
colnames(df_in) <- gsub("_$", "", colnames(df_in))
return(df_in)
}
#' Convert formatted valuations to numeric
#'
#' Returns a numeric data type for player valuations
#'
#' @param euro_value raw valuation from transfermarkt.com
#'
#' @return a cleaned numeric data value for market and/or transfer valuation
#'
#' @importFrom magrittr %>%
#' @noRd
#'
.convert_value_to_numeric <- function(euro_value) {
clean_val <- gsub("[^\x20-\x7E]", "", euro_value) %>% tolower()
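# Transfermarkt prints valuations as strings with an "m" (millions) or "Th."
# (thousands) suffix, or as "free"; convert these to plain numeric values.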
if(grepl("free", clean_val)) {
clean_val <- 0
} else if(grepl("m", clean_val)) {
clean_val <- suppressWarnings(gsub("m", "", clean_val) %>% as.numeric() * 1000000)
} else if(grepl("th.", clean_val)) {
clean_val <- suppressWarnings(gsub("th.", "", clean_val) %>% as.numeric() * 1000)
} else {
clean_val <- suppressWarnings(as.numeric(clean_val) * 1)
}
return(clean_val)
}
#' Clean Understat JSON data
#'
#' Returns a cleaned Understat data frame
#'
#' @param page_url understat.com page URL
#' @param script_name html JSON script name
#'
#' @return a cleaned Understat data frame
#'
#' @importFrom magrittr %>%
#' @noRd
#'
.get_clean_understat_json <- function(page_url, script_name) {
main_url <- "https://understat.com/"
page <- tryCatch( xml2::read_html(page_url), error = function(e) NA)
if(!is.na(page)) {
# locate script tags
clean_json <- page %>% rvest::html_nodes("script") %>% as.character()
clean_json <- clean_json[grep(script_name, clean_json)] %>% stringi::stri_unescape_unicode()
clean_json <- qdapRegex::rm_square(clean_json, extract = TRUE, include.markers = TRUE) %>% unlist() %>% stringr::str_subset("\\[\\]", negate = TRUE)
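# Understat embeds its data as escaped JSON strings inside <script> tags; the
# lines above pull the matching script, unescape it and extract the bracketed
# JSON array(s), which .fromJSON() parses and row-binds below.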
out_df <- lapply(clean_json, .fromJSON) %>% do.call("rbind", .)
# some outputs don't come with the season present, so add it in if not
if(!any(grepl("season", colnames(out_df)))) {
season_element <- page %>% rvest::html_nodes(xpath = '//*[@name="season"]') %>%
rvest::html_nodes("option")
season_element <- season_element[grep("selected", season_element)]
# season <- season_element %>% rvest::html_attr("value") %>% .[1] %>% as.numeric()
season <- season_element %>% rvest::html_text()
out_df <- cbind(season, out_df)
}
} else {
out_df <- data.frame()
}
out_df <- do.call(data.frame, out_df)
return(out_df)
}
#' Understat shots location helper function
#'
#' Returns a cleaned Understat shooting location data frame
#'
#' @param type_url can be season, team, match, player URL
#'
#' @return a cleaned Understat shooting location data frame
#'
#' @importFrom magrittr %>%
#' @importFrom stats runif
#' @noRd
#'
.understat_shooting <- function(type_url) {
main_url <- "https://understat.com/"
# need to get the game IDs first, filtering out matches not yet played as these URLs will error
games <- .get_clean_understat_json(page_url = type_url, script_name = "datesData") %>%
dplyr::filter(.data[["isResult"]])
# then create a chr vector of match URLs
match_urls <- paste0(main_url, "match/", games$id)
# start scrape:
shots_data <- data.frame()
for(each_match in match_urls) {
Sys.sleep(round(runif(1, 1, 2)))
tryCatch(df <- .get_clean_understat_json(page_url = each_match, script_name = "shotsData"), error = function(e) data.frame())
if(nrow(df) == 0) {
print(glue::glue("Shots data for match_url {each_match} not available"))
}
shots_data <- rbind(shots_data, df)
}
return(shots_data)
}
#' Clean date fields
#'
#' Returns a date format in YYYY-MM-DD from 'mmm d, yyyy'
#'
#' @param dirty_dates formatted date value
#'
#' @return a cleaned date
#'
#' @importFrom magrittr %>%
#' @noRd
#'
.tm_fix_dates <- function(dirty_dates) {
fix_date <- function(dirty_date) {
if(is.na(dirty_date)) {
clean_date <- NA_character_
} else {
split_string <- strsplit(dirty_date, split = " ") %>% unlist() %>% gsub(",", "", .)
if(length(split_string) != 3) {
clean_date <- NA_character_
} else {
clean_date <- tryCatch(
  lubridate::ymd(paste(split_string[3], split_string[1], split_string[2], sep = "-")) %>% as.character(),
  error = function(e) NA_character_
)
}
}
return(clean_date)
}
clean_dates <- dirty_dates %>% purrr::map_chr(fix_date)
return(clean_dates)
}
#' Replace Empty Values
#'
#' Returns a NA character for empty values
#'
#' @param val a value that can either be empty, or not empty
#'
#' @return NA_character where the extracted value is empty, or the value itself
#' @noRd
#'
.replace_empty_na <- function(val) {
if(length(val) == 0) {
val <- NA_character_
} else {
val <- val
}
return(val)
}
# .pkg_message <- function(msg) {
# if(getOption("mypackage.verbose", default = TRUE)) message(glue::glue(msg))
# return(NULL)
# }
#' Load Page with headers
#'
#' loads webpages with a header passed to read_html
#'
#' @param page_url url of the page wanted to be loaded
#'
#' @return a html webpage
#'
#' @noRd
#'
.load_page <- function(page_url) {
agent <- getOption("worldfootballR.agent", default = "RStudio Desktop (2022.7.1.554); R (4.1.1 x86_64-w64-mingw32 x86_64 mingw32)")
ua <- httr::user_agent(agent)
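# Browse via an rvest session so the request carries the user agent above;
# it can be overridden with options(worldfootballR.agent = "<custom agent>").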
session <- rvest::session(url = page_url, ua)
xml2::read_html(session)
}
# Use 1.8.0 version of jsonlite::fromJSON since 1.8.2's version (that uses base::url()) doesn't work for some cases
#' @importFrom jsonlite validate parse_json
#' @importFrom curl new_handle handle_setheaders curl
.fromJSON <- function(txt, simplifyVector = TRUE, simplifyDataFrame = simplifyVector, simplifyMatrix = simplifyVector, flatten = FALSE, ...) {
# check type
if (!is.character(txt) && !inherits(txt, "connection")) {
stop("Argument 'txt' must be a JSON string, URL or file.")
}
# overload for URL or path
if (is.character(txt) && length(txt) == 1 && nchar(txt, type="bytes") < 2084 && !jsonlite::validate(txt)) {
if (grepl("^https?://", txt, useBytes=TRUE)) {
agent <- getOption("worldfootballR.agent", default = "RStudio Desktop (2022.7.1.554); R (4.1.1 x86_64-w64-mingw32 x86_64 mingw32)")
h <- curl::new_handle(useragent = agent)
curl::handle_setheaders(h, Accept = "application/json, text/*, */*")
txt <- curl::curl(txt, handle = h)
} else if (file.exists(txt)) {
# With files we can never know for sure the encoding. Lets try UTF8 first.
# txt <- raw_to_json(readBin(txt, raw(), file.info(txt)$size));
txt <- file(txt)
}
}
jsonlite::parse_json(
txt = txt,
flatten = flatten,
simplifyVector = simplifyVector,
simplifyDataFrame = simplifyDataFrame,
simplifyMatrix = simplifyMatrix,
...
)
}
# Sometimes .fromJSON doesn't work, but jsonlite::fromJSON will. See https://github.com/JaseZiv/worldfootballR/issues/201
#' @importFrom purrr safely
#' @importFrom jsonlite fromJSON
#'
#' @noRd
safely_from_json <- function(...) {
f <- purrr::safely(.fromJSON, otherwise = NULL, quiet = TRUE)
resp <- f(...)
if (!is.null(resp$result)) {
return(resp$result)
}
jsonlite::fromJSON(...)
}
# ---- end of R/internals.R ----
#' Read in stored RDS data
#'
#' Reads in RDS stored data files typically stored on GitHub
#'
#' @param file_url URL to RDS file(s) for reading in
#'
#' @return data type dependent on RDS file being read in
#' @noRd
#'
.file_reader <- function(file_url) {
tryCatch(readRDS(url(file_url)), error = function(e) data.frame()) %>%
suppressWarnings()
}
# ---- end of R/internals_loading.R ----
#' Load match results
#'
#' Loading version of \code{get_match_results}
#' Returns the game results for a given league season(s) from FBref
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M" or "F"
#' @param season_end_year the year(s) the season concludes
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#'
#' @return returns a dataframe with the results of the competition, season and gender
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' df <- load_match_results(
#' country = c("ITA"), gender = "M", season_end_year = 2021, tier = "1st"
#' )
#' # for results from English 1st div for men and women:
#' df <- load_match_results(
#' country = "ENG", gender = c("M", "F"), season_end_year = 2021, tier = "1st"
#' )
#' })
#' }
load_match_results <- function(country, gender, season_end_year, tier) {
dat_urls <- paste0("https://github.com/JaseZiv/worldfootballR_data/releases/download/match_results/", country, "_match_results.rds")
# collect_date <- .file_reader("https://github.com/JaseZiv/worldfootballR_data/blob/master/data/match_results/scrape_time_match_results.rds?raw=true")
dat_df <- dat_urls %>% purrr::map_df(.file_reader)
if(nrow(dat_df) == 0) {
cli::cli_alert("Data not loaded. Please check parameters")
} else {
dat_df <- dat_df %>%
dplyr::filter(.data[["Country"]] %in% country,
.data[["Gender"]] %in% gender,
.data[["Season_End_Year"]] %in% season_end_year,
.data[["Tier"]] %in% tier) %>%
dplyr::select(-.data[["Tier"]])
cli::cli_alert("Data last updated {attr(dat_df, 'scrape_timestamp')} UTC")
}
return(dat_df)
}
#' Load match competition results
#'
#' Returns the game results for a competition(s),
#' ie League cups or international competitions from FBref.
#' comp_name comes from https://github.com/JaseZiv/worldfootballR_data/tree/master/data/match_results_cups#readme
#'
#' @param comp_name the name of the competition, as listed in the match_results_cups README above
#'
#' @return returns a dataframe with the results of the competition name
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' df <- load_match_comp_results(
#' comp_name = "Coppa Italia"
#' )
#' # for multiple competitions:
#' cups <- c("FIFA Women's World Cup",
#' "FIFA World Cup")
#' df <- load_match_comp_results(
#' comp_name = cups
#' )
#' })
#' }
load_match_comp_results <- function(comp_name) {
f_name <- janitor::make_clean_names(comp_name)
dat_urls <- paste0("https://github.com/JaseZiv/worldfootballR_data/releases/download/match_results_cups/", f_name, "_match_results.rds")
dat_df <- dat_urls %>% purrr::map_df(.file_reader)
if(nrow(dat_df) == 0) {
cli::cli_alert("Data not loaded. Please check parameters")
} else {
dat_df <- dat_df %>%
# dplyr::filter(.data[["Country"]] %in% country,
# .data[["Gender"]] %in% gender,
# .data[["Season_End_Year"]] %in% season_end_year,
# .data[["Tier"]] %in% tier) %>%
dplyr::select(-.data[["Tier"]])
cli::cli_alert("Data last updated {attr(dat_df, 'scrape_timestamp')} UTC")
}
return(dat_df)
}
#' Load Big 5 Euro League Season Stats
#'
#' Loading version of \code{fb_big5_advanced_season_stats}
#' Returns data frame of selected statistics for seasons of the big 5 Euro leagues, for either
#' whole team or individual players.
#' Multiple seasons can be passed to the function, but only one `stat_type` can be selected
#'
#' @param season_end_year the year(s) the season concludes
#' @param stat_type the type of team statistics the user requires
#' @param team_or_player result either summarised for each team, or individual players
#'
#' The statistic type options (stat_type) include:
#'
#' \emph{"standard"}, \emph{"shooting"}, \emph{"passing"}, \emph{"passing_types"},
#' \emph{"gca"}, \emph{"defense"}, \emph{"possession"}, \emph{"playing_time"},
#' \emph{"misc"}, \emph{"keepers"}, \emph{"keepers_adv"}
#'
#' @return returns a dataframe of a selected team or player statistic type for a selected season(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' df <- load_fb_big5_advanced_season_stats(
#' season_end_year = c(2018:2022), stat_type = "defense", team_or_player = "player"
#' )
#'
#' df <- load_fb_big5_advanced_season_stats(
#' season_end_year = 2022, stat_type = "defense", team_or_player = "player"
#' )
#' })
#' }
load_fb_big5_advanced_season_stats <- function(season_end_year = NA, stat_type, team_or_player) {
dat_url <- paste0("https://github.com/JaseZiv/worldfootballR_data/releases/download/fb_big5_advanced_season_stats/big5_", team_or_player, "_", stat_type, ".rds")
# collect_date <- .file_reader("https://github.com/JaseZiv/worldfootballR_data/blob/master/data/fb_big5_advanced_season_stats/scrape_time_big5_advanced_season_stats.rds?raw=true")
dat_df <- tryCatch(.file_reader(dat_url), error = function(e) data.frame())
if(nrow(dat_df) == 0) {
cli::cli_alert("Data not available. Check you have the correct stat_type or team_or_player")
} else {
if(!all(is.na(season_end_year))) {
dat_df <- dat_df %>%
dplyr::filter(.data[["Season_End_Year"]] %in% season_end_year)
}
cli::cli_alert("Data last updated {attr(dat_df, 'scrape_timestamp')} UTC")
}
return(dat_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/load_fb.R
|
#' @importFrom purrr possibly map_dfr
#' @importFrom cli cli_alert
.load_fotmob <- function(country = NULL, league_name = NULL, league_id = NULL, cached = NULL, url_stem) {
fotmob_urls <- .fotmob_get_league_ids(
cached = cached,
country = country,
league_name = league_name,
league_id = league_id
)
url_format <- sprintf(
"https://github.com/JaseZiv/worldfootballR_data/releases/download/%s.rds",
url_stem
)
urls <- sprintf(
url_format,
fotmob_urls$id
)
fp <- purrr::possibly(
.file_reader,
quiet = FALSE,
otherwise = data.frame()
)
res <- purrr::map_dfr(urls, fp)
if(nrow(res) == 0) {
cli::cli_alert("Data not loaded. Please check parameters")
} else {
    ## when multiple data sets are loaded in, this attribute appears to come from the first one
cli::cli_alert("Data last updated {attr(res, 'scrape_timestamp')} UTC")
}
res
}
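# Minimal usage sketch (not run): url_stem is combined with each league id via
# sprintf(), so "fotmob_match_details/%s_match_details" resolves to
# ".../releases/download/fotmob_match_details/47_match_details.rds" for league id 47.
# The call below is illustrative only.
# .load_fotmob(league_id = 47, cached = TRUE, url_stem = "fotmob_match_details/%s_match_details")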
#' Load pre saved fotmob match details
#'
#' Loading version of \code{fotmob_get_match_details}, but for all seasons for which
#' data is available, not just the current season.
#' Note that fotmob only has match details going back to the 2020-21 season for most leagues.
#'
#' @inheritParams fotmob_get_league_matches
#'
#' @return returns a dataframe of league matches
#' @importFrom rlang maybe_missing
#'
#' @examples
#' \dontrun{
#' try({
#' # one league
#' load_fotmob_match_details(
#' country = "ENG",
#' league_name = "Premier League"
#' )
#'
#' ## this is the same output format as the following
#' fotmob_get_match_details(match_id = 3411352)
#'
#' # one league, by id
#' load_fotmob_match_details(league_id = 47)
#'
#' # multiple leagues (could also use ids)
#' load_fotmob_match_details(
#' country = c("ENG", "ESP" ),
#' league_name = c("Premier League", "LaLiga")
#' )
#' })
#' }
#' @export
load_fotmob_match_details <- function(country, league_name, league_id, cached = TRUE) {
.load_fotmob(
country = rlang::maybe_missing(country, NULL),
league_name = rlang::maybe_missing(league_name, NULL),
league_id = rlang::maybe_missing(league_id, NULL),
cached = cached,
url_stem = "fotmob_match_details/%s_match_details"
)
}
#' Load pre saved fotmob match ids by date
#'
#' Loading version of \code{fotmob_get_matches_by_date}. Goes back to August 2017.
#'
#' @inheritParams load_fotmob_match_details
#' @return returns a dataframe of league match ids
#' @importFrom rlang maybe_missing
#' @examples
#' \dontrun{
#' try({
#' ## just load match ids
#' load_fotmob_matches_by_date(
#' country = "ENG",
#' league_name = "Premier League"
#' )
#'
#' ## can also do it for multiple leagues
#' load_fotmob_matches_by_date(
#' country = c("ENG", "ESP" ),
#' league_name = c("Premier League", "LaLiga")
#' )
#' })
#' }
#' @export
load_fotmob_matches_by_date <- function(country, league_name, league_id, cached = TRUE) {
.load_fotmob(
country = rlang::maybe_missing(country, NULL),
league_name = rlang::maybe_missing(league_name, NULL),
league_id = rlang::maybe_missing(league_id, NULL),
cached = cached,
url_stem = "fotmob_matches_by_date/%s_matches_by_date"
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/load_fotmob.R
|
#' Load Understat league shot locations
#'
#' Loading version of \code{understat_league_season_shots}, but for all seasons
#' Returns shooting locations for all matches played in the selected league
#'
#' @param league the available leagues in Understat as outlined below
#'
#' The leagues currently available for Understat are:
#' \emph{"EPL"}, \emph{"La liga}", \emph{"Bundesliga"},
#' \emph{"Serie A"}, \emph{"Ligue 1"}, \emph{"RFPL"}
#'
#' @return returns a dataframe of shooting locations for a selected league
#'
#' @importFrom magrittr %>%
#'
#' @examples
#' \dontrun{
#' try({
#' df <- load_understat_league_shots(league="Serie A")
#' })
#' }
#' @export
load_understat_league_shots <- function(league) {
leagues <- c("EPL", "La liga", "Bundesliga", "Serie A", "Ligue 1", "RFPL")
if(!league %in% leagues) stop("Check league name")
if(league == "La liga") {
league <- "La_liga"
} else if (league == "Serie A") {
league <- "Serie_A"
} else if (league == "Ligue 1") {
league <- "Ligue_1"
}
league_name_clean <- janitor::make_clean_names(league)
# then read in data
dat_urls <- paste0("https://github.com/JaseZiv/worldfootballR_data/releases/download/understat_shots/", league_name_clean, "_shot_data.rds?raw=true")
dat_df <- .file_reader(dat_urls)
if(nrow(dat_df) == 0) {
cli::cli_alert("Data not loaded. Please check parameters")
} else {
cli::cli_alert("Data last updated {attr(dat_df, 'scrape_timestamp')} UTC")
}
return(dat_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/load_understat.R
|
#' Player Mapping Dictionary
#'
#' Returns data frame of players from the top 5 Euro leagues, their player URL
#' and their respective Transfermarkt URL.
#' Currently only for the players who have been in the top 5 leagues since
#' the 2017-2018 season
#'
#' @return returns a dataframe of FBref players and respective Transfermarkt URL
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom readr read_csv
#'
#' @export
#'
#' @examples
#' \donttest{
#' try({
#' mapped_players <- player_dictionary_mapping()
#' })
#' }
player_dictionary_mapping <- function() {
players_mapped <- readr::read_csv("https://github.com/JaseZiv/worldfootballR_data/raw/master/raw-data/fbref-tm-player-mapping/output/fbref_to_tm_mapping.csv",
show_col_types = FALSE)
return(players_mapped)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/player_dictionary_mapping.R
|
#' Get Transfermarkt player market values
#'
#' Returns data frame of player valuations (in Euros) from transfermarkt.com
#' Replaces the deprecated function get_player_market_values
#'
#' @param country_name the country of the league's players
#' @param start_year the start year of the season (2020 for the 20/21 season)
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a dataframe of player valuations for country/seasons
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
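#' @examples
#' \dontrun{
#' try({
#' # A minimal usage sketch; the country name and season below are illustrative.
#' # Valid country names are listed at
#' # https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv
#' df <- tm_player_market_values(
#'   country_name = "England", start_year = 2021
#' )
#' })
#' }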
tm_player_market_values <- function(country_name, start_year, league_url = NA) {
# .pkg_message("Extracting player market values...")
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
tryCatch({meta_df_seasons <- meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name, .data[["season_start_year"]] %in% start_year)}, error = function(e) {meta_df_seasons <- data.frame()})
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} or season {start_year} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
} else {
tryCatch({league_page <- xml2::read_html(league_url)}, error = function(e) {league_page <- c()})
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
comp_name <- league_page %>% rvest::html_nodes(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
country <- league_page %>% rvest::html_nodes(".data-header__club a") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
seasons <- league_page %>% rvest::html_nodes(".chzn-select") %>% rvest::html_nodes("option")
season_start_year <- c()
for(each_season in seasons) {
season_start_year <- c(season_start_year, xml2::xml_attrs(each_season)[["value"]])
}
season_urls <- paste0(league_url, "/plus/?saison_id=", season_start_year)
meta_df_seasons <- data.frame(comp_name=as.character(comp_name), region=NA_character_, country=as.character(country), comp_url=as.character(league_url), season_start_year=as.numeric(season_start_year), season_urls=as.character(season_urls))
tryCatch({meta_df_seasons <- meta_df_seasons %>%
dplyr::filter(.data[["season_start_year"]] %in% start_year)}, error = function(e) {meta_df_seasons <- data.frame()})
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("League URL(s) {league_urls} or seasons {start_year} not found. Please check transfermarkt.com for the correct league URL"))
}
}
all_seasons_urls <- meta_df_seasons$season_urls
all_seasons_df <- data.frame()
for(each_season in all_seasons_urls) {
season_page <- xml2::read_html(each_season)
team_urls <- season_page %>%
rvest::html_nodes("#yw1 .hauptlink a") %>% rvest::html_attr("href") %>%
# rvest::html_elements("tm-tooltip a") %>% rvest::html_attr("href") %>%
unique() %>% paste0(main_url, .)
    # there now appears to be an erroneous URL so we will remove it manually:
if(any(grepl("com#", team_urls))) {
team_urls <- team_urls[-grep(".com#", team_urls)]
}
team_urls <- gsub("startseite", "kader", team_urls) %>%
paste0(., "/plus/1")
# Get Teams ---------------------------------------------------------------
league_season_df <- data.frame()
for(each_team in team_urls) {
team_page <- xml2::read_html(each_team)
team_data <- team_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes(".items") %>% rvest::html_node("tbody")
tab_head_names <- team_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes(".items") %>% rvest::html_nodes("th") %>% rvest::html_text()
# team name
squad <- team_page %>% rvest::html_node(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish()
# numbers
player_num <- team_data %>% rvest::html_nodes(".rn_nummer") %>% rvest::html_text()
if(length(player_num) == 0) {
player_num <- NA_character_
}
# player names
player_name <- team_data %>% rvest::html_nodes(".hauptlink a") %>% rvest::html_text() %>% stringr::str_squish()
if(length(player_name) == 0) {
player_name <- NA_character_
}
# player_url
player_url <- team_data %>% rvest::html_nodes(".hauptlink a") %>% rvest::html_attr("href") %>%
paste0(main_url, .)
if(length(player_url) == 0) {
player_url <- NA_character_
}
# player position
player_position <- team_data %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text() %>% stringr::str_squish()
if(length(player_position) == 0) {
player_position <- NA_character_
}
# birthdate
player_birthday <- team_data %>% rvest::html_nodes("td:nth-child(3)") %>% rvest::html_text()
if(length(player_birthday) == 0) {
player_birthday <- NA_character_
}
# player_nationality
player_nationality <- c()
player_nat <- team_data %>% rvest::html_nodes(".flaggenrahmen:nth-child(1)")
if(length(player_nat) == 0) {
player_nationality <- NA_character_
} else {
for(i in 1:length(player_nat)) {
player_nationality <- c(player_nationality, xml2::xml_attrs(player_nat[[i]])[["title"]])
}
}
# current club - only for previous seasons, not current:
current_club_idx <- grep("Current club", tab_head_names)
if(length(current_club_idx) == 0) {
current_club <- NA_character_
} else {
c_club <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", current_club_idx,")"))
current_club <- c()
for(cc in c_club) {
each_current <- cc %>% rvest::html_nodes("a") %>% rvest::html_nodes("img") %>% rvest::html_attr("alt")
if(length(each_current) == 0) {
each_current <- NA_character_
}
current_club <- c(current_club, each_current)
}
}
# player height
height_idx <- grep("Height", tab_head_names)
if(length(height_idx) == 0) {
player_height_mtrs <- NA_character_
} else {
suppressWarnings(player_height_mtrs <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", height_idx, ")")) %>% rvest::html_text() %>%
gsub(",", "\\.", .) %>% gsub("m", "", .) %>% stringr::str_squish() %>% as.numeric())
}
# player_foot
foot_idx <- grep("Foot", tab_head_names)
if(length(foot_idx) == 0) {
player_foot <- NA_character_
} else {
player_foot <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", foot_idx,")")) %>% rvest::html_text()
}
# date joined club
joined_idx <- grep("Joined", tab_head_names)
if(length(joined_idx) == 0) {
date_joined <- NA_character_
} else {
date_joined <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", joined_idx, ")")) %>% rvest::html_text()
}
# joined from
from_idx <- grep("Signed from", tab_head_names)
if(length(from_idx) == 0) {
joined_from <- NA_character_
} else {
p_club <- tryCatch(team_data %>% rvest::html_nodes(paste0("td:nth-child(", from_idx, ")")), error = function(e) NA)
joined_from <- c()
for(pc in p_club) {
each_past <- pc %>% rvest::html_nodes("a") %>% rvest::html_nodes("img") %>% rvest::html_attr("alt")
if(length(each_past) == 0) {
each_past <- NA_character_
}
joined_from <- c(joined_from, each_past)
}
}
# contract expiry
contract_idx <- grep("Contract", tab_head_names)
if(length(contract_idx) == 0) {
contract_expiry <- NA_character_
} else {
contract_expiry <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", contract_idx, ")")) %>% rvest::html_text()
}
# value
player_market_value <- team_data %>% rvest::html_nodes(".rechts.hauptlink") %>% rvest::html_text()
if(length(player_market_value) == 0) {
player_market_value <- NA_character_
}
suppressWarnings(team_df <- cbind(each_season, squad, player_num, player_name, player_url, player_position, player_birthday, player_nationality, current_club,
player_height_mtrs, player_foot, date_joined, joined_from, contract_expiry, player_market_value) %>% data.frame())
team_df <- team_df %>%
dplyr::rename(season_urls=each_season)
league_season_df <- rbind(league_season_df, team_df)
}
all_seasons_df <- rbind(all_seasons_df, league_season_df)
}
all_seasons_df <- meta_df_seasons %>%
dplyr::left_join(all_seasons_df, by = "season_urls")
all_seasons_df <- all_seasons_df %>%
dplyr::mutate(player_market_value_euro = mapply(.convert_value_to_numeric, player_market_value)) %>%
dplyr::mutate(date_joined = .tm_fix_dates(.data[["date_joined"]]),
contract_expiry = .tm_fix_dates(.data[["contract_expiry"]])) %>%
tidyr::separate(., player_birthday, into = c("Month", "Day", "Year"), sep = " ", remove = F) %>%
dplyr::mutate(player_age = sub(".*(?:\\((.*)\\)).*|.*", "\\1", .data[["Year"]]),
Day = gsub(",", "", .data[["Day"]]) %>% as.numeric(),
Year = as.numeric(gsub("\\(.*", "", .data[["Year"]])),
Month = match(.data[["Month"]], month.abb),
player_dob = suppressWarnings(lubridate::ymd(paste(.data[["Year"]], .data[["Month"]], .data[["Day"]], sep = "-")))) %>%
dplyr::mutate(player_age = as.numeric(gsub("\\D", "", .data[["player_age"]]))) %>%
dplyr::select(.data[["comp_name"]], .data[["region"]], .data[["country"]], .data[["season_start_year"]], .data[["squad"]], .data[["player_num"]], .data[["player_name"]], .data[["player_position"]], .data[["player_dob"]], .data[["player_age"]], .data[["player_nationality"]], .data[["current_club"]],
.data[["player_height_mtrs"]], .data[["player_foot"]], .data[["date_joined"]], .data[["joined_from"]], .data[["contract_expiry"]], .data[["player_market_value_euro"]], .data[["player_url"]])
return(all_seasons_df)
}
#' Get player market values
#'
#' Returns data frame of player valuations (in Euros) from transfermarkt.com
#'
#' @param country_name the country of the league's players
#' @param start_year the start year of the season (2020 for the 20/21 season)
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a dataframe of player valuations for country/seasons
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
get_player_market_values <- function(country_name, start_year, league_url = NA) {
.Deprecated("tm_player_market_values")
# .pkg_message("Extracting player market values...")
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
tryCatch({meta_df_seasons <- meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name, .data[["season_start_year"]] %in% start_year)}, error = function(e) {meta_df_seasons <- data.frame()})
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} or season {start_year} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
} else {
tryCatch({league_page <- xml2::read_html(league_url)}, error = function(e) {league_page <- c()})
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
comp_name <- league_page %>% rvest::html_nodes(".spielername-profil") %>% rvest::html_text()
country <- league_page %>% rvest::html_nodes(".miniflagge")
country <- xml2::xml_attrs(country[[1]])[["title"]]
seasons <- league_page %>% rvest::html_nodes(".chzn-select") %>% rvest::html_nodes("option")
season_start_year <- c()
for(each_season in seasons) {
season_start_year <- c(season_start_year, xml2::xml_attrs(each_season)[["value"]])
}
season_urls <- paste0(league_url, "/plus/?saison_id=", season_start_year)
meta_df_seasons <- data.frame(comp_name=as.character(comp_name), region=NA_character_, country=as.character(country), comp_url=as.character(league_url), season_start_year=as.numeric(season_start_year), season_urls=as.character(season_urls))
tryCatch({meta_df_seasons <- meta_df_seasons %>%
dplyr::filter(.data[["season_start_year"]] %in% start_year)}, error = function(e) {meta_df_seasons <- data.frame()})
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("League URL(s) {league_urls} or seasons {start_year} not found. Please check transfermarkt.com for the correct league URL"))
}
}
all_seasons_urls <- meta_df_seasons$season_urls
all_seasons_df <- data.frame()
for(each_season in all_seasons_urls) {
season_page <- xml2::read_html(each_season)
team_urls <- season_page %>%
rvest::html_nodes("#yw1 .hauptlink a") %>% rvest::html_attr("href") %>%
# rvest::html_elements("tm-tooltip a") %>% rvest::html_attr("href") %>%
unique() %>% paste0(main_url, .)
    # there now appears to be an erroneous URL so we will remove it manually:
if(any(grepl("com#", team_urls))) {
team_urls <- team_urls[-grep(".com#", team_urls)]
}
team_urls <- gsub("startseite", "kader", team_urls) %>%
paste0(., "/plus/1")
# Get Teams ---------------------------------------------------------------
league_season_df <- data.frame()
for(each_team in team_urls) {
team_page <- xml2::read_html(each_team)
team_data <- team_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes(".items") %>% rvest::html_node("tbody")
tab_head_names <- team_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes(".items") %>% rvest::html_nodes("th") %>% rvest::html_text()
# team name
squad <- team_page %>% rvest::html_node(".dataName") %>% rvest::html_text() %>% stringr::str_squish()
# numbers
player_num <- team_data %>% rvest::html_nodes(".rn_nummer") %>% rvest::html_text()
if(length(player_num) == 0) {
player_num <- NA_character_
}
# player names
player_name <- team_data %>% rvest::html_nodes(".hauptlink a") %>% rvest::html_text() %>% stringr::str_squish()
if(length(player_name) == 0) {
player_name <- NA_character_
}
# player_url
player_url <- team_data %>% rvest::html_nodes(".hauptlink a") %>% rvest::html_attr("href") %>%
paste0(main_url, .)
if(length(player_url) == 0) {
player_url <- NA_character_
}
# player position
player_position <- team_data %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text() %>% stringr::str_squish()
if(length(player_position) == 0) {
player_position <- NA_character_
}
# birthdate
player_birthday <- team_data %>% rvest::html_nodes("td:nth-child(3)") %>% rvest::html_text()
if(length(player_birthday) == 0) {
player_birthday <- NA_character_
}
# player_nationality
player_nationality <- c()
player_nat <- team_data %>% rvest::html_nodes(".flaggenrahmen:nth-child(1)")
if(length(player_nat) == 0) {
player_nationality <- NA_character_
} else {
for(i in 1:length(player_nat)) {
player_nationality <- c(player_nationality, xml2::xml_attrs(player_nat[[i]])[["title"]])
}
}
# current club - only for previous seasons, not current:
current_club_idx <- grep("Current club", tab_head_names)
if(length(current_club_idx) == 0) {
current_club <- NA_character_
} else {
c_club <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", current_club_idx,")"))
current_club <- c()
for(cc in c_club) {
each_current <- cc %>% rvest::html_nodes("a") %>% rvest::html_nodes("img") %>% rvest::html_attr("alt")
if(length(each_current) == 0) {
each_current <- NA_character_
}
current_club <- c(current_club, each_current)
}
}
# player height
height_idx <- grep("Height", tab_head_names)
if(length(height_idx) == 0) {
player_height_mtrs <- NA_character_
} else {
suppressWarnings(player_height_mtrs <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", height_idx, ")")) %>% rvest::html_text() %>%
gsub(",", "\\.", .) %>% gsub("m", "", .) %>% stringr::str_squish() %>% as.numeric())
}
# player_foot
foot_idx <- grep("Foot", tab_head_names)
if(length(foot_idx) == 0) {
player_foot <- NA_character_
} else {
player_foot <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", foot_idx,")")) %>% rvest::html_text()
}
# date joined club
joined_idx <- grep("Joined", tab_head_names)
if(length(joined_idx) == 0) {
date_joined <- NA_character_
} else {
date_joined <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", joined_idx, ")")) %>% rvest::html_text()
}
# joined from
from_idx <- grep("Signed from", tab_head_names)
if(length(from_idx) == 0) {
joined_from <- NA_character_
} else {
p_club <- tryCatch(team_data %>% rvest::html_nodes(paste0("td:nth-child(", from_idx, ")")), error = function(e) NA)
joined_from <- c()
for(pc in p_club) {
each_past <- pc %>% rvest::html_nodes("a") %>% rvest::html_nodes("img") %>% rvest::html_attr("alt")
if(length(each_past) == 0) {
each_past <- NA_character_
}
joined_from <- c(joined_from, each_past)
}
}
# contract expiry
contract_idx <- grep("Contract", tab_head_names)
if(length(contract_idx) == 0) {
contract_expiry <- NA_character_
} else {
contract_expiry <- team_data %>% rvest::html_nodes(paste0("td:nth-child(", contract_idx, ")")) %>% rvest::html_text()
}
# value
player_market_value <- team_data %>% rvest::html_nodes(".rechts.hauptlink") %>% rvest::html_text()
if(length(player_market_value) == 0) {
player_market_value <- NA_character_
}
suppressWarnings(team_df <- cbind(each_season, squad, player_num, player_name, player_url, player_position, player_birthday, player_nationality, current_club,
player_height_mtrs, player_foot, date_joined, joined_from, contract_expiry, player_market_value) %>% data.frame())
team_df <- team_df %>%
dplyr::rename(season_urls=each_season)
league_season_df <- rbind(league_season_df, team_df)
}
all_seasons_df <- rbind(all_seasons_df, league_season_df)
}
all_seasons_df <- meta_df_seasons %>%
dplyr::left_join(all_seasons_df, by = "season_urls")
all_seasons_df <- all_seasons_df %>%
dplyr::mutate(player_market_value_euro = mapply(.convert_value_to_numeric, player_market_value)) %>%
dplyr::mutate(date_joined = .tm_fix_dates(.data[["date_joined"]]),
contract_expiry = .tm_fix_dates(.data[["contract_expiry"]])) %>%
tidyr::separate(., player_birthday, into = c("Month", "Day", "Year"), sep = " ", remove = F) %>%
dplyr::mutate(player_age = sub(".*(?:\\((.*)\\)).*|.*", "\\1", .data[["Year"]]),
Day = gsub(",", "", .data[["Day"]]) %>% as.numeric(),
Year = as.numeric(gsub("\\(.*", "", .data[["Year"]])),
Month = match(.data[["Month"]], month.abb),
player_dob = suppressWarnings(lubridate::ymd(paste(.data[["Year"]], .data[["Month"]], .data[["Day"]], sep = "-")))) %>%
dplyr::mutate(player_age = as.numeric(gsub("\\D", "", .data[["player_age"]]))) %>%
dplyr::select(.data[["comp_name"]], .data[["region"]], .data[["country"]], .data[["season_start_year"]], .data[["squad"]], .data[["player_num"]], .data[["player_name"]], .data[["player_position"]], .data[["player_dob"]], .data[["player_age"]], .data[["player_nationality"]], .data[["current_club"]],
.data[["player_height_mtrs"]], .data[["player_foot"]], .data[["date_joined"]], .data[["joined_from"]], .data[["contract_expiry"]], .data[["player_market_value_euro"]], .data[["player_url"]])
return(all_seasons_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/player_market_values.R
|
#' Get Transfermarkt player transfer history
#'
#' Returns data frame of player(s) transfer history from transfermarkt.com
#' Replaces the deprecated function player_transfer_history
#'
#' @param player_urls the player url(s) from transfermarkt
#'
#' @return returns a dataframe of player transfers
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
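#' @examples
#' \dontrun{
#' try({
#' # A minimal usage sketch; the player URL below is illustrative and any
#' # transfermarkt player profile URL(s) can be passed in.
#' transfers <- tm_player_transfer_history(
#'   player_urls = "https://www.transfermarkt.com/massimiliano-allegri/profil/spieler/163501"
#' )
#' })
#' }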
tm_player_transfer_history <- function(player_urls) {
# .pkg_message("Extracting player transfer history data. Please acknowledge transfermarkt.com as the data source.")
single_player_transfer_history <- function(player_url) {
pb$tick()
main_url <- "https://www.transfermarkt.com"
page <- xml2::read_html(player_url)
player_name <- page %>% rvest::html_node("h1") %>% rvest::html_text() %>% gsub("#[[:digit:]]+ ", "", .) %>% stringr::str_squish()
box_holder <- page %>% rvest::html_nodes(".viewport-tracking")
# need to get the index of the box containing transfer history data
box_idx <- box_holder %>% rvest::html_attr('data-viewport') %>% grep("Transferhistorie", .)
all_transfers_player <- box_holder[box_idx]
# now want the rows that contain data, but need to remove heading, column headers and footer
# this step is new as transfermarkt USED TO contain this data in a table element
all_transfer_rows <- all_transfers_player %>% rvest::html_children()
rem_rows_idx <- c(grep("Transfer history", all_transfer_rows %>% rvest::html_text()),
grep("Season", all_transfer_rows %>% rvest::html_text()),
grep("Total", all_transfer_rows %>% rvest::html_text()))
# only keep the necessary rows
all_transfer_rows <- all_transfer_rows[-rem_rows_idx]
# loop through each of the rows
each_player_df <- data.frame()
for(each_row in 1:length(all_transfer_rows)) {
season <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__season") %>% rvest::html_text() %>% stringr::str_squish(), error = function(e) NA_character_)
if(rlang::is_empty(season)) {
each_row_df <- data.frame()
} else {
transfer_date <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__date") %>% rvest::html_text() %>% stringr::str_squish() %>%
.tm_fix_dates() %>% lubridate::ymd(), error = function(e) NA_character_)
# country_flags <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".flagge"), error = function(e) NA)
country_from <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__old-club .tm-player-transfer-history-grid__flag") %>% rvest::html_attr("alt") %>% stringr::str_squish() %>% .replace_empty_na(), error = function(e) NA_character_)
country_to <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__new-club .tm-player-transfer-history-grid__flag") %>% rvest::html_attr("alt"), error = function(e) NA_character_)
        # to handle players who have retired, like: "https://www.transfermarkt.com/massimiliano-allegri/profil/spieler/163501":
if(rlang::is_empty(country_to)) {
country_to <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__new-club .tm-player-transfer-history-grid__club-link") %>% rvest::html_text() %>% stringr::str_squish(), error = function(e) NA_character_)
}
team_from <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__old-club .tm-player-transfer-history-grid__club-link") %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_)
team_to <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__new-club .tm-player-transfer-history-grid__club-link") %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_)
market_value <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__market-value") %>% rvest::html_text() %>%
stringr::str_squish() %>%
.convert_value_to_numeric(), error = function(e) NA_character_)
transfer_value <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__fee") %>% rvest::html_text() %>%
stringr::str_squish() %>%
.convert_value_to_numeric, error = function(e) NA_character_)
# to get contract length, which isn't on the main page listing all transfers:
extra_info_url <- all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__link") %>% rvest::html_attr("href") %>% paste0(main_url, .)
extra_info <- tryCatch(xml2::read_html(extra_info_url), error = function(e) NA)
contract_box <- extra_info %>% rvest::html_nodes(".large-4.columns") %>% rvest::html_node("table") %>% rvest::html_children()
contract_idx <- grep("Remaining contract duration", contract_box %>% rvest::html_text())
if(is.na(extra_info)) {
contract_expiry <- NA
days_remaining <- NA
} else {
text_to_remove <- contract_box[contract_idx] %>% rvest::html_nodes("b") %>% rvest::html_text()
if(length(text_to_remove) == 0) {
contract_expiry <- NA
days_remaining <- NA
} else {
contract_expiry <- contract_box[contract_idx] %>% rvest::html_text() %>%
gsub(text_to_remove, "", .) %>% stringr::str_squish() %>% gsub(".*\\((.*)\\).*", "\\1", .) %>% .tm_fix_dates() %>% lubridate::ymd()
days_remaining <- difftime(contract_expiry, transfer_date, units = c("days")) %>% as.numeric()
}
}
each_row_df <- data.frame(player_name=as.character(player_name), season=as.character(season), transfer_date=lubridate::ymd(transfer_date),
country_from=as.character(country_from), team_from=as.character(team_from), country_to=as.character(country_to),
team_to=as.character(team_to), market_value=as.numeric(market_value), transfer_value=as.numeric(transfer_value),
contract_expiry=lubridate::ymd(contract_expiry), days_remaining=as.numeric(days_remaining))
}
each_player_df <- rbind(each_player_df, each_row_df)
}
return(each_player_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(player_urls))
all_players <- player_urls %>%
purrr::map_df(single_player_transfer_history)
return(all_players)
}
#' Get player transfer history
#'
#' Returns data frame of player(s) transfer history from transfermarkt.com
#'
#' @param player_urls the player url(s) from transfermarkt
#'
#' @return returns a dataframe of player transfers
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
player_transfer_history <- function(player_urls) {
.Deprecated("tm_player_transfer_history")
# .pkg_message("Extracting player transfer history data. Please acknowledge transfermarkt.com as the data source.")
single_player_transfer_history <- function(player_url) {
pb$tick()
main_url <- "https://www.transfermarkt.com"
page <- xml2::read_html(player_url)
player_name <- page %>% rvest::html_node("h1") %>% rvest::html_text() %>% gsub("#[[:digit:]]+ ", "", .) %>% stringr::str_squish()
box_holder <- page %>% rvest::html_nodes(".viewport-tracking")
# need to get the index of the box containing transfer history data
box_idx <- box_holder %>% rvest::html_attr('data-viewport') %>% grep("Transferhistorie", .)
all_transfers_player <- box_holder[box_idx]
# now want the rows that contain data, but need to remove heading, column headers and footer
# this step is new as transfermarkt USED TO contain this data in a table element
all_transfer_rows <- all_transfers_player %>% rvest::html_children()
rem_rows_idx <- c(grep("Transfer history", all_transfer_rows %>% rvest::html_text()),
grep("Season", all_transfer_rows %>% rvest::html_text()),
grep("Total", all_transfer_rows %>% rvest::html_text()))
# only keep the necessary rows
all_transfer_rows <- all_transfer_rows[-rem_rows_idx]
# loop through each of the rows
each_player_df <- data.frame()
for(each_row in 1:length(all_transfer_rows)) {
season <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__season") %>% rvest::html_text() %>% stringr::str_squish(), error = function(e) NA_character_)
if(rlang::is_empty(season)) {
each_row_df <- data.frame()
} else {
transfer_date <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__date") %>% rvest::html_text() %>% stringr::str_squish() %>%
.tm_fix_dates() %>% lubridate::ymd(), error = function(e) NA_character_)
# country_flags <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".flagge"), error = function(e) NA)
country_from <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__old-club .tm-player-transfer-history-grid__flag") %>% rvest::html_attr("alt") %>% stringr::str_squish() %>% .replace_empty_na(), error = function(e) NA_character_)
country_to <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__new-club .tm-player-transfer-history-grid__flag") %>% rvest::html_attr("alt"), error = function(e) NA_character_)
        # to handle players who have retired, like: "https://www.transfermarkt.com/massimiliano-allegri/profil/spieler/163501":
if(rlang::is_empty(country_to)) {
country_to <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__new-club .tm-player-transfer-history-grid__club-link") %>% rvest::html_text() %>% stringr::str_squish(), error = function(e) NA_character_)
}
team_from <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__old-club .tm-player-transfer-history-grid__club-link") %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_)
team_to <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__new-club .tm-player-transfer-history-grid__club-link") %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_)
market_value <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__market-value") %>% rvest::html_text() %>%
stringr::str_squish() %>%
.convert_value_to_numeric(), error = function(e) NA_character_)
transfer_value <- tryCatch(all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__fee") %>% rvest::html_text() %>%
stringr::str_squish() %>%
.convert_value_to_numeric, error = function(e) NA_character_)
# to get contract length, which isn't on the main page listing all transfers:
extra_info_url <- all_transfer_rows[each_row] %>% rvest::html_nodes(".tm-player-transfer-history-grid__link") %>% rvest::html_attr("href") %>% paste0(main_url, .)
extra_info <- tryCatch(xml2::read_html(extra_info_url), error = function(e) NA)
contract_box <- extra_info %>% rvest::html_nodes(".large-4.columns") %>% rvest::html_node("table") %>% rvest::html_children()
contract_idx <- grep("Remaining contract duration", contract_box %>% rvest::html_text())
if(is.na(extra_info)) {
contract_expiry <- NA
days_remaining <- NA
} else {
text_to_remove <- contract_box[contract_idx] %>% rvest::html_nodes("b") %>% rvest::html_text()
if(length(text_to_remove) == 0) {
contract_expiry <- NA
days_remaining <- NA
} else {
contract_expiry <- contract_box[contract_idx] %>% rvest::html_text() %>%
gsub(text_to_remove, "", .) %>% stringr::str_squish() %>% gsub(".*\\((.*)\\).*", "\\1", .) %>% .tm_fix_dates() %>% lubridate::ymd()
days_remaining <- difftime(contract_expiry, transfer_date, units = c("days")) %>% as.numeric()
}
}
each_row_df <- data.frame(player_name=as.character(player_name), season=as.character(season), transfer_date=lubridate::ymd(transfer_date),
country_from=as.character(country_from), team_from=as.character(team_from), country_to=as.character(country_to),
team_to=as.character(team_to), market_value=as.numeric(market_value), transfer_value=as.numeric(transfer_value),
contract_expiry=lubridate::ymd(contract_expiry), days_remaining=as.numeric(days_remaining))
}
each_player_df <- rbind(each_player_df, each_row_df)
}
return(each_player_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(player_urls))
all_players <- player_urls %>%
purrr::map_df(single_player_transfer_history)
return(all_players)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/player_transfer_history.R
|
#' Get FBref team match results
#'
#' Returns all game results for a team in a given season
#' Replaces the deprecated function get_team_match_results
#'
#' @param team_url the URL for the team season
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the results of all games played by the selected team(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' # for single teams:
#' man_city_url <- "https://fbref.com/en/squads/b8fd03ef/Manchester-City-Stats"
#' fb_team_match_results(man_city_url)
#' })
#' }
fb_team_match_results <- function(team_url, time_pause=3) {
# .pkg_message("Scraping team match logs...")
time_wait <- time_pause
get_each_team_log <- function(team_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
team_page <- .load_page(team_url)
team_name <- sub('.*\\/', '', team_url) %>% gsub("-Stats", "", .) %>% gsub("-", " ", .)
opponent_names <- team_page %>% rvest::html_nodes(".left:nth-child(10) a") %>% rvest::html_text()
team_log <- team_page %>%
rvest::html_nodes("#all_matchlogs") %>%
rvest::html_nodes("table") %>%
rvest::html_table() %>% data.frame()
team_log$Opponent <- opponent_names
team_log <- team_log %>%
dplyr::mutate(Team_Url = team_url,
Team = team_name) %>%
dplyr::select(.data[["Team_Url"]], .data[["Team"]], dplyr::everything(), -.data[["Match.Report"]])
team_log <- team_log %>%
dplyr::mutate(Attendance = gsub(",", "", .data[["Attendance"]]) %>% as.numeric(),
GF = as.character(.data[["GF"]]),
GA = as.character(.data[["GA"]]))
return(team_log)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_url))
  all_team_logs <- team_url %>%
    purrr::map_df(get_each_team_log)
  return(all_team_logs)
}
#' Get team match results
#'
#' Returns all game results for a team in a given season
#'
#' @param team_url the URL for the team season
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a dataframe with the results of all games played by the selected team(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' # for single teams:
#' man_city_url <- "https://fbref.com/en/squads/b8fd03ef/Manchester-City-Stats"
#' get_team_match_results(man_city_url)
#' })
#' }
get_team_match_results <- function(team_url, time_pause=3) {
# .pkg_message("Scraping team match logs...")
.Deprecated("fb_team_match_results")
time_wait <- time_pause
get_each_team_log <- function(team_url, time_pause=time_wait) {
pb$tick()
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
team_page <- xml2::read_html(team_url)
team_name <- sub('.*\\/', '', team_url) %>% gsub("-Stats", "", .) %>% gsub("-", " ", .)
opponent_names <- team_page %>% rvest::html_nodes(".left:nth-child(10) a") %>% rvest::html_text()
team_log <- team_page %>%
rvest::html_nodes("#all_matchlogs") %>%
rvest::html_nodes("table") %>%
rvest::html_table() %>% data.frame()
team_log$Opponent <- opponent_names
team_log <- team_log %>%
dplyr::mutate(Team_Url = team_url,
Team = team_name) %>%
dplyr::select(.data[["Team_Url"]], .data[["Team"]], dplyr::everything(), -.data[["Match.Report"]])
team_log <- team_log %>%
dplyr::mutate(Attendance = gsub(",", "", .data[["Attendance"]]) %>% as.numeric(),
GF = as.character(.data[["GF"]]),
GA = as.character(.data[["GA"]]))
return(team_log)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_url))
  all_team_logs <- team_url %>%
    purrr::map_df(get_each_team_log)
  return(all_team_logs)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/team_match_results.R
|
#' Get expiring contracts
#'
#' Returns a data frame of players with expiring contracts for a selected league and time period
#'
#' @param country_name the country of the league's players
#' @param contract_end_year the year the contract is due to expire
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a dataframe of expiring contracts in the selected league
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
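#' @examples
#' \dontrun{
#' try({
#' # A minimal usage sketch; the country name and expiry year below are illustrative.
#' expiring <- tm_expiring_contracts(
#'   country_name = "England", contract_end_year = 2024
#' )
#' })
#' }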
tm_expiring_contracts <- function(country_name, contract_end_year, league_url = NA) {
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
tryCatch({meta_df_seasons <- meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name)}, error = function(e) {meta_df_seasons <- data.frame()})
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
comp_url <- meta_df_seasons$comp_url %>% unique() %>% .[1]
} else {
tryCatch({league_page <- .load_page(league_url)}, error = function(e) {league_page <- c()})
tryCatch({country <- league_page %>%
rvest::html_nodes(".data-header") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)] %>% stringr::str_squish()}, error = function(e) {country_name <- NA_character_})
tryCatch({comp_name <- league_page %>% rvest::html_nodes(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish()},
error = function(e) {country_name <- comp_name})
comp_url <- league_url
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
}
comp_url <- gsub("startseite", "endendevertraege", comp_url)
comp_url <- paste0(comp_url, "/plus/1/galerie/0?jahr=", contract_end_year, "&ausrichtung=alle&spielerposition_id=alle&altersklasse=alle&land_id=0&yt0=Show")
expiring_page <- xml2::read_html(comp_url)
expiring_urls <- expiring_page %>% rvest::html_nodes(".tm-pagination__list-item a") %>% rvest::html_attr("href")
if(length(expiring_urls)==0) {
expiring_pages <- comp_url
} else {
expiring_pages <- expiring_urls %>%
paste0(main_url, .) %>% unique()
}
get_each_page <- function(page_url) {
if(length(expiring_pages) == 0) {
exp_pg <- tryCatch(expiring_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children(),
error = function(e) exp_pg <- NA_character_)
} else {
exp_pg <- xml2::read_html(page_url) %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
}
# exp_pg <- xml2::read_html(page_url) %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
player_name <- tryCatch(exp_pg %>% rvest::html_nodes(".inline-table .hauptlink a") %>% rvest::html_text(),
error = function(e) player_name <- NA_character_)
player_url <- tryCatch(exp_pg %>% rvest::html_nodes(".inline-table .hauptlink a") %>% rvest::html_attr("href") %>% paste0(main_url, .),
error = function(e) player_url <- NA_character_)
position <- tryCatch(exp_pg %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text(),
error = function(e) position <- NA_character_)
nationality <- tryCatch(exp_pg %>% rvest::html_nodes(".flaggenrahmen:nth-child(1)") %>% rvest::html_attr("title"),
error = function(e) nationality <- NA_character_)
second_nationality <- c()
for(i in exp_pg) {
nat <- tryCatch(i %>% rvest::html_nodes("br+ .flaggenrahmen") %>% rvest::html_attr("title") %>% .replace_empty_na(),
error = function(e) nat <- NA_character_)
second_nationality <- tryCatch(c(second_nationality, nat),
error = function(e) second_nationality <- NA_character_)
}
current_club <- tryCatch(exp_pg %>% rvest::html_nodes(".tiny_wappen") %>% rvest::html_attr("alt"),
error = function(e) current_club <- NA_character_)
contract_expiry <- tryCatch(exp_pg %>% rvest::html_nodes(".no-border-rechts+ .zentriert") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) contract_expiry <- NA_character_)
contract_option <- tryCatch(exp_pg %>% rvest::html_nodes(".no-border-rechts~ .zentriert+ .zentriert") %>% rvest::html_text(),
error = function(e) contract_option <- NA_character_)
player_market_value <- tryCatch(exp_pg %>% rvest::html_nodes(".zentriert+ .hauptlink") %>% rvest::html_text(),
error = function(e) player_market_value <- NA_character_)
transfer_fee <- tryCatch(exp_pg %>% rvest::html_nodes(".hauptlink+ .rechts") %>% rvest::html_text(),
error = function(e) transfer_fee <- NA_character_)
agent <- tryCatch(exp_pg %>% rvest::html_nodes(".rechts+ .hauptlink") %>% rvest::html_text() %>% stringr::str_squish(),
error = function(e) agent <- NA_character_)
out_df <- cbind(player_name, player_url, position, nationality, second_nationality,
current_club, contract_expiry, contract_option, player_market_value, transfer_fee, agent) %>% data.frame()
if(is.na(league_url)) {
out_df <- meta_df_seasons %>% dplyr::select(.data[["comp_name"]], .data[["country"]], .data[["comp_url"]]) %>% dplyr::distinct() %>%
cbind(out_df)
} else {
out_df <- cbind(comp_name, country, comp_url, out_df)
}
out_df <- out_df %>%
dplyr::mutate(player_name = as.character(.data[["player_name"]]),
player_url = as.character(.data[["player_url"]]),
position = as.character(.data[["position"]]),
nationality = as.character(.data[["nationality"]]),
second_nationality = as.character(.data[["second_nationality"]]),
current_club = as.character(.data[["current_club"]]),
contract_expiry = lubridate::ymd(.data[["contract_expiry"]]),
contract_option = as.character(.data[["contract_option"]]),
player_market_value = mapply(.convert_value_to_numeric, .data[["player_market_value"]]),
transfer_fee = mapply(.convert_value_to_numeric, .data[["transfer_fee"]]),
agent = as.character(.data[["agent"]]))
return(out_df)
}
f_possibly <- purrr::possibly(get_each_page, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
expiring_pages,
f_possibly
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_expiring_contracts.R
|
#' Get league debutants
#'
#' Returns a data frame of debutants for a selected league
#'
#' @param country_name the country of the league's players
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#' @param debut_type whether you want 'league' debut or 'pro' debut
#' @param debut_start_year the season start year of the beginning of the period you want results for
#' @param debut_end_year the season start year of the end of the period you want results for
#'
#' @return returns a dataframe of players who debuted in the selected league
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
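#' @examples
#' \dontrun{
#' try({
#' # A minimal usage sketch; the parameter values below are illustrative.
#' # debut_type must be either "league" or "pro".
#' debutants <- tm_league_debutants(
#'   country_name = "England", debut_type = "league",
#'   debut_start_year = 2020, debut_end_year = 2021
#' )
#' })
#' }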
tm_league_debutants <- function(country_name, league_url = NA, debut_type, debut_start_year, debut_end_year) {
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
tryCatch({meta_df_seasons <- meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name)}, error = function(e) {meta_df_seasons <- data.frame()})
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
comp_url <- meta_df_seasons$comp_url %>% unique() %>% .[1]
} else {
tryCatch({league_page <- xml2::read_html(league_url)}, error = function(e) {league_page <- c()})
tryCatch({country <- league_page %>%
rvest::html_nodes(".data-header") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)] %>% stringr::str_squish()}, error = function(e) {country_name <- NA_character_})
tryCatch({comp_name <- league_page %>% rvest::html_nodes(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish()},
error = function(e) {country_name <- comp_name})
comp_url <- league_url
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
}
comp_url <- gsub("startseite", "profidebuetanten", comp_url)
comp_url <- paste0(comp_url, "/plus/1/galerie/0?saisonIdVon=", debut_start_year, "&saison_id=", debut_end_year, "&option=")
if(debut_type == "league") {
comp_url <- paste0(comp_url, "liga")
} else if (debut_type == "pro") {
comp_url <- paste0(comp_url, "profi")
} else {
stop("Check debut type is either 'league' or 'pro'")
}
debutants_page <- xml2::read_html(comp_url)
debutants_pages <- debutants_page %>% rvest::html_nodes(".tm-pagination__list-item a") %>% rvest::html_attr("href") %>%
paste0(main_url, .)
get_each_page <- function(page_url) {
debs <- xml2::read_html(page_url) %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
player_name <- tryCatch(debs %>% rvest::html_nodes(".inline-table .hauptlink a") %>% rvest::html_text(),
error = function(e) player_name <- NA_character_)
player_url <- tryCatch(debs %>% rvest::html_nodes(".inline-table .hauptlink a") %>% rvest::html_attr("href") %>% paste0(main_url, .),
error = function(e) player_url <- NA_character_)
position <- tryCatch(debs %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text(),
error = function(e) position <- NA_character_)
    nationality <- tryCatch(debs %>% rvest::html_nodes(".flaggenrahmen:nth-child(1)") %>% rvest::html_attr("title"),
error = function(e) nationality <- NA_character_)
second_nationality <- c()
for(i in debs) {
nat <- tryCatch(i %>% rvest::html_nodes("br+ .flaggenrahmen") %>% rvest::html_attr("title") %>% .replace_empty_na(),
error = function(e) nat <- NA_character_)
second_nationality <- tryCatch(c(second_nationality, nat),
error = function(e) second_nationality <- NA_character_)
}
debut_for <- tryCatch(debs %>% rvest::html_nodes(".zentriert+ .zentriert img") %>% rvest::html_attr("alt"),
error = function(e) debut_for <- NA_character_)
appearances <- tryCatch(debs %>% rvest::html_nodes(".zentriert.hauptlink a") %>% rvest::html_text() %>%
gsub("Not.*", "0", .) %>% as.numeric(),
error = function(e) appearances <- NA_character_)
goals <- tryCatch(debs %>% rvest::html_nodes(".hauptlink+ .zentriert a") %>% rvest::html_text() %>%
gsub("-", "0", .) %>% as.numeric(),
error = function(e) goals <- NA_character_)
minutes_played <- tryCatch(debs %>% rvest::html_nodes("td:nth-child(6)") %>% rvest::html_text() %>%
gsub("\\.", "", .) %>% gsub("'", "", .) %>% gsub("-", "0", .) %>% as.numeric(),
error = function(e) minutes_played <- NA_character_)
debut_date <- tryCatch(debs %>% rvest::html_nodes(".spielDatum") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) debut_date <- NA_character_)
home_team <- tryCatch(debs %>% rvest::html_nodes(".rechts img") %>% rvest::html_attr("alt"),
error = function(e) home_team <- NA_character_)
away_team <- tryCatch(debs %>% rvest::html_nodes(".links img") %>% rvest::html_attr("alt"),
error = function(e) away_team <- NA_character_)
score <- tryCatch(debs %>% rvest::html_nodes(".ergebnis-link span") %>% rvest::html_text(),
error = function(e) score <- NA_character_)
home_goals <- tryCatch(gsub(":.*", "", score) %>% as.numeric(),
error = function(e) home_goals <- NA_character_)
    away_goals <- tryCatch(gsub(".*:", "", score) %>% as.numeric(),
error = function(e) away_goals <- NA_character_)
opponent <- tryCatch(ifelse(debut_for == home_team, away_team, home_team),
error = function(e) opponent <- NA_character_)
goals_for <- tryCatch(ifelse(debut_for == home_team, home_goals, away_goals),
error = function(e) goals_for <- NA_character_)
goals_against <- tryCatch(ifelse(debut_for == home_team, away_goals, home_goals),
error = function(e) goals_against <- NA_character_)
age_debut <- tryCatch(debs %>% rvest::html_nodes(".match+ .hauptlink") %>% rvest::html_text() %>% stringr::str_squish(),
error = function(e) age_debut <- NA_character_)
value_at_debut <- tryCatch(debs %>% rvest::html_nodes(".hauptlink:nth-child(9)") %>% rvest::html_text(),
error = function(e) value_at_debut <- NA_character_)
player_market_value <- tryCatch(debs %>% rvest::html_nodes(".rechts.hauptlink~ .hauptlink+ .hauptlink") %>% rvest::html_text(),
error = function(e) player_market_value <- NA_character_)
out_df <- cbind(player_name, player_url, position, nationality, second_nationality,
debut_for, debut_date, opponent, goals_for, goals_against, age_debut, value_at_debut, player_market_value,
appearances, goals, minutes_played) %>% data.frame()
if(is.na(league_url)) {
out_df <- meta_df_seasons %>% dplyr::select(.data[["comp_name"]], .data[["country"]], .data[["comp_url"]]) %>% dplyr::distinct() %>%
cbind(out_df)
} else {
out_df <- cbind(comp_name, country, comp_url, out_df)
}
out_df <- out_df %>%
dplyr::mutate(player_name = as.character(.data[["player_name"]]),
player_url = as.character(.data[["player_url"]]),
position = as.character(.data[["position"]]),
nationality = as.character(.data[["nationality"]]),
second_nationality = as.character(.data[["second_nationality"]]),
debut_for = as.character(.data[["debut_for"]]),
debut_date = lubridate::ymd(.data[["debut_date"]]),
opponent = as.character(.data[["opponent"]]),
goals_for = as.numeric(.data[["goals_for"]]),
goals_against = as.numeric(.data[["goals_against"]]),
age_debut = as.character(.data[["age_debut"]]),
value_at_debut = mapply(.convert_value_to_numeric, .data[["value_at_debut"]]),
player_market_value = mapply(.convert_value_to_numeric, .data[["player_market_value"]]),
appearances = as.numeric(.data[["appearances"]]),
goals = as.numeric(.data[["goals"]]),
minutes_played = as.numeric(.data[["minutes_played"]]))
out_df <- out_df %>%
dplyr::mutate(debut_type = debut_type)
return(out_df)
}
f_possibly <- purrr::possibly(get_each_page, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
debutants_pages,
f_possibly
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_league_debutants.R
|
#' Get league injuries
#'
#' Returns a data frame of all currently injured players for a selected league
#'
#' @param country_name the country of the league's players
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a dataframe of injured players in the selected league
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
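#' @examples
#' \dontrun{
#' try({
#' # scrapes live transfermarkt.com pages for the selected country's league
#' tm_league_injuries(country_name = "England")
#' })
#' }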
tm_league_injuries <- function(country_name, league_url = NA) {
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
meta_df_seasons <- tryCatch(meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name), error = function(e) data.frame())
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
comp_url <- meta_df_seasons$comp_url %>% unique() %>% .[1]
} else {
league_page <- tryCatch(xml2::read_html(league_url), error = function(e) c())
country <- tryCatch({league_page %>%
rvest::html_nodes(".data-header") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)] %>% stringr::str_squish()}, error = function(e) NA_character_)
comp_name <- tryCatch({league_page %>% rvest::html_nodes(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish()},
error = function(e) NA_character_)
comp_url <- league_url
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
}
comp_url <- gsub("startseite", "verletztespieler", comp_url)
comp_url <- paste0(comp_url, "/plus/1")
injuries_page <- xml2::read_html(comp_url)
injuries_page <- injuries_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
player_name <- tryCatch(injuries_page %>% rvest::html_nodes(".inline-table .hauptlink a") %>% rvest::html_text(),
error = function(e) player_name <- NA_character_)
player_url <- tryCatch(injuries_page %>% rvest::html_nodes(".inline-table .hauptlink a") %>% rvest::html_attr("href") %>% paste0(main_url, .),
error = function(e) player_url <- NA_character_)
position <- tryCatch(injuries_page %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text(),
error = function(e) position <- NA_character_)
current_club <- tryCatch(injuries_page %>% rvest::html_nodes(".tiny_wappen") %>% rvest::html_attr("alt"),
error = function(e) current_club <- NA_character_)
age <- tryCatch(injuries_page %>% rvest::html_nodes(".no-border-rechts+ .zentriert") %>% rvest::html_text() %>% as.numeric(),
error = function(e) age <- NA_integer_)
nationality <- tryCatch(injuries_page %>% rvest::html_nodes(".flaggenrahmen:nth-child(1)") %>% rvest::html_attr("title"),
error = function(e) nationality <- NA_character_)
second_nationality <- c()
for(i in injuries_page) {
nat <- tryCatch(i %>% rvest::html_nodes("br+ .flaggenrahmen") %>% rvest::html_attr("title") %>% .replace_empty_na(),
error = function(e) nat <- NA_character_)
second_nationality <- tryCatch(c(second_nationality, nat),
error = function(e) second_nationality <- NA_character_)
}
injury <- tryCatch(injuries_page %>% rvest::html_nodes("td.links") %>% rvest::html_text(),
error = function(e) injury <- NA_character_)
injured_since <- tryCatch(injuries_page %>% rvest::html_nodes(".links+ td") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) injured_since <- NA_character_)
injured_until <- tryCatch(injuries_page %>% rvest::html_nodes(".links~ .zentriert+ td.zentriert") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) injured_until <- NA_character_)
player_market_value <- tryCatch(injuries_page %>% rvest::html_nodes("td.rechts") %>% rvest::html_text(),
error = function(e) player_market_value <- NA_character_)
out_df <- cbind(player_name, player_url, position, current_club, age, nationality, second_nationality,
injury, injured_since, injured_until, player_market_value) %>% data.frame()
if(is.na(league_url)) {
out_df <- meta_df_seasons %>% dplyr::select(.data[["comp_name"]], .data[["country"]], .data[["comp_url"]]) %>% dplyr::distinct() %>%
cbind(out_df)
} else {
out_df <- cbind(comp_name, country, comp_url, out_df)
}
out_df <- out_df %>%
dplyr::mutate(player_name = as.character(.data[["player_name"]]),
player_url = as.character(.data[["player_url"]]),
position = as.character(.data[["position"]]),
current_club = as.character(.data[["current_club"]]),
age = as.numeric(.data[["age"]]),
nationality = as.character(.data[["nationality"]]),
second_nationality = as.character(.data[["second_nationality"]]),
injury = as.character(.data[["injury"]]),
injured_since = lubridate::ymd(.data[["injured_since"]]),
injured_until = lubridate::ymd(.data[["injured_until"]]),
player_market_value = mapply(.convert_value_to_numeric, .data[["player_market_value"]]))
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_league_injuries.R
|
#' Get weekly league table
#'
#' Returns the league table for each chosen matchday from transfermarkt
#'
#' @param country_name the country of the league's players
#' @param start_year the start year of the season (2020 for the 20/21 season)
#' @param matchday the matchweek number. Can be a vector of matchdays
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a dataframe of the table for a selected league and matchday
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' tm_matchday_table(country_name="England", start_year="2020", matchday=1)
#' tm_matchday_table(country_name="England", start_year="2020", matchday=c(1:5))
#' })
#' }
tm_matchday_table <- function(country_name, start_year, matchday, league_url=NA) {
# .pkg_message("Scraping matchday table. Please acknowledge transfermarkt.com as the data source")
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
meta_df_seasons <- tryCatch(meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name, .data[["season_start_year"]] %in% start_year), error = function(e) data.frame())
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} or season {start_year} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
season_url <- meta_df_seasons$season_urls
} else {
league_page <- tryCatch(xml2::read_html(league_url), error = function(e) c())
country_name <- tryCatch({league_page %>%
rvest::html_nodes(".data-header") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)] %>% stringr::str_squish()}, error = function(e) NA_character_)
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
season_url <- league_url
}
season_url <- gsub("startseite", "spieltagtabelle", season_url)
all_matchdays <- data.frame()
for(each_matchday in matchday) {
# .pkg_message("Extracting league table for matchday {each_matchday}...")
if(is.na(league_url)) {
matchday_url <- paste0(season_url, "&spieltag=", each_matchday)
} else {
matchday_url <- paste0(season_url, "?saison_id=", start_year, "&spieltag=", each_matchday)
}
tab <- xml2::read_html(matchday_url)
league_name <- tab %>% rvest::html_node("h1") %>% rvest::html_text() %>% stringr::str_squish()
weekly_table <- tab %>%
rvest::html_nodes(".box")
weekly_table <- weekly_table[-1]
all_lens <- c()
for(i in weekly_table) {
len <- i %>% rvest::html_nodes("th") %>% length()
all_lens <- c(all_lens, len)
}
idx <- which(all_lens == 9)
if(length(idx) == 1) {
matchday_table <- weekly_table[idx] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame()
} else {
matchday_table_1 <- weekly_table[idx[1]] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame()
matchday_table_2 <- weekly_table[idx[2]] %>% rvest::html_nodes("table") %>% rvest::html_table() %>% data.frame()
matchday_table <- rbind(matchday_table_1, matchday_table_2)
}
matchday_table <- matchday_table %>%
dplyr::mutate(Matchday = each_matchday,
Country=country_name,
League = league_name) %>%
dplyr::select(.data[["Country"]], .data[["League"]], .data[["Matchday"]], Rk=.data$`X.`, Team=.data[["Club.1"]],
P=.data[["Var.4"]], .data[["W"]], .data[["D"]], .data[["L"]], .data[["Goals"]], G_Diff=.data$`X...`, .data[["Pts"]]) %>%
tidyr::separate(.data[["Goals"]], into = c("GF", "GA"), sep = ":") %>%
dplyr::mutate(GF = as.numeric(.data[["GF"]]),
GA = as.numeric(.data[["GA"]])) %>% suppressWarnings()
matchday_table <- matchday_table %>%
dplyr::mutate_at(.vars = c("P", "W", "D", "L", "GF", "GA", "G_Diff", "Pts"), as.numeric) %>% suppressWarnings()
matchday_table <- matchday_table %>%
janitor::clean_names() %>%
dplyr::rename(squad = .data[["team"]])
all_matchdays <- rbind(all_matchdays, matchday_table)
}
return(all_matchdays)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_matchday_table.R
|
#' Get transfermarkt player bios
#'
#' Returns a data frame of player bios from transfermarkt.com
#'
#' @param player_urls player url(s) from transfermarkt
#'
#' @return returns a dataframe of player bios
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' player_url <- "https://www.transfermarkt.com/eden-hazard/profil/spieler/50202"
#' tm_player_bio(player_url)
#' tm_player_bio(player_urls = c("https://www.transfermarkt.com/eden-hazard/profil/spieler/50202",
#' "https://www.transfermarkt.com/sergio-ramos/profil/spieler/25557",
#' "https://www.transfermarkt.com/ivo-grbic/profil/spieler/226073"))
#' })
#' }
tm_player_bio <- function(player_urls) {
# .pkg_message("Scraping player bios. Please acknowledge transfermarkt.com as the data source")
each_bio <- function(player_url) {
pb$tick()
player_page <- tryCatch(xml2::read_html(player_url), error = function(e) NA)
if(!is.na(player_page)) {
player_name <- player_page %>% rvest::html_nodes("div h1") %>% rvest::html_text()
# a site change was detected on 2022-04-12: the player name and valuation markup in the html changed
player_name <- gsub("#[[:digit:]]+ ", "", player_name) %>% stringr::str_squish()
# print(glue::glue("Scraping player_bio for {player_name}"))
X1 <- player_page %>% rvest::html_nodes(".info-table__content--regular") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
X2 <- player_page %>% rvest::html_nodes(".info-table__content--bold") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
a <- cbind(X1, X2) %>% data.frame()
a <- a %>% dplyr::filter(!stringr::str_detect(.data[["X1"]], "Social-Media")) %>%
dplyr::mutate(X1 = gsub(":", "", .data[["X1"]]))
X2 <- tryCatch(player_page %>% rvest::html_nodes(".socialmedia-icons") %>% rvest::html_nodes("a") %>% rvest::html_attr("href"), error = function(e) NA_character_) %>% .replace_empty_na()
X1 <- tryCatch(player_page %>% rvest::html_nodes(".socialmedia-icons") %>% rvest::html_nodes("a") %>% rvest::html_attr("title"), error = function(e) NA_character_) %>% .replace_empty_na()
socials <- cbind(X1, X2)
a <- rbind(a, socials) %>% dplyr::mutate(X1 = ifelse(.data[["X1"]] == "", "Website", .data[["X1"]]))
# handle for duplicate socials
a <- a %>% dplyr::distinct(X1, .keep_all = TRUE)
player_val <- tryCatch(player_page %>% rvest::html_nodes(".tm-player-market-value-development__current-value") %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_) %>% .replace_empty_na()
player_val_max <- tryCatch(player_page %>% rvest::html_nodes(".tm-player-market-value-development__max-value") %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_) %>% .replace_empty_na()
player_val_max_date <- tryCatch(player_page %>% rvest::html_nodes(".tm-player-market-value-development__max div") %>% .[3] %>% rvest::html_text() %>%
stringr::str_squish(), error = function(e) NA_character_) %>% .replace_empty_na()
val_df <- data.frame(X1=c("player_valuation", "max_player_valuation", "max_player_valuation_date"), X2=c(player_val, player_val_max, player_val_max_date))
a <- rbind(a, val_df)
a <- a %>%
dplyr::mutate(player_name = player_name) %>%
tidyr::pivot_wider(names_from = .data[["X1"]], values_from = .data[["X2"]]) %>%
janitor::clean_names() %>%
dplyr::mutate(player_valuation = .convert_value_to_numeric(euro_value = .data[["player_valuation"]]),
max_player_valuation = .convert_value_to_numeric(euro_value = .data[["max_player_valuation"]]),
max_player_valuation_date = .tm_fix_dates(dirty_dates = .data[["max_player_valuation_date"]])) %>%
dplyr::mutate(URL = player_url)
} else {
a <- data.frame()
}
return(a)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(player_urls))
full_bios <- player_urls %>%
purrr::map_df(each_bio)
# some of the following columns may not exist, so this series of if statements will handle for this:
if(any(grepl("date_of_birth", colnames(full_bios)))) {
full_bios <- full_bios %>%
dplyr::mutate(date_of_birth = .tm_fix_dates(.data[["date_of_birth"]]))
}
if(any(grepl("joined", colnames(full_bios)))) {
full_bios <- full_bios %>%
dplyr::mutate(joined = .tm_fix_dates(.data[["joined"]]))
}
if(any(grepl("contract_expires", colnames(full_bios)))) {
full_bios <- full_bios %>%
dplyr::mutate(contract_expires = .tm_fix_dates(.data[["contract_expires"]]))
}
if(any(grepl("date_of_last_contract_extension", colnames(full_bios)))) {
full_bios <- full_bios %>%
dplyr::mutate(date_of_last_contract_extension = .tm_fix_dates(.data[["date_of_last_contract_extension"]]))
}
if(any(grepl("height", colnames(full_bios)))) {
full_bios <- full_bios %>%
dplyr::mutate(height = gsub(",", "\\.", .data[["height"]]) %>% gsub("m", "", .) %>% stringr::str_squish() %>% as.numeric())
}
return(full_bios)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_player_bio.R
|
#' Get player injury history
#'
#' Returns a data frame of a player's injury history from transfermarkt.com
#'
#' @param player_urls player url(s) from transfermarkt
#'
#' @return returns a dataframe of player injury history
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
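#' @examples
#' \dontrun{
#' try({
#' # player_urls are transfermarkt player profile URLs, for example:
#' tm_player_injury_history(
#'   player_urls = "https://www.transfermarkt.com/eden-hazard/profil/spieler/50202"
#' )
#' })
#' }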
tm_player_injury_history <- function(player_urls) {
main_url <- "https://www.transfermarkt.com"
# .pkg_message("Scraping player bios. Please acknowledge transfermarkt.com as the data source")
each_player <- function(player_url) {
pb$tick()
# player_bio <- tm_player_bio(player_urls = player_url)
player_url_fixed <- gsub("profil", "verletzungen", player_url) %>% paste0(., "/plus/1")
injury_page <- xml2::read_html(player_url_fixed)
player_name <- injury_page %>% rvest::html_nodes("h1") %>% rvest::html_text()
injury_urls <- injury_page %>% rvest::html_nodes(".tm-pagination__list-item a") %>% rvest::html_attr("href")
if(length(injury_urls)==0) {
injury_pages <- player_url_fixed
} else {
injury_pages <- injury_urls %>%
paste0(main_url, .) %>% unique()
}
get_each_page <- function(page_url) {
if(length(injury_urls) == 0) {
pg <- tryCatch(injury_page %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children(),
error = function(e) pg <- NA_character_)
} else {
pg <- xml2::read_html(page_url) %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
}
season_injured <- tryCatch(pg %>% rvest::html_nodes("td:nth-child(1)") %>% rvest::html_text(),
error = function(e) season_injured <- NA_character_)
injury <- tryCatch(pg %>% rvest::html_nodes(".zentriert+ .hauptlink") %>% rvest::html_text(),
error = function(e) injury <- NA_character_)
injured_since <- tryCatch(pg %>% rvest::html_nodes(".hauptlink+ .zentriert") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) injured_since <- NA_character_)
injured_until <- tryCatch(pg %>% rvest::html_nodes(".zentriert+ td.zentriert") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) injured_until <- NA_character_)
duration <- tryCatch(pg %>% rvest::html_nodes(".zentriert+ td.rechts") %>% rvest::html_text(),
error = function(e) duration <- NA_character_)
games_missed <- tryCatch(pg %>% rvest::html_nodes(".wappen_verletzung") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings(),
error = function(e) games_missed <- NA_integer_)
club <- tryCatch(pg %>% rvest::html_nodes("img") %>% rvest::html_attr("alt"),
error = function(e) club <- NA_character_)
out_df <- cbind(player_name, player_url, season_injured, injury, injured_since, injured_until, duration, games_missed, club) %>%
suppressWarnings() %>% data.frame()
out_df <- out_df %>%
dplyr::mutate(player_name = as.character(.data[["player_name"]]),
player_url = as.character(.data[["player_url"]]),
season_injured = as.character(.data[["season_injured"]]),
injury = as.character(.data[["injury"]]),
injured_since = lubridate::ymd(.data[["injured_since"]]),
injured_until = lubridate::ymd(.data[["injured_until"]]),
duration = as.character(.data[["duration"]]),
games_missed = as.character(.data[["games_missed"]]),
club = as.character(.data[["club"]]))
# # ----- use the below if you want to include player bio data in injury histories -----#
#
# out_df <- cbind(season_injured, injury, injured_since, injured_until, duration, games_missed, club) %>%
# suppressWarnings() %>% data.frame()
# out_df <- out_df %>%
# dplyr::mutate(season_injured = as.character(.data[["season_injured"]]),
# injury = as.character(.data[["injury"]]),
# injured_since = lubridate::ymd(.data[["injured_since"]]),
# injured_until = lubridate::ymd(.data[["injured_until"]]),
# duration = as.character(.data[["duration"]]),
# games_missed = as.character(.data[["games_missed"]]),
# club = as.character(.data[["club"]]))
return(out_df)
}
f_possibly <- purrr::possibly(get_each_page, otherwise = data.frame(), quiet = FALSE)
player_df <- purrr::map_dfr(
injury_pages,
f_possibly
)
# player_df <- cbind(player_bio, player_df)
return(player_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(player_urls))
f_possibly2 <- purrr::possibly(each_player, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
player_urls,
f_possibly2
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_player_injury_history.R
|
#' Get squad player stats
#'
#' Returns basic stats for players of a team season
#'
#' @param team_url transfermarkt.com team url for a season
#'
#' @return returns a dataframe of all player stats for team(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
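#'
#' @examples
#' \dontrun{
#' try({
#' # the team season URL below is illustrative; confirm the current URL on transfermarkt.com
#' tm_squad_stats(
#'   team_url = "https://www.transfermarkt.com/fc-burnley/startseite/verein/1132/saison_id/2020"
#' )
#' })
#' }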
tm_squad_stats <- function(team_url) {
main_url <- "https://www.transfermarkt.com"
# .pkg_message("Scraping squad player stats. Please acknowledge transfermarkt.com as the data source")
each_squad_stats <- function(each_team_url) {
pb$tick()
team_data_url <- gsub("startseite", "leistungsdaten", each_team_url)
team_data_page <- tryCatch(xml2::read_html(team_data_url), error = function(e) NA)
if(!is.na(team_data_page)) {
team_name <- team_data_page %>% rvest::html_nodes(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish()
league <- team_data_page %>% rvest::html_nodes(".data-header__club a") %>% rvest::html_text() %>% stringr::str_squish()
country <- team_data_page %>% rvest::html_nodes(".vm") %>% rvest::html_attr("title")
team_data_table <- team_data_page %>% rvest::html_nodes("#yw1") %>% rvest::html_node("table") %>% rvest::html_nodes("tbody") %>% rvest::html_children()
player_name <- team_data_table %>% rvest::html_nodes(".hauptlink") %>% rvest::html_nodes(".hide-for-small") %>% rvest::html_text()
player_url <- team_data_table %>% rvest::html_nodes(".hauptlink") %>% rvest::html_nodes(".hide-for-small a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
player_pos <- team_data_table %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text()
player_age <- team_data_table %>% rvest::html_nodes(".posrela+ .zentriert") %>% rvest::html_text()
nationality <- team_data_table %>% rvest::html_nodes(".flaggenrahmen:nth-child(1)") %>% rvest::html_attr("title")
in_squad <- team_data_table %>% rvest::html_nodes("td:nth-child(5)") %>% rvest::html_text() %>%
gsub("-", "0", .) %>% as.numeric()
appearances <- team_data_table %>% rvest::html_nodes("td:nth-child(6)") %>% rvest::html_text() %>%
gsub("Not.*", "0", .) %>% as.numeric()
goals <- team_data_table %>% rvest::html_nodes(".zentriert:nth-child(7)") %>% rvest::html_text() %>%
gsub("-", "0", .) %>% as.numeric()
minutes_played <- team_data_table %>% rvest::html_nodes(".rechts") %>% rvest::html_text() %>%
gsub("\\.", "", .) %>% gsub("'", "", .) %>% gsub("-", "0", .) %>% as.numeric()
team_data_df <- data.frame(team_name = as.character(team_name), league = as.character(league), country = as.character(country),
player_name = as.character(player_name), player_url = as.character(player_url), player_pos = as.character(player_pos),
player_age = as.numeric(player_age), nationality = as.character(nationality), in_squad = as.numeric(in_squad),
appearances = as.numeric(appearances), goals = as.numeric(goals), minutes_played = as.numeric(minutes_played))
} else {
team_data_df <- data.frame()
}
return(team_data_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_url))
f_possibly <- purrr::possibly(each_squad_stats, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
team_url,
f_possibly
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_squad_stats.R
|
#' Get Staff Member's job history
#'
#' Returns all roles a selected staff member (or members) has held, together with performance data
#'
#' @param staff_urls transfermarkt.com staff(s) url (can use tm_league_staff_urls() to get)
#'
#' @return returns a data frame of all roles a selected staff member (or members) has held, together with performance data
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
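#'
#' @examples
#' \dontrun{
#' try({
#' # the staff URL below is a hypothetical placeholder; collect real staff
#' # profile URLs with tm_league_staff_urls()
#' tm_staff_job_history(
#'   staff_urls = "https://www.transfermarkt.com/staff-name/profil/trainer/1234"
#' )
#' })
#' }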
tm_staff_job_history <- function(staff_urls) {
get_each_staff <- function(staff_url) {
pb$tick()
main_staff_url <- staff_url
staff_url <- staff_url %>% gsub("profil", "stationen", .) %>% paste0(., "/plus/1")
staff_pg <- xml2::read_html(staff_url)
name <- staff_pg %>% rvest::html_nodes("h1") %>% rvest::html_text() %>% stringr::str_squish()
current_club <- staff_pg %>% rvest::html_nodes(".data-header__club a") %>% rvest::html_text() %>% .replace_empty_na()
current_role <- staff_pg %>% rvest::html_nodes(".data-header__label b") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
metadata <- staff_pg %>% rvest::html_nodes(".data-header__details")
data_head_top <- metadata %>% rvest::html_nodes(".data-header__items li") %>% rvest::html_text() %>% stringr::str_squish()
# data_vals <- metadata %>% rvest::html_nodes(".data-header__content") %>% rvest::html_text() %>% stringr::str_squish()
meta_df <- data.frame(data_head_top) %>%
tidyr::separate(data_head_top, into = c("data_head", "data_vals"), sep = ":") %>%
tidyr::pivot_wider(names_from = .data[["data_head"]], values_from = .data[["data_vals"]]) %>%
janitor::clean_names()
meta_df <- meta_df %>% tidyr::separate(.data[["date_of_birth_age"]], into = "date_of_birth", sep = " \\(") %>% suppressWarnings()
staff_hist <- staff_pg %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
club <- staff_hist %>% rvest::html_nodes(".no-border-links a") %>% rvest::html_text()
position <- staff_hist %>% rvest::html_nodes("td:nth-child(2)") %>% rvest::html_text()
appointed <- staff_hist %>% rvest::html_nodes(".no-border-links+ .zentriert") %>% rvest::html_text() %>%
gsub(".*\\(", "", .) %>% gsub("\\)", "", .) %>% stringr::str_squish()
contract_expiry <- staff_hist %>% rvest::html_nodes("td:nth-child(4)") %>% rvest::html_text() %>%
gsub(".*\\(", "", .) %>% gsub("\\)", "", .) %>% stringr::str_squish() %>% gsub("expected ", "", .)
days_in_charge <- staff_hist %>% rvest::html_nodes("td:nth-child(5)") %>% rvest::html_text() %>% gsub(" Days", "", .) %>% as.numeric()
matches <- staff_hist %>% rvest::html_nodes("td:nth-child(6)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings()
wins <- staff_hist %>% rvest::html_nodes("td:nth-child(7)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings()
draws <- staff_hist %>% rvest::html_nodes("td:nth-child(8)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings()
losses <- staff_hist %>% rvest::html_nodes("td:nth-child(9)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings()
players_used <- staff_hist %>% rvest::html_nodes("td:nth-child(10)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings()
goals <- staff_hist %>% rvest::html_nodes("td:nth-child(11)") %>% rvest::html_text()
avg_goals_for <- gsub(":.*", "", goals) %>% stringr::str_squish() %>% as.numeric()
avg_goals_against <- gsub(".*:", "", goals) %>% stringr::str_squish() %>% as.numeric()
ppm <- staff_hist %>% rvest::html_nodes("td:nth-child(12)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings()
staff_df <- cbind(name, current_club, current_role, meta_df, position, club, appointed, contract_expiry, days_in_charge, matches, wins, draws, losses, players_used, avg_goals_for, avg_goals_against, ppm)
clean_role <- function(dirty_string, dirt) {
gsub(dirt, "", dirty_string)
}
staff_df <- staff_df %>%
dplyr::mutate(position = mapply(clean_role, dirty_string=position, dirt=club)) %>%
dplyr::mutate(appointed = .tm_fix_dates(.data[["appointed"]]),
contract_expiry = .tm_fix_dates(.data[["contract_expiry"]])) %>%
dplyr::mutate(staff_url = main_staff_url)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(staff_urls))
f_possibly <- purrr::possibly(get_each_staff, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
staff_urls,
f_possibly
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_staff_job_history.R
|
#' Get team staff history
#'
#' Returns all people who have held the selected role in a team's history
#'
#' @param team_urls transfermarkt.com team(s) url for a season
#' @param staff_role the role description which can be found here:
#' https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_staff/tm_staff_types.csv
#'
#' @return returns a data frame of all selected staff roles for a team(s) history
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
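#'
#' @examples
#' \dontrun{
#' try({
#' # the team URL below is illustrative; staff_role must match a value in tm_staff_types.csv
#' tm_team_staff_history(
#'   team_urls = "https://www.transfermarkt.com/fc-burnley/startseite/verein/1132/saison_id/2020",
#'   staff_role = "Manager"
#' )
#' })
#' }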
tm_team_staff_history <- function(team_urls, staff_role = "Manager") {
main_url <- "https://www.transfermarkt.com"
tm_staff_types <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_staff/tm_staff_types.csv", stringsAsFactors = F)
if(!tolower(staff_role) %in% tolower(tm_staff_types$staff_type_text)) stop("Check that staff role exists...")
tm_staff_idx <- tm_staff_types$staff_type_idx[tolower(tm_staff_types$staff_type_text) == tolower(staff_role)]
get_each_team_staff_history <- function(team_url) {
pb$tick()
manager_history_url <- gsub("startseite", "mitarbeiterhistorie", team_url) %>% gsub("saison_id.*", "", .) %>% paste0(., "personalie_id/", tm_staff_idx, "/plus/1")
history_pg <- xml2::read_html(manager_history_url)
team_name <- history_pg %>% rvest::html_nodes("h1") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
league <- history_pg %>% rvest::html_nodes(".data-header__club a") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
country <- history_pg %>% rvest::html_nodes(".data-header__content img") %>% rvest::html_attr("title") %>% .replace_empty_na()
mgrs <- history_pg %>% rvest::html_nodes("#yw1") %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children()
mgr_df <- data.frame()
for(each_row in 1:length(mgrs)) {
mgr_df[each_row, "staff_name"] <- tryCatch(mgrs[each_row] %>% rvest::html_node(".hauptlink") %>% rvest::html_nodes("a") %>% rvest::html_text(),
error = function(e) mgr_df[each_row, "manager_name"] <- NA_character_)
mgr_df[each_row, "staff_url"] <- tryCatch(mgrs[each_row] %>% rvest::html_node(".hauptlink") %>% rvest::html_nodes("a") %>% rvest::html_attr("href") %>% paste0(main_url, .),
error = function(e) mgr_df[each_row, "manager_url"] <- NA_character_)
mgr_df[each_row, "staff_dob"] <- tryCatch(mgrs[each_row] %>% rvest::html_node(".inline-table tr+ tr td") %>% rvest::html_text(),
error = function(e) mgr_df[each_row, "dob"] <- NA_character_)
mgr_df[each_row, "staff_nationality"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes(".zentriert .flaggenrahmen") %>% .[[1]] %>% rvest::html_attr("title"),
error = function(e) mgr_df[each_row, "staff_nationality"] <- NA_character_)
mgr_df[each_row, "staff_nationality_secondary"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes(".zentriert .flaggenrahmen") %>% .[[2]] %>% rvest::html_attr("title"),
error = function(e) mgr_df[each_row, "staff_nationality_secondary"] <- NA_character_)
mgr_df[each_row, "appointed"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td:nth-child(3)") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) mgr_df[each_row, "appointed"] <- NA_character_)
mgr_df[each_row, "end_date"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td:nth-child(4)") %>% rvest::html_text() %>% .tm_fix_dates(),
error = function(e) mgr_df[each_row, "end_date"] <- NA_character_)
mgr_df[each_row, "days_in_post"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td.rechts") %>% rvest::html_text(),
error = function(e) mgr_df[each_row, "days_in_post"] <- NA_character_)
mgr_df[each_row, "matches"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes(".rechts+ td") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings(),
error = function(e) mgr_df[each_row, "matches"] <- NA_character_)
mgr_df[each_row, "wins"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td:nth-child(7)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings(),
error = function(e) mgr_df[each_row, "wins"] <- NA_character_)
mgr_df[each_row, "draws"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td:nth-child(8)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings(),
error = function(e) mgr_df[each_row, "draws"] <- NA_character_)
mgr_df[each_row, "losses"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td:nth-child(9)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings(),
error = function(e) mgr_df[each_row, "losses"] <- NA_character_)
mgr_df[each_row, "ppg"] <- tryCatch(mgrs[each_row] %>% rvest::html_nodes("td:nth-child(10)") %>% rvest::html_text() %>% as.numeric() %>% suppressWarnings(),
error = function(e) mgr_df[each_row, "ppg"] <- NA_character_)
}
# add metadata
mgr_df <- cbind(team_name, league, country, staff_role, mgr_df)
mgr_df <- mgr_df %>%
dplyr::mutate(matches = dplyr::case_when(is.na(.data[["matches"]]) ~ 0, TRUE ~ .data[["matches"]]),
wins = dplyr::case_when(is.na(.data[["wins"]]) ~ 0, TRUE ~ .data[["wins"]]),
draws = dplyr::case_when(is.na(.data[["draws"]]) ~ 0, TRUE ~ .data[["draws"]]),
losses = dplyr::case_when(is.na(.data[["losses"]]) ~ 0, TRUE ~ .data[["losses"]]),
ppg = dplyr::case_when(is.na(.data[["ppg"]]) ~ 0, TRUE ~ .data[["ppg"]])) %>%
dplyr::mutate(appointed = lubridate::ymd(.data[["appointed"]]),
end_date = lubridate::ymd(.data[["end_date"]])) %>%
dplyr::mutate(days_in_post = dplyr::case_when(is.na(.data[["end_date"]]) ~ as.numeric(lubridate::today() - .data[["appointed"]]), TRUE ~ as.numeric(.data[["end_date"]] - .data[["appointed"]])))
return(mgr_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_urls))
f_possibly <- purrr::possibly(get_each_team_staff_history, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
team_urls,
f_possibly
)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_team_staff_history.R
|
#' Team transfer balances
#'
#' Returns aggregated transfer performance for all teams in a chosen league season
#'
#' @param country_name the country of the league's players
#' @param start_year the start year of the season (2020 for the 20/21 season)
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a dataframe of the summarised financial transfer performance of all teams for a league season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
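#'
#' @examples
#' \dontrun{
#' try({
#' # scrapes live transfermarkt.com pages for the selected league season
#' tm_team_transfer_balances(country_name = "England", start_year = 2020)
#' })
#' }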
tm_team_transfer_balances <- function(country_name, start_year, league_url=NA) {
# .pkg_message("Scraping team transfer balances for the season. Please acknowledge transfermarkt.com as the data source.")
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
meta_df_seasons <- tryCatch(meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name, .data[["season_start_year"]] %in% start_year), error = function(e) data.frame())
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} or season {start_year} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
season_url <- meta_df_seasons$season_urls
} else {
league_page <- tryCatch(xml2::read_html(league_url), error = function(e) c())
country_name <- tryCatch({league_page %>%
rvest::html_nodes(".data-header") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)] %>% stringr::str_squish()}, error = function(e) NA_character_)
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
season_url <- paste0(league_url, "?saison_id=", start_year)
}
season_url <- gsub("startseite", "transfers", season_url)
page <- xml2::read_html(season_url)
team_transfers <- page %>% rvest::html_nodes(".large-8") %>% rvest::html_nodes(".box")
country_name <- tryCatch({page %>%
rvest::html_nodes(".data-header") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)] %>% stringr::str_squish()}, error = function(e) NA_character_)
league_name <- page %>% rvest::html_nodes(".data-header__headline-wrapper--oswald") %>% rvest::html_text() %>% stringr::str_squish()
season <- tryCatch({team_transfers[1] %>%
rvest::html_node(".table-header") %>%
rvest::html_text() %>%
stringr::str_squish() %>% gsub("Transfers ", "", .)}, error = function(e) NA_character_)
all_df <- data.frame()
for(i in team_transfers) {
team_name <- i %>%
rvest::html_nodes(".table-header") %>%
rvest::html_text()
if(length(team_name) == 0 || grepl("Transfer", team_name)) {
df <- data.frame()
} else {
val_out <- i %>% rvest::html_nodes(".transfer-einnahmen-ausgaben") %>% rvest::html_text() %>% .[1] %>% stringr::str_squish()
val_in <- i %>% rvest::html_nodes(".transfer-einnahmen-ausgaben") %>% rvest::html_text() %>% .[2] %>% stringr::str_squish()
age_in <- i %>% rvest::html_nodes(".transfer-zusatzinfo-alter") %>% rvest::html_text() %>% .[1] %>% stringr::str_squish()
age_out <- i %>% rvest::html_nodes(".transfer-zusatzinfo-alter") %>% rvest::html_text() %>% .[2] %>% stringr::str_squish()
df <- data.frame(country=country_name, league=league_name, season=season, squad=team_name, expenditure_euros=val_out, income_euros=val_in, avg_age_out=age_out, avg_age_in=age_in)
}
all_df <- dplyr::bind_rows(all_df, df)
}
all_df <- all_df %>%
dplyr::mutate(expenditure_euros = gsub("Expenditure: ", "", .data[["expenditure_euros"]]),
income_euros = gsub("Income: ", "", .data[["income_euros"]]),
avg_age_out = as.numeric(gsub(".*:","", .data[["avg_age_out"]])),
avg_age_in = as.numeric(gsub(".*:","", .data[["avg_age_in"]]))) %>%
dplyr::mutate(expenditure_euros = mapply(.convert_value_to_numeric, .data[["expenditure_euros"]]),
income_euros = mapply(.convert_value_to_numeric, .data[["income_euros"]]))
return(all_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_team_transfer_balances.R
|
#' Get team transfers
#'
#' Returns all transfer arrivals and departures for a given team season
#'
#' @param team_url transfermarkt.com team url for a season
#' @param transfer_window which window the transfer occurred in; options include "all" for both, "summer" or "winter"
#'
#' @return returns a dataframe of all team transfers
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
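#'
#' @examples
#' \dontrun{
#' try({
#' # the team season URL below is illustrative; transfer_window can be "all", "summer" or "winter"
#' tm_team_transfers(
#'   team_url = "https://www.transfermarkt.com/fc-burnley/startseite/verein/1132/saison_id/2020",
#'   transfer_window = "all"
#' )
#' })
#' }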
tm_team_transfers <- function(team_url, transfer_window = "all") {
main_url <- "https://www.transfermarkt.com"
if(!tolower(transfer_window) %in% c("all", "summer", "winter")) stop("check transfer window is either 'all', 'summer' or 'winter'")
# .pkg_message("Scraping team transfer arrivals and departures. Please acknowledge transfermarkt.com as the data source")
each_team_xfer <- function(each_team_url) {
pb$tick()
xfers_url <- gsub("startseite", "transfers", each_team_url)
team_page <- xml2::read_html(xfers_url)
team_name <- team_page %>% rvest::html_nodes("h1") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
league <- team_page %>% rvest::html_nodes(".data-header__club a") %>% rvest::html_text() %>% stringr::str_squish() %>% .replace_empty_na()
country <- team_page %>% rvest::html_nodes(".data-header__content img") %>% rvest::html_attr("title") %>% .replace_empty_na()
season <- xfers_url %>% gsub(".*saison_id/", "", .) %>% .replace_empty_na()
tab_box <- team_page %>% rvest::html_nodes(".box")
tab_names <- tab_box %>% rvest::html_nodes("h2") %>% rvest::html_text() %>% stringr::str_squish()
# need to get the URL to be able to pass transfer window
xfers_window_box <- tab_box[grep("Transfers ", tab_names)]
# there are summer ("s") and winter ("w") transfer periods - which get used will be set by the user?
if(tolower(transfer_window) == "all") {
summer_winter <- c("s", "w")
} else if (tolower(transfer_window) == "summer") {
summer_winter <- "s"
} else {
summer_winter <- "w"
}
team_df <- data.frame()
for(each_window in summer_winter){
xfers_window_url <- xfers_window_box %>% rvest::html_nodes(".content") %>% rvest::html_children() %>% rvest::html_attr("action")
xfers_window_url <- xfers_window_url %>% paste0(main_url, ., "?saison_id=", season, "&pos=&detailpos=&w_s=", each_window)
team_page_window <- xml2::read_html(xfers_window_url)
tab_box_window <- team_page_window %>% rvest::html_nodes(".box")
# need to isolate the arrivals and departures tables
tab_names <- tab_box_window %>% rvest::html_nodes("h2") %>% rvest::html_text() %>% stringr::str_squish()
tab_box_window <- tab_box_window[which(tab_names %in% c("Arrivals", "Departures"))]
both_tabs <- tab_box_window %>% rvest::html_nodes(".responsive-table")
# create output for team of both arrivals and departures
team_df_each_window <- data.frame()
for(i in 1:length(tab_box_window)) {
each_tab <- tryCatch(both_tabs[i] %>% rvest::html_nodes("tbody") %>% .[[1]] %>% rvest::html_children(), error = function(e) NA_character_)
if(any(is.na(each_tab))) {
player_df <- data.frame()
} else {
player_df <- data.frame()
for(j in 1:length(each_tab)) {
player_df[j, "transfer_type"] <- tryCatch(tab_box_window[i] %>% rvest::html_nodes("h2") %>% rvest::html_text() %>% stringr::str_squish(),
error = function(e) player_df[j, "transfer_type"] <- NA_character_)
player_df[j, "player_name"] <- tryCatch(each_tab[j] %>% rvest::html_node(".hauptlink") %>% rvest::html_nodes("a") %>% rvest::html_text(),
error = function(e) player_df[j, "player_name"] <- NA_character_)
player_df[j, "player_url"] <- tryCatch(each_tab[j] %>% rvest::html_node(".hauptlink") %>% rvest::html_nodes("a") %>% rvest::html_attr("href") %>% paste0(main_url, .),
error = function(e) player_df[j, "player_url"] <- NA_character_)
player_df[j, "player_position"] <- tryCatch(each_tab[j] %>% rvest::html_nodes("td:nth-child(2) tr+ tr td") %>% rvest::html_text() %>% .replace_empty_na(),
error = function(e) player_df[j, "player_position"] <- NA_character_)
player_df[j, "player_age"] <- tryCatch(each_tab[j] %>% rvest::html_nodes("td.zentriert:nth-child(3)") %>% rvest::html_text() %>% .replace_empty_na(),
error = function(e) player_df[j, "player_age"] <- NA_character_)
player_df[j, "player_nationality"] <- tryCatch(each_tab[j] %>% rvest::html_nodes(".zentriert .flaggenrahmen") %>% .[1] %>% rvest::html_attr("title") %>% .replace_empty_na(),
error = function(e) player_df[j, "player_nationality"] <- NA_character_)
player_df[j, "club_2"] <- tryCatch(each_tab[j] %>% rvest::html_nodes(".inline-table") %>% rvest::html_nodes("a") %>% .[3] %>% rvest::html_text() %>% .replace_empty_na(),
error = function(e) player_df[j, "club_2"] <- NA_character_)
player_df[j, "league_2"] <- tryCatch(each_tab[j] %>% rvest::html_nodes(".flaggenrahmen+ a") %>% rvest::html_text() %>% .replace_empty_na(),
error = function(e) player_df[j, "league_2"] <- NA_character_)
player_df[j, "country_2"] <- tryCatch(each_tab[j] %>% rvest::html_nodes(".inline-table .flaggenrahmen") %>% rvest::html_attr("alt") %>% .replace_empty_na(),
error = function(e) player_df[j, "country_2"] <- NA_character_)
player_df[j, "transfer_fee"] <- tryCatch(each_tab[j] %>% rvest::html_nodes(".rechts a") %>% rvest::html_text() %>% .replace_empty_na(),
error = function(e) player_df[j, "transfer_fee"] <- NA_character_)
player_df[j, "is_loan"] <- tryCatch(grepl("loan", player_df[j, "transfer_fee"], ignore.case = T),
error = function(e) player_df[j, "is_loan"] <- NA_character_)
player_df[j, "transfer_fee_dup"] <- tryCatch(player_df[j, "transfer_fee"],
error = function(e) player_df[j, "transfer_fee_dup"] <- NA_character_)
if(is.na(each_tab[j])) {
player_df[j, "transfer_fee_notes1"] <- NA_character_
} else {
if(length(each_tab[j] %>% rvest::html_nodes(".rechts.hauptlink a i") %>% rvest::html_text()) == 0) {
player_df[j, "transfer_fee_notes1"] <- NA_character_
} else {
player_df[j, "transfer_fee_notes1"] <- tryCatch(each_tab[j] %>% rvest::html_nodes(".rechts.hauptlink a i") %>% rvest::html_text(),
error = function(e) player_df[j, "transfer_fee_notes1"] <- NA_character_)
}
}
player_df[j, "window"] <- tryCatch(each_window,
error = function(e) player_df[j, "window"] <- NA_character_)
}
}
team_df_each_window <- dplyr::bind_rows(team_df_each_window, player_df)
}
team_df <- dplyr::bind_rows(team_df, team_df_each_window)
}
team_df <- team_df %>%
dplyr::mutate(window = dplyr::case_when(
window == "s" ~ "Summer",
window == "w" ~ "Winter",
TRUE ~ "Unknown"
))
# add metadata
team_df <- cbind(team_name, league, country, season, team_df)
# cleaning up final output data
team_df <- team_df %>%
dplyr::mutate(transfer_fee = ifelse(stringr::str_detect(.data[["transfer_fee_dup"]], "Loan fee:"), .data[["transfer_fee_notes1"]], .data[["transfer_fee"]])) %>%
dplyr::mutate(transfer_fee = mapply(.convert_value_to_numeric, euro_value = .data[["transfer_fee"]])) %>%
dplyr::mutate(transfer_fee_dup = ifelse(is.na(.data[["transfer_fee"]]), .data[["transfer_fee_dup"]], NA_character_),
transfer_fee_dup = gsub("End of loan", "End of loan ", .data[["transfer_fee_dup"]])) %>%
dplyr::rename(transfer_notes = .data[["transfer_fee_dup"]]) %>%
dplyr::select(-.data[["transfer_fee_notes1"]])
#----- Get player stats including goals and appearances: -----#
team_data_url <- gsub("startseite", "leistungsdaten", each_team_url)
team_data_page <- xml2::read_html(team_data_url)
team_data_table <- team_data_page %>% rvest::html_nodes("#yw1") %>% rvest::html_node("table") %>% rvest::html_nodes("tbody") %>% rvest::html_children()
player_name <- team_data_table %>% rvest::html_nodes(".hauptlink") %>% rvest::html_nodes(".hide-for-small") %>% rvest::html_text()
# player_age <- team_data_table %>% rvest::html_nodes(".posrela+ .zentriert") %>% rvest::html_text()
in_squad <- team_data_table %>% rvest::html_nodes("td:nth-child(5)") %>% rvest::html_text() %>%
gsub("-", "0", .) %>% as.numeric()
appearances <- team_data_table %>% rvest::html_nodes("td:nth-child(6)") %>% rvest::html_text() %>%
gsub("Not.*", "0", .) %>% as.numeric()
goals <- team_data_table %>% rvest::html_nodes(".zentriert:nth-child(7)") %>% rvest::html_text() %>%
gsub("-", "0", .) %>% as.numeric()
minutes_played <- team_data_table %>% rvest::html_nodes(".rechts") %>% rvest::html_text() %>%
gsub("\\.", "", .) %>% gsub("'", "", .) %>% gsub("-", "0", .) %>% as.numeric()
team_data_df <- data.frame(player_name = as.character(player_name), in_squad = as.numeric(in_squad),
appearances = as.numeric(appearances), goals = as.numeric(goals), minutes_played = as.numeric(minutes_played))
# now join the two data sets together:
team_df <- team_df %>%
dplyr::left_join(team_data_df, by = c("player_name"))
return(team_df)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_url))
final_output <- team_url %>%
purrr::map_df(each_team_xfer)
final_output <- final_output %>%
dplyr::filter(!is.na(.data[["player_name"]]))
return(final_output)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/tm_team_transfers.R
|
#' Get Understat season match results
#'
#' Returns match results for all matches played in the selected league season from Understat.com
#'
#' @param league the available leagues in Understat as outlined below
#' @param season_start_year the year the season started
#'
#' The leagues currently available for Understat are:
#' \emph{"EPL"}, \emph{"La liga}", \emph{"Bundesliga"},
#' \emph{"Serie A"}, \emph{"Ligue 1"}, \emph{"RFPL"}
#'
#' @return returns a dataframe of match results for a selected league season
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
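#'
#' @examples
#' \dontrun{
#' try({
#' # league must be one of the values listed above
#' understat_league_match_results(league = "EPL", season_start_year = 2020)
#' })
#' }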
understat_league_match_results <- function(league, season_start_year) {
# .pkg_message("Scraping match results data for {league} {season_start_year} season. Please acknowledge understat.com as the data source")
main_url <- "https://understat.com/"
leagues <- c("EPL", "La liga", "Bundesliga", "Serie A", "Ligue 1", "RFPL")
if(!league %in% leagues) stop("Check league name")
if(league == "La liga") {
league <- "La_liga"
} else if (league == "Serie A") {
league <- "Serie_A"
} else if (league == "Ligue 1") {
league <- "Ligue_1"
}
league_url <- paste0(main_url, "league/", league, "/", season_start_year)
# to get available seasons:
# league_page <- xml2::read_html(league_url)
# avail_seasons <- league_page %>% rvest::html_nodes(xpath = '//*[@name="season"]') %>%
# rvest::html_nodes("option") %>% rvest::html_text() %>% gsub("/.*", "", .)
match_results <- .get_clean_understat_json(page_url = league_url, script_name = "datesData") %>%
dplyr::filter(.data[["isResult"]])
match_results <- cbind(league, match_results)
match_results <- match_results %>%
dplyr::rename(match_id=.data[["id"]], home_id=.data[["h.id"]], home_team=.data[["h.title"]], home_abbr=.data[["h.short_title"]], away_id=.data[["a.id"]], away_team=.data[["a.title"]], away_abbr=.data[["a.short_title"]],
home_goals=.data[["goals.h"]], away_goals=.data[["goals.a"]], home_xG=.data[["xG.h"]], away_xG=.data[["xG.a"]],
forecast_win=.data[["forecast.w"]], forecast_draw=.data[["forecast.d"]], forecast_loss=.data[["forecast.l"]])
match_results <- match_results %>%
dplyr::mutate_at(c("home_goals", "away_goals", "home_xG", "away_xG", "forecast_win", "forecast_draw", "forecast_loss"), as.numeric)
return(match_results)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/understat_match_results.R
|
#' Get Understat league season shot locations
#'
#' Returns shooting locations for all matches played in the selected league season from Understat.com
#'
#' @param league the available leagues in Understat as outlined below
#' @param season_start_year the year the season started
#'
#' The leagues currently available for Understat are:
#' \emph{"EPL"}, \emph{"La liga}", \emph{"Bundesliga"},
#' \emph{"Serie A"}, \emph{"Ligue 1"}, \emph{"RFPL"}
#'
#' @return returns a dataframe of shooting locations for a selected league season
#'
#' @importFrom magrittr %>%
#'
#' @export
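#'
#' @examples
#' \dontrun{
#' try({
#' # returns every recorded shot in the selected league season, so can be slow
#' understat_league_season_shots(league = "EPL", season_start_year = 2020)
#' })
#' }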
understat_league_season_shots <- function(league, season_start_year) {
# .pkg_message("Scraping shots data for {league} {season_start_year} season. Please acknowledge understat.com as the data source")
main_url <- "https://understat.com/"
leagues <- c("EPL", "La liga", "Bundesliga", "Serie A", "Ligue 1", "RFPL")
if(!league %in% leagues) stop("Check league name")
if(league == "La liga") {
league <- "La_liga"
} else if (league == "Serie A") {
league <- "Serie_A"
} else if (league == "Ligue 1") {
league <- "Ligue_1"
}
league_url <- paste0(main_url, "league/", league, "/", season_start_year)
shots_data <- .understat_shooting(type_url = league_url)
if(nrow(shots_data) == 0) {
shots_data <- shots_data
} else {
shots_data <- cbind(league, shots_data)
shots_data <- shots_data %>%
dplyr::rename(home_team=.data[["h_team"]], away_team=.data[["a_team"]], home_goals=.data[["h_goals"]], away_goals=.data[["a_goals"]]) %>%
dplyr::mutate_at(c("X", "Y", "xG", "home_goals", "away_goals"), as.numeric)
}
return(shots_data)
}
#' Get Understat team season shot locations
#'
#' Returns shooting locations for all matches played by a selected team from Understat.com
#'
#' @param team_url the URL of the team season
#'
#' @return returns a dataframe of shooting locations for a selected team season
#'
#' @importFrom magrittr %>%
#'
#' @export
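#'
#' @examples
#' \dontrun{
#' try({
#' # the team season URL below is illustrative; check understat.com for the exact URL
#' understat_team_season_shots(team_url = "https://understat.com/team/Liverpool/2020")
#' })
#' }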
understat_team_season_shots <- function(team_url) {
# .pkg_message("Scraping all shots for team {team_url}. Please acknowledge understat.com as the data source")
shots_df <- .understat_shooting(type_url = team_url)
shots_df <- shots_df %>%
dplyr::rename(home_away=.data[["h_a"]], home_team=.data[["h_team"]], away_team=.data[["a_team"]], home_goals=.data[["h_goals"]], away_goals=.data[["a_goals"]]) %>%
dplyr::mutate_at(c("minute", "X", "Y", "xG", "home_goals", "away_goals"), as.numeric)
return(shots_df)
}
#' Get Understat match shot locations
#'
#' Returns shooting locations for a selected match from Understat.com
#'
#' @param match_url the URL of the match played
#'
#' @return returns a dataframe of shooting locations for a selected match
#'
#' @importFrom magrittr %>%
#'
#' @export
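#'
#' @examples
#' \dontrun{
#' try({
#' # the match id below is a placeholder; take the URL from a match page on understat.com
#' understat_match_shots(match_url = "https://understat.com/match/14789")
#' })
#' }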
understat_match_shots <- function(match_url) {
# .pkg_message("Scraping all shots for match {match_url}. Please acknowledge understat.com as the data source")
match_shots_df <- .get_clean_understat_json(page_url = match_url, script_name = "shotsData")
match_shots_df <- match_shots_df %>%
dplyr::rename(home_away=.data[["h_a"]], home_team=.data[["h_team"]], away_team=.data[["a_team"]], home_goals=.data[["h_goals"]], away_goals=.data[["a_goals"]]) %>%
dplyr::mutate_at(c("minute", "X", "Y", "xG", "home_goals", "away_goals"), as.numeric)
return(match_shots_df)
}
#' Get all Understat shot locations for a player
#'
#' Returns shooting locations for a selected player for all matches played from Understat.com
#'
#' @param player_url the URL of a selected player
#'
#' @return returns a dataframe of shooting locations for a selected player
#'
#' @importFrom magrittr %>%
#'
#' @export
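#'
#' @examples
#' \dontrun{
#' try({
#' # the player id below is a placeholder; take the URL from a player page on understat.com
#' understat_player_shots(player_url = "https://understat.com/player/618")
#' })
#' }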
understat_player_shots <- function(player_url) {
# .pkg_message("Scraping all shots for player {player_url}. Please acknowledge understat.com as the data source")
shots_df <- understat_match_shots(match_url = player_url)
return(shots_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/understat_shots.R
|
#' Get Understat team player stats
#'
#' Retrieve Understat team player stats.
#'
#' @inheritParams understat_team_season_shots
#'
#' @return a dataframe of player stats for a selected team season
#'
#' @export
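#'
#' @examples
#' \dontrun{
#' try({
#' # the team season URL below is illustrative; check understat.com for the exact URL
#' understat_team_players_stats(team_url = "https://understat.com/team/Liverpool/2020")
#' })
#' }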
understat_team_players_stats <- function(team_url) {
f_possibly <- purrr::possibly(.understat_team_players_stats, otherwise = data.frame(), quiet = FALSE)
purrr::map_dfr(
team_url,
f_possibly
)
}
.understat_team_players_stats <- function(team_url) {
# .pkg_message("Scraping player stats for team {team_url}. Please acknowledge understat.com as the data source")
players_data <- .get_clean_understat_json(team_url, "playersData")
names(players_data)[names(players_data) == "team_title"] <- "team_name"
names(players_data)[names(players_data) == "id"] <- "player_id"
withr::local_options(list(readr.num_columns = 0))
players_data <- readr::type_convert(players_data)
return(players_data)
}
#' Get Understat team statistics breakdowns
#'
#' Returns a data frame for the selected team(s) with stats broken down in
#' different ways. Breakdown groups include:
#'
#' \emph{"Situation"}, \emph{"Formation}", \emph{"Game state"},
#' \emph{"Timing"}, \emph{"Shot zones"}, \emph{"Attack speed"}, \emph{"Result"}
#'
#' @param team_urls the url(s) of the teams in question
#'
#' @return returns a dataframe of all stat groups and values
#'
#' @importFrom magrittr %>%
#'
#' @export
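#'
#' @examples
#' \dontrun{
#' try({
#' # accepts one or more team season URLs; the one below is illustrative
#' understat_team_stats_breakdown(team_urls = "https://understat.com/team/Liverpool/2020")
#' })
#' }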
understat_team_stats_breakdown <- function(team_urls) {
f_possibly <- purrr::possibly(.understat_team_stats_breakdown, otherwise = data.frame())
purrr::map_dfr(team_urls, f_possibly)
}
.understat_team_stats_breakdown <- function(team_url) {
data_html <-
team_url %>%
rvest::read_html()
team_name <- data_html %>% rvest::html_nodes(".breadcrumb") %>% rvest::html_nodes("li") %>% .[3] %>% rvest::html_text()
# need to do this to get the 'selected' season, otherwise all that gets returned is the current season
season_element <- data_html %>% rvest::html_nodes(xpath = '//*[@name="season"]') %>%
rvest::html_nodes("option")
season_element <- season_element[grep("selected", season_element)]
season <- season_element %>% rvest::html_attr("value") %>% as.numeric()
# statistics data
data_statistics <-
data_html %>%
rvest::html_nodes("script") %>%
as.character() %>%
stringr::str_subset("statisticsData") %>%
stringi::stri_unescape_unicode() %>%
gsub("<script>\n\tvar statisticsData = JSON.parse\\('", "", .) %>%
gsub("'\\);\n</script>", "", .) %>%
jsonlite::parse_json(simplifyVector=TRUE)
.parse_understat <- function(stat_list, stat_group) {
x <- stat_list[stat_group] %>% .[[1]]
stat_group_name <- stat_group
all_names <- names(x)
get_each_stat <- function(stat_name) {
stat_name <- stat_name
stat_vals <- x[stat_name] %>% unname() %>%
do.call(data.frame, .)
out_df <- data.frame(team_name=team_name, season_start_year=season, stat_group_name=stat_group_name, stat_name=stat_name, stringsAsFactors = F)
out_df <- dplyr::bind_cols(out_df, stat_vals)
out_df[,"stat"] <- NULL
return(out_df)
}
purrr::map_df(all_names, get_each_stat)
}
stat_group_options <- names(data_statistics)
full_df <- data.frame()
for(i in stat_group_options) {
test <- .parse_understat(stat_list = data_statistics, stat_group = i)
full_df <- dplyr::bind_rows(full_df, test)
}
return(full_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/understat_teams.R
|
#' Get fbref League URLs
#'
#' Returns the URLs for season leagues of a selected country
#'
#' @param country the three character country code
#' @param gender gender of competition, either "M" or "F"
#' @param season_end_year the year the season(s) concludes (defaults to all available seasons)
#' @param tier the tier of the league, ie '1st' for the EPL or '2nd' for the Championship and so on
#'
#' @return returns a character vector of all fbref league URLs for selected country, season, gender and tier
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_league_urls(country = "ENG", gender = "M", season_end_year = 2021, tier = '1st')
#' })
#' }
fb_league_urls <- function(country, gender, season_end_year, tier = "1st") {
# .pkg_message("Getting league URLs")
main_url <- "https://fbref.com"
country_abbr <- country
gender_M_F <- gender
season_end_year_num <- season_end_year
comp_tier <- tier
seasons <- read.csv("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/all_leages_and_cups/all_competitions.csv", stringsAsFactors = F)
league_seasons_urls <- seasons %>%
dplyr::filter(.data[["country"]] %in% country_abbr,
.data[["gender"]] %in% gender_M_F,
.data[["season_end_year"]] %in% season_end_year_num,
.data[["tier"]] %in% comp_tier) %>%
dplyr::pull(.data[["seasons_urls"]]) %>% unique()
return(league_seasons_urls)
}
#' Get fbref Team URLs
#'
#' Returns the URLs for all teams for a given league
#'
#' @param league_url the league URL (can be from fb_league_urls())
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a character vector of all fbref team URLs for a selected league
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_teams_urls("https://fbref.com/en/comps/9/Premier-League-Stats")
#' })
#' }
fb_teams_urls <- function(league_url, time_pause=3) {
# .pkg_message("Scraping team URLs")
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
league_season_page <- .load_page(league_url)
main_url <- "https://fbref.com"
urls <- league_season_page %>%
rvest::html_nodes("img+ a") %>%
rvest::html_attr("href") %>%
unique() %>% paste0(main_url, .)
return(urls)
}
#' Get fbref Player URLs
#'
#' Returns the URLs for all players for a given team
#'
#' @param team_url the player's team URL (can be from fb_teams_urls())
#' @param time_pause the wait time (in seconds) between page loads
#'
#' @return returns a character vector of all fbref player URLs for a selected team
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
#' @examples
#' \dontrun{
#' try({
#' fb_player_urls("https://fbref.com/en/squads/fd962109/Fulham-Stats")
#' })
#' }
fb_player_urls <- function(team_url, time_pause=3) {
# .pkg_message("Scraping Player URLs")
main_url <- "https://fbref.com"
# put sleep in as per new user agreement on FBref
Sys.sleep(time_pause)
player_page <- .load_page(team_url)
player_urls <- player_page %>%
rvest::html_nodes("#all_stats_standard th a") %>%
rvest::html_attr("href") %>%
paste0(main_url, .)
return(player_urls)
}
#' Get transfermarkt Team URLs
#'
#' Returns the URLs for all teams for a given league season
#'
#' @param country_name the country of the league's players
#' @param start_year the start year of the season (2020 for the 20/21 season)
#' @param league_url league url from transfermarkt.com. To be used when country_name not available in main function
#'
#' @return returns a character vector of all transfermarkt team URLs for a selected league
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#' @importFrom utils read.csv
#'
#' @export
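#' @examples
#' \dontrun{
#' ## illustrative values; available countries and seasons are listed in the
#' ## competition metadata file referenced above
#' tm_league_team_urls(country_name = "England", start_year = 2020)
#' }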
tm_league_team_urls <- function(country_name, start_year, league_url = NA) {
main_url <- "https://www.transfermarkt.com"
if(is.na(league_url)) {
meta_df <- read.csv(url("https://raw.githubusercontent.com/JaseZiv/worldfootballR_data/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"),
stringsAsFactors = F)
meta_df_seasons <- tryCatch(
meta_df %>%
dplyr::filter(.data[["country"]] %in% country_name, .data[["season_start_year"]] %in% start_year),
error = function(e) data.frame()
)
if(nrow(meta_df_seasons) == 0) {
stop(glue::glue("Country {country_name} or season {start_year} not found. Check that the country and season exists at https://github.com/JaseZiv/worldfootballR_data/blob/master/raw-data/transfermarkt_leagues/main_comp_seasons.csv"))
}
season_url <- meta_df_seasons$season_urls
} else {
league_page <- tryCatch(xml2::read_html(league_url), error = function(e) c())
country_name <- tryCatch(
league_page %>%
rvest::html_nodes(".profilheader") %>%
rvest::html_node("img") %>%
rvest::html_attr("alt") %>% .[!is.na(.)],
error = function(e) NA_character_
)
if(length(league_page) == 0) {
stop(glue::glue("League URL(s) {league_url} not found. Please check transfermarkt.com for the correct league URL"))
}
season_url <- paste0(league_url, "?saison_id=", start_year)
}
season_page <- xml2::read_html(season_url)
team_urls <- season_page %>%
rvest::html_nodes("#yw1 .hauptlink a") %>% rvest::html_attr("href") %>%
# rvest::html_elements("tm-tooltip a") %>% rvest::html_attr("href") %>%
unique() %>% paste0(main_url, .)
# there now appears to be an erroneous URL so will remove that manually:
if(any(grepl("com#", team_urls))) {
team_urls <- team_urls[-grep(".com#", team_urls)]
}
return(team_urls)
}
#' Get transfermarkt Player URLs
#'
#' Returns the transfermarkt URLs for all players for a given team
#'
#' @param team_url the player's team URL (can be from tm_league_team_urls())
#'
#' @return returns a character vector of all transfermarkt player URLs for a selected team
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
#'
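#' @examples
#' \dontrun{
#' ## illustrative only: team URLs would normally come from tm_league_team_urls()
#' team_urls <- tm_league_team_urls(country_name = "England", start_year = 2020)
#' tm_team_player_urls(team_url = team_urls[1])
#' }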
tm_team_player_urls <- function(team_url) {
main_url <- "https://www.transfermarkt.com"
team_page <- tryCatch(xml2::read_html(team_url), error = function(e) c())
player_urls <- team_page %>%
rvest::html_nodes(".nowrap a") %>% rvest::html_attr("href") %>%
unique() %>%
paste0(main_url, .)
return(player_urls)
}
#' Get transfermarkt Club Staff URLs
#'
#' Returns the transfermarkt URLs for all staff of selected roles for a given team
#'
#' @param team_urls the staff member's team URL (can be from tm_league_team_urls())
#' @param staff_role the role of the staff member to return URLs for, with options including:
#'
#' \emph{"Manager"}, \emph{"Assistant Manager"}, \emph{"Goalkeeping Coach"},
#' \emph{"Fitness Coach"}, \emph{"Conditioning Coach"}
#'
#' @return returns a character vector of all transfermarkt staff URLs for a selected team(s)
#'
#' @importFrom magrittr %>%
#' @importFrom rlang .data
#'
#' @export
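#' @examples
#' \dontrun{
#' ## illustrative only: team URLs would normally come from tm_league_team_urls()
#' team_urls <- tm_league_team_urls(country_name = "England", start_year = 2020)
#' tm_team_staff_urls(team_urls = team_urls[1:2], staff_role = "Manager")
#' }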
tm_team_staff_urls <- function(team_urls, staff_role) {
if(!staff_role %in% c("Manager", "Assistant Manager", "Goalkeeping Coach", "Fitness Coach", "Conditioning Coach")) stop("Check that staff role is one of:\n'Manager', 'Assistant Manager', 'Goalkeeping Coach', 'Fitness Coach' or 'Conditioning Coach'")
get_each_team <- function(team_url) {
pb$tick()
main_url <- "https://www.transfermarkt.com"
manager_history_url <- gsub("startseite", "mitarbeiter", team_url) %>% gsub("/saison_id.*", "", .)
history_pg <- xml2::read_html(manager_history_url)
coaching <- history_pg %>% rvest::html_nodes(".large-8") %>% rvest::html_nodes(".box") %>% .[1]
name <- coaching %>% rvest::html_nodes(".hauptlink a") %>% rvest::html_text()
url <- coaching %>% rvest::html_nodes(".hauptlink a") %>% rvest::html_attr("href") %>% paste0(main_url, .)
position <- coaching %>% rvest::html_nodes(".inline-table tr+ tr td") %>% rvest::html_text()
coaching_df <- data.frame(name, url, position)
if(staff_role == "Manager") {
url <- coaching_df %>% dplyr::filter(position %in% c("Manager", "Caretaker Manager")) %>% dplyr::pull(.data[["url"]])
} else {
url <- coaching_df %>% dplyr::filter(position == staff_role) %>% dplyr::pull(.data[["url"]])
}
return(url)
}
# create the progress bar with a progress function.
pb <- progress::progress_bar$new(total = length(team_urls))
f_possibly <- purrr::possibly(get_each_team, otherwise = NA_character_, quiet = FALSE)
purrr::map(
team_urls,
f_possibly
) %>% unlist()
}
#' Get Understat team info
#'
#' Retrieve Understat team metadata, including team URLs. Similar to `understatr::get_team_meta`.
#'
#' @param team_names a vector of team names (can be just 1)
#'
#' @return a data.frame
#'
#' @export
#' @examples
#' \dontrun{
#' try({
#' understat_team_meta(team_name = c("Liverpool", "Manchester City"))
#' })
#' }
understat_team_meta <- function(team_names) {
f_possibly <- purrr::possibly(.understat_team_meta, otherwise = data.frame())
purrr::map_dfr(team_names, f_possibly)
}
#' @importFrom stringr str_replace_all
#' @importFrom rvest read_html html_nodes html_attr html_text
.understat_team_meta <- function(team_name) {
# .pkg_message("Scraping {team_name} metadata. Please acknowledge understat.com as the data source")
main_url <- "https://understat.com"
team_name <- stringr::str_replace_all(team_name, " ", "_")
team_url <- paste(main_url, "team", team_name, sep = "/")
team_page <- xml2::read_html(team_url)
year_link <- rvest::html_nodes(team_page, "#header :nth-child(2)")
year_options <- rvest::html_nodes(year_link[2], "option")
team_df <- data.frame(
team_name = team_name,
year = as.numeric(rvest::html_attr(year_options, "value")),
season = rvest::html_text(year_options),
stringsAsFactors = FALSE
)
team_df$url <- paste(team_url, team_df$year, sep = "/")
return(team_df)
}
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/worldfootballr_helpers.R
|
#' @importFrom rstudioapi isAvailable versionInfo
#' @importFrom utils sessionInfo
.onLoad <- function(libname, pkgname) {
agent_option <- getOption("worldfootballR.agent")
if (is.null(agent_option)) {
prefix <- if(interactive() && rstudioapi::isAvailable()) {
version_info <- rstudioapi::versionInfo()
substr(version_info$mode, 1, 1) <- toupper(substr(version_info$mode, 1, 1))
sprintf(
"RStudio %s (%s)",
version_info$mode,
version_info$version
)
} else {
"RStudio Desktop (2022.7.1.554)"
}
session_info <- utils::sessionInfo()
r_version <- session_info$R.version
agent <- sprintf(
"%s; R (%s.%s %s %s %s)",
prefix,
r_version$major,
r_version$minor,
r_version$platform,
r_version$arch,
r_version$os
)
options("worldfootballR.agent" = agent)
}
}
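# Illustrative value of the resulting option (not run; the exact string depends
# on the local R and RStudio installation, and the option is only set when the
# user has not already set it):
# getOption("worldfootballR.agent")
# #> "RStudio Desktop (2023.06.1); R (4.3.1 x86_64-pc-linux-gnu x86_64 linux-gnu)"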
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/R/zzz.R
|
# install.packages("ggsoccer")
library(hexSticker)
library(ggplot2)
library(ggsoccer)
sysfonts::font_add_google(name = "Chivo", family = "chivo")
sysfonts::font_add_google(name = "Play", family = "play")
pitch <- ggplot() +
annotate_pitch(fill = "#538032", colour = "white") +
theme_pitch() +
theme(panel.background = element_rect(fill = "#538032"))
pitch <- pitch + theme_void() + theme_transparent()
sticker(pitch,
package="worldfootballR",
p_family = "play", p_size=6, p_color = "white",
s_x=1, s_y=.8, s_width=1.3, s_height=0.85,
h_fill = "#538032",
url = "https://jaseziv.github.io/worldfootballR/", u_y = 0.07, u_x = 1.0, u_size = 1.2, u_color = "white", u_family = "play",
filename="man/figures/logo.png")
# smaller size hex logo:
sticker(pitch,
package="worldfootballR",
p_family = "play", p_size=6, p_color = "white",
s_x=1, s_y=.8, s_width=1.3, s_height=0.85,
h_fill = "#538032",
url = "https://jaseziv.github.io/worldfootballR/", u_y = 0.07, u_x = 1.0, u_size = 1.2, u_color = "white", u_family = "play",
filename="man/figures/logo_small_size.png") # modify size in viewer to dimensions 181x209 as a png
###########################################################################
# Different Options: ------------------------------------------------------
# sticker(pitch,
# package="worldfootballR",
# p_size=6, p_color = "white",
# s_x=1, s_y=.8, s_width=1.3, s_height=0.85,
# h_fill = "#538032",
# url = "https://jaseziv.github.io/worldfootballR/", u_y = 0.09, u_x = 1.05, u_size = 1.2, u_color = "white",
# filename="man/figures/logo_standard.png")
#
#
# sticker(pitch,
# package="worldfootballR",
# p_family = "chivo",
# p_size=6, p_color = "white",
# s_x=1, s_y=.8, s_width=1.3, s_height=0.85,
# h_fill = "#538032",
# url = "https://jaseziv.github.io/worldfootballR/", u_y = 0.07, u_x = 1.0, u_size = 1.2, u_color = "white", u_family = "chivo",
# filename="man/figures/logo_chivo.png")
#
#
# sticker(pitch,
# package="worldfootballR",
# p_family = "play",
# p_size=6, p_color = "white",
# s_x=1, s_y=.8, s_width=1.3, s_height=0.85,
# h_fill = "#538032",
# h_color = "black",
# spotlight = T, l_y = 0.83,
# url = "https://jaseziv.github.io/worldfootballR/", u_y = 0.07, u_x = 1.0, u_size = 1.2, u_color = "white", u_family = "play",
# filename="man/figures/logo_play_black_border.png")
|
/scratch/gouwar.j/cran-all/cranData/worldfootballR/man/figures/hex_sticker.R
|
#' Export a meteorological data frame in ADMS format
#'
#' Writes a text file in the ADMS format to a location of the user's choosing,
#' with optional interpolation of missing values.
#'
#' @param dat A data frame imported by [importNOAA()].
#' @param out A file name for the ADMS file. The file is written to the working
#' directory by default.
#' @param interp Should interpolation of missing values be undertaken? If `TRUE`
#' linear interpolation is carried out for gaps of up to and including
#' `maxgap`.
#' @param maxgap The maximum gap in hours that should be interpolated where
#' there are missing data when `interp = TRUE`. Data with gaps more than
#' `maxgap` are left as missing.
#'
#' @return `exportADMS()` returns the input `dat` invisibly.
#' @export
#' @examples
#' \dontrun{
#' ## import some data then export it
#' dat <- importNOAA(year = 2012)
#' exportADMS(dat, out = "~/adms_met.MET")
#' }
exportADMS <- function(dat, out = "./ADMS_met.MET", interp = FALSE, maxgap = 2) {
# save input for later
input <- dat
# keep R check quiet
wd <- u <- v <- NULL
## make sure the data do not have gaps
all.dates <- data.frame(
date = seq(ISOdatetime(
year = as.numeric(format(dat$date[1], "%Y")),
month = 1, day = 1, hour = 0, min = 0,
sec = 0, tz = "GMT"
),
ISOdatetime(
year = as.numeric(format(dat$date[1], "%Y")),
month = 12, day = 31, hour = 23, min = 0,
sec = 0, tz = "GMT"
),
by = "hour"
)
)
dat <- merge(dat, all.dates, all = TRUE)
## make sure precipitation is available
if (!"precip" %in% names(dat)) {
dat$precip <- NA
}
if (interp) {
varInterp <- c("ws", "u", "v", "air_temp", "RH", "cl")
# transform wd
dat <- mutate(dat, u = sin(pi * wd / 180),
v = cos(pi * wd / 180))
for (variable in varInterp) {
# if all missing, then don't interpolate
if(all(is.na(dat[[variable]])))
return()
# first fill with linear interpolation
filled <- stats::approx(dat$date, dat[[variable]], xout = dat$date,
na.rm = TRUE, rule = 2, method = "linear")$y
# find out length of missing data
is_missing <- rle(is.na(dat[[variable]]))
is_missing <- rep(ifelse(is_missing$values, is_missing$lengths, 0),
times = is_missing$lengths)
id <- which(is_missing > maxgap)
# update data frame
dat[[variable]] <- filled
dat[[variable]][id] <- NA
}
dat <- mutate(dat, wd = as.vector(atan2(u, v) * 360 / 2 / pi))
## correct for negative wind directions
ids <- which(dat$wd < 0) ## ids where wd < 0
dat$wd[ids] <- dat$wd[ids] + 360
dat <- select(dat, -v, -u)
}
## exports met data to ADMS format file
year <- as.numeric(format(dat$date, "%Y"))
day <- as.numeric(format(dat$date, "%j"))
hour <- as.numeric(format(dat$date, "%H"))
station <- "0000"
# check if present
if (!"cl" %in% names(dat)) dat$cl <- NA
if (!"precip" %in% names(dat)) dat$precip <- NA
## data frame of met data needed
adms <- data.frame(station, year, day, hour,
round(dat$air_temp, 1), round(dat$ws, 1),
round(dat$wd, 1), round(dat$RH, 1),
round(dat$cl), round(dat$precip, 1),
stringsAsFactors = FALSE
)
## print key data capture rates to the screen
dc <- round(100 - 100 * (length(which(is.na(dat$ws))) / length(dat$ws)), 1)
print(paste("Data capture for wind speed:", dc, "%"))
dc <- round(100 - 100 * (length(which(is.na(dat$wd))) / length(dat$wd)), 1)
print(paste("Data capture for wind direction:", dc, "%"))
dc <- round(100 - 100 * (length(which(is.na(dat$air_temp))) / length(dat$air_temp)), 1)
print(paste("Data capture for temperature:", dc, "%"))
dc <- round(100 - 100 * (length(which(is.na(dat$cl))) / length(dat$cl)), 1)
print(paste("Data capture for cloud cover:", dc, "%"))
## replace NA with -999
adms[] <- lapply(adms, function(x) replace(x, is.na(x), -999))
## write the data file
utils::write.table(
adms,
file = out,
col.names = FALSE,
row.names = FALSE,
sep = ",",
quote = FALSE
)
## add the header lines
fConn <- file(out, "r+")
Lines <- readLines(fConn)
writeLines(c(
"VARIABLES:\n10\nSTATION DCNN\nYEAR\nTDAY\nTHOUR\nT0C\nU\nPHI\nRHUM\nCL\nP\nDATA:",
Lines
), con = fConn)
close(fConn)
# return input invisibly
invisible(input)
}
|
/scratch/gouwar.j/cran-all/cranData/worldmet/R/exportADMS.R
|
#' Get information on meteorological sites
#'
#' This function is primarily used to find a site code that can be used to
#' access data using [importNOAA()]. Searches of the approximately 30,000
#' available sites can be carried out based on the site name, or on the
#' nearest locations to a user-supplied latitude and longitude.
#'
#' @title Find a ISD site code and other meta data
#' @param site A site name search string e.g. `site = "heathrow"`. The search
#' string can be partial and can be upper or lower case e.g. `site =
#' "HEATHR"`.
#' @param lat A latitude in decimal degrees to search. Takes the values -90 to
#' 90.
#' @param lon A longitude in decimal degrees to search. Takes values -180 to
#' 180. Negative numbers are west of the Greenwich meridian.
#' @param country The country code. This is a two letter code. For a full
#' listing see <https://www1.ncdc.noaa.gov/pub/data/noaa/isd-history.csv>.
#' @param state The state code. This is a two letter code.
#' @param n The number of nearest sites to search based on `latitude` and
#' `longitude`.
#' @param end.year To help filter sites based on how recent the available data
#' are. `end.year` can be "current", "all" or a numeric year such as 2016, or
#' a range of years e.g. 1990:2016 (which would select any site that had an
#' end date in that range). **By default only sites that have some data for
#' the current year are returned**.
#' @param provider By default a map will be created in which readers may toggle
#' between a vector base map and a satellite/aerial image. `provider` allows
#' users to override this default; see
#' \url{http://leaflet-extras.github.io/leaflet-providers/preview/} for a list
#' of all base maps that can be used. If multiple base maps are provided, they
#' can be toggled between using a "layer control" interface.
#' @param plot If `TRUE` will plot sites on an interactive leaflet map.
#' @param returnMap Should the leaflet map be returned instead of the meta data?
#' Default is `FALSE`.
#' @return A data frame is returned with all available meta data, mostly
#' importantly including a `code` that can be supplied to [importNOAA()]. If
#' latitude and longitude searches are made an approximate distance, `dist` in
#' km is also returned.
#' @export
#' @seealso [getMetaLive()] to download the all meta data to allow re-use and
#' direct querying.
#' @author David Carslaw
#' @examples
#'
#' \dontrun{
#' ## search for sites with name beijing
#' getMeta(site = "beijing")
#' }
#'
#' \dontrun{
#' ## search for near a specified lat/lon - near Beijing airport
#' ## returns 'n' nearest by default
#' getMeta(lat = 40, lon = 116.9)
#' }
getMeta <- function(site = "heathrow",
lat = NA,
lon = NA,
country = NA,
state = NA,
n = 10,
end.year = "current",
provider = c("OpenStreetMap", "Esri.WorldImagery"),
plot = TRUE,
returnMap = FALSE) {
## read the meta data
## download the file, else use the package version
# check if url works and exit gracefully if not
# function to fail gracefully if url is down / moved etc.
not_working <- function(url = "https://www1.ncdc.noaa.gov/pub/data/noaa/isd-history.csv"){
test <- try(suppressWarnings(readLines(url, n = 1)), silent = TRUE)
inherits(test, "try-error")
}
if (not_working()) {
message("Meta data currently not available.")
return()
}
meta <- getMetaLive()
# check year
if (!any(end.year %in% c("current", "all"))) {
if (!is.numeric(end.year)) {
stop("end.year should be one of 'current', 'all' or a numeric 4-digit year such as 2016.")
}
}
# we base the current year as the max available in the meta data
if ("current" %in% end.year)
end.year <-
max(as.numeric(format(meta$END, "%Y")), na.rm = TRUE)
if ("all" %in% end.year)
end.year <- 1900:2100
## search based on name of site
if (!missing(site)) {
## search for station
meta <- meta[grep(site, meta$STATION, ignore.case = TRUE),]
}
## search based on country codes
if (!missing(country) && !is.na(country)) {
## search for country
id <- which(meta$CTRY %in% toupper(country))
meta <- meta[id,]
}
## search based on state codes
if (!missing(state)) {
## search for state
id <- which(meta$ST %in% toupper(state))
meta <- meta[id,]
}
# make sure no missing lat / lon
id <- which(is.na(meta$LON))
if (length(id) > 0)
meta <- meta[-id,]
id <- which(is.na(meta$LAT))
if (length(id) > 0)
meta <- meta[-id,]
# filter by end year
id <- which(format(meta$END, "%Y") %in% end.year)
meta <- meta[id,]
## approximate distance to site
if (!missing(lat) && !missing(lon)) {
r <- 6371 # radius of the Earth
## Coordinates need to be in radians
meta$longR <- meta$LON * pi / 180
meta$latR <- meta$LAT * pi / 180
LON <- lon * pi / 180
LAT <- lat * pi / 180
meta$dist <- acos(sin(LAT) * sin(meta$latR) + cos(LAT) *
cos(meta$latR) * cos(meta$longR - LON)) * r
## sort and return top n nearest
meta <- utils::head(openair:::sortDataFrame(meta, key = "dist"), n)
}
dat <- rename(meta, latitude = LAT, longitude = LON)
names(dat) <- tolower(names(dat))
if (plot) {
content <- paste(
paste0("<b>", dat$station, "</b>"),
paste("<hr><b>Code:</b>", dat$code),
paste("<b>Start:</b>", dat$begin),
paste("<b>End:</b>", dat$end),
sep = "<br/>"
)
if ("dist" %in% names(dat)) {
content <- paste(
content,
paste("<b>Distance:</b>", round(dat$dist, 1), "km"),
sep = "<br/>"
)
}
m <- leaflet::leaflet(dat)
for (i in provider) {
m <- leaflet::addProviderTiles(map = m, provider = i, group = i)
}
m <-
leaflet::addMarkers(
map = m,
~ longitude,
~ latitude,
popup = content,
clusterOptions = leaflet::markerClusterOptions()
)
if (!is.na(lat) && !is.na(lon)) {
m <- leaflet::addCircles(
map = m,
lng = lon,
lat = lat,
weight = 20,
radius = 200,
stroke = TRUE,
color = "red",
popup = paste(
"Search location",
paste("Lat =", dat$latitude),
paste("Lon =", dat$longitude),
sep = "<br/>"
)
)
}
if (length(provider) > 1) {
m <-
leaflet::addLayersControl(
map = m,
baseGroups = provider,
options = leaflet::layersControlOptions(collapsed = FALSE, autoZIndex = FALSE)
)
}
print(m)
}
if (returnMap)
return(m)
else
return(dat)
}
#' Obtain site meta data from NOAA server
#'
#' Download all NOAA meta data, allowing for re-use and direct querying.
#'
#' @param ... Currently unused.
#'
#' @return a [tibble][tibble::tibble-package]
#'
#' @examples
#' \dontrun{
#' meta <- getMetaLive()
#' head(meta)
#' }
#' @export
getMetaLive <- function(...) {
## downloads the whole thing fresh
url <- "https://www1.ncdc.noaa.gov/pub/data/noaa/isd-history.csv"
meta <- read_csv(
url,
skip = 21,
col_names = FALSE,
col_types = cols(
X1 = col_character(),
X2 = col_character(),
X3 = col_character(),
X4 = col_character(),
X5 = col_character(),
X6 = col_character(),
X7 = col_double(),
X8 = col_double(),
X9 = col_double(),
X10 = col_date(format = "%Y%m%d"),
X11 = col_date(format = "%Y%m%d")
),
progress = FALSE
)
# if not available e.g. due to US Government shutdown, flag and exit
# some header data may still be read, so check column number
if (ncol(meta) == 1L)
stop(
"File not available, check \nhttps://www1.ncdc.noaa.gov/pub/data/noaa/ for potential server problems.",
call. = FALSE
)
## names in the meta file
names(meta) <- c(
"USAF",
"WBAN",
"STATION",
"CTRY",
"ST",
"CALL",
"LAT",
"LON",
"ELEV(M)",
"BEGIN",
"END"
)
## full character string of site id
meta$USAF <-
formatC(meta$USAF,
width = 6,
format = "d",
flag = "0")
## code used to query data
meta$code <- paste0(meta$USAF, "-", meta$WBAN)
return(meta)
}
# how to update meta data
# meta <- getMeta(end.year = "all")
# usethis::use_data(meta, overwrite = TRUE, internal = TRUE)
# usethis::use_data(meta, overwrite = TRUE)
|
/scratch/gouwar.j/cran-all/cranData/worldmet/R/getMeta.R
|
#' Import Meteorological data from the NOAA Integrated Surface Database (ISD)
#'
#' This is the main function to import data from the NOAA Integrated Surface
#' Database (ISD). The ISD contains detailed surface meteorological data from
#' around the world for over 30,000 locations.
#'
#' Note the following units for the main variables:
#'
#' \describe{
#'
#' \item{date}{Date/time in POSIXct format. **Note the time zone is GMT (UTC)
#' and may need to be adjusted to merge with other local data. See details
#' below.**}
#'
#' \item{latitude}{Latitude in decimal degrees (-90 to 90).}
#'
#' \item{longitude}{Longitude in decimal degrees (-180 to 180). Negative numbers
#' are west of the Greenwich Meridian.}
#'
#' \item{elevation}{Elevation of site in metres.}
#'
#' \item{wd}{Wind direction in degrees. 90 is from the east.}
#'
#' \item{ws}{Wind speed in m/s.}
#'
#' \item{ceil_hgt}{The height above ground level (AGL) of the lowest cloud or
#' obscuring phenomena layer aloft with 5/8 or more summation total sky cover,
#' which may be predominantly opaque, or the vertical visibility into a
#' surface-based obstruction.}
#'
#' \item{visibility}{The visibility in metres.}
#'
#' \item{air_temp}{Air temperature in degrees Celsius.}
#'
#' \item{dew_point}{The dew point temperature in degrees Celsius.}
#'
#' \item{atmos_pres}{The sea level pressure in millibars.}
#'
#' \item{RH}{The relative humidity (%).}
#'
#' \item{cl_1, ..., cl_3}{Cloud cover for different layers in Oktas (1-8).}
#'
#' \item{cl}{Maximum of cl_1 to cl_3 cloud cover in Oktas (1-8).}
#'
#' \item{cl_1_height, ..., cl_3_height}{Height of the cloud base for each layer
#' in metres.}
#'
#' \item{precip_12}{12-hour precipitation in mm. The sum of this column should
#' give the annual precipitation.}
#'
#' \item{precip_6}{6-hour precipitation in mm.}
#'
#' \item{precip}{This value of precipitation spreads the 12-hour total across
#' the previous 12 hours.}
#'
#'
#' \item{pwc}{The description of the present weather description (if
#' available).}
#'
#' }
#'
#' The data are returned in GMT (UTC). It may be necessary to adjust the time
#' zone when combining with other data. For example, if air quality data were
#' available for Beijing with time zone set to "Etc/GMT-8" (note the negative
#' offset even though Beijing is ahead of GMT. See the `openair` package and
#' manual for more details), then the time zone of the met data can be changed
#' to be the same. One way of doing this would be `attr(met$date, "tzone") <-
#' "Etc/GMT-8"` for a meteorological data frame called `met`. The two data sets
#' could then be merged based on `date`.
#'
#' @param code The identifying code as a character string. The code is a
#' combination of the USAF and the WBAN unique identifiers. The codes are
#' separated by a \dQuote{-} e.g. `code = "037720-99999"`.
#' @param year The year to import. This can be a vector of years e.g. `year =
#' 2000:2005`.
#' @param hourly Should hourly means be calculated? The default is `TRUE`. If
#' `FALSE` then the raw data are returned.
#' @param n.cores Number of cores to use for parallel processing. Default is 1
#' and hence no parallelism.
#' @param quiet If FALSE, print missing sites / years to the screen.
#' @param path If a file path is provided, the data are saved as an rds file at
#' the chosen location e.g. `path = "C:/Users/David"`. Files are saved by
#' year and site.
#' @export
#' @import readr tidyr dplyr
#' @return Returns a data frame of surface observations. The data frame is
#' consistent for use with the `openair` package. Note that the data are
#' returned in GMT (UTC) time zone format. Users may wish to express the data
#' in other time zones, e.g., to merge with air pollution data. The
#' [lubridate][lubridate::lubridate-package] package is useful in this
#' respect.
#' @seealso [getMeta()] to obtain the codes based on various site search
#' approaches.
#' @author David Carslaw
#' @examples
#'
#' \dontrun{
#' ## use Beijing airport code (see getMeta example)
#' dat <- importNOAA(code = "545110-99999", year = 2010:2011)
#' }
importNOAA <- function(code = "037720-99999", year = 2014,
hourly = TRUE,
n.cores = 1, quiet = FALSE, path = NA) {
## main web site https://www.ncei.noaa.gov/products/land-based-station/integrated-surface-database
## formats document https://www.ncei.noaa.gov/data/global-hourly/doc/isd-format-document.pdf
# brief csv file description https://www.ncei.noaa.gov/data/global-hourly/doc/CSV_HELP.pdf
## gis map https://gis.ncdc.noaa.gov/map/viewer/#app=cdo&cfg=cdo&theme=hourly&layers=1
## go through each of the years selected, use parallel processing
i <- station <- . <- NULL
# sites and years to process
site_process <- expand.grid(
code = code,
year = year,
stringsAsFactors = FALSE
)
if (n.cores > 1) {
cl <- parallel::makeCluster(n.cores)
doParallel::registerDoParallel(cl)
dat <- foreach::foreach(
i = 1:nrow(site_process),
.combine = "bind_rows",
.export = "getDat",
.errorhandling = "remove"
) %dopar%
getDat(
year = site_process$year[i],
code = site_process$code[i],
hourly = hourly
)
parallel::stopCluster(cl)
} else {
dat <-
purrr::pmap(site_process, getDat,
hourly = hourly, .progress = "Importing NOAA Data") %>%
purrr::list_rbind()
}
if (is.null(dat) || nrow(dat) == 0) {
print("site(s) do not exist.")
return()
}
# check to see what is missing and print to screen
actual <- select(dat, code, date, station) %>%
mutate(year = as.numeric(format(date, "%Y"))) %>%
group_by(code, year) %>%
slice(1)
actual <- left_join(site_process, actual, by = c("code", "year"))
if (length(which(is.na(actual$date))) > 0 && !quiet) {
print("The following sites / years are missing:")
print(filter(actual, is.na(date)))
}
if (!is.na(path)) {
if (!dir.exists(path)) {
warning("Directory does not exist, file not saved", call. = FALSE)
return()
}
# save as year / site files
writeMet <- function(dat) {
saveRDS(dat, paste0(path, "/", unique(dat$code), "_", unique(dat$year), ".rds"))
return(dat)
}
mutate(dat, year = format(date, "%Y")) %>%
group_by(code, year) %>%
do(writeMet(.))
}
return(dat)
}
getDat <- function(code, year, hourly) {
# function to suppress timeAverage printing
# (can't see option to turn it off)
quiet <- function(x) {
sink(tempfile())
on.exit(sink())
invisible(force(x))
}
## location of data
file.name <- paste0(
"https://www.ncei.noaa.gov/data/global-hourly/access/",
year, "/", gsub(pattern = "-", "", code), ".csv"
)
# suppress warnings because some fields might be missing in the list
# Note that not all available data is returned - just what I think is most useful
met_data <- try(suppressWarnings(read_csv(
file.name,
col_types = cols_only(
STATION = col_character(),
DATE = col_datetime(format = ""),
SOURCE = col_double(),
LATITUDE = col_double(),
LONGITUDE = col_double(),
ELEVATION = col_double(),
NAME = col_character(),
REPORT_TYPE = col_character(),
CALL_SIGN = col_double(),
QUALITY_CONTROL = col_character(),
WND = col_character(),
CIG = col_character(),
VIS = col_character(),
TMP = col_character(),
DEW = col_character(),
SLP = col_character(),
AA1 = col_character(),
AW1 = col_character(),
GA1 = col_character(),
GA2 = col_character(),
GA3 = col_character()
),
progress = FALSE
)), silent = TRUE
)
if (class(met_data)[1] == "try-error") {
message(paste0("Missing data for site ", code, " and year ", year))
met_data <- NULL
return()
}
met_data <- rename(met_data,
code = STATION,
station = NAME,
date = DATE,
latitude = LATITUDE,
longitude = LONGITUDE,
elev = ELEVATION
)
met_data$code <- code
# separate WND column
if ("WND" %in% names(met_data)) {
met_data <- separate(met_data, WND, into = c("wd", "x", "y", "ws", "z"))
met_data <- mutate(met_data,
wd = as.numeric(wd),
wd = ifelse(wd == 999, NA, wd),
ws = as.numeric(ws),
ws = ifelse(ws == 9999, NA, ws),
ws = ws / 10
)
}
# separate TMP column
if ("TMP" %in% names(met_data)) {
met_data <- separate(met_data, TMP, into = c("air_temp", "flag_temp"), sep = ",")
met_data <- mutate(met_data,
air_temp = as.numeric(air_temp),
air_temp = ifelse(air_temp == 9999, NA, air_temp),
air_temp = air_temp / 10
)
}
# separate VIS column
if ("VIS" %in% names(met_data)) {
met_data <- separate(met_data, VIS,
into = c("visibility", "flag_vis1", "flag_vis2", "flag_vis3"),
sep = ",", fill = "right"
)
met_data <- mutate(met_data,
visibility = as.numeric(visibility),
visibility = ifelse(visibility %in% c(9999, 999999), NA, visibility)
)
}
# separate DEW column
if ("DEW" %in% names(met_data)) {
met_data <- separate(met_data, DEW, into = c("dew_point", "flag_dew"), sep = ",")
met_data <- mutate(met_data,
dew_point = as.numeric(dew_point),
dew_point = ifelse(dew_point == 9999, NA, dew_point),
dew_point = dew_point / 10
)
}
# separate SLP column
if ("SLP" %in% names(met_data)) {
met_data <- separate(met_data, SLP,
into = c("atmos_pres", "flag_pres"), sep = ",",
fill = "right"
)
met_data <- mutate(met_data,
atmos_pres = as.numeric(atmos_pres),
atmos_pres = ifelse(atmos_pres %in% c(99999, 999999), NA, atmos_pres),
atmos_pres = atmos_pres / 10
)
}
# separate CIG (sky condition) column
if ("CIG" %in% names(met_data)) {
met_data <- separate(met_data, CIG,
into = c("ceil_hgt", "flag_sky1", "flag_sky2", "flag_sky3"),
sep = ",", fill = "right"
)
met_data <- mutate(met_data,
ceil_hgt = as.numeric(ceil_hgt),
ceil_hgt = ifelse(ceil_hgt == 99999, NA, ceil_hgt)
)
}
## relative humidity - general formula based on T and dew point
met_data$RH <- 100 * ((112 - 0.1 * met_data$air_temp + met_data$dew_point) /
(112 + 0.9 * met_data$air_temp))^8
if ("GA1" %in% names(met_data)) {
# separate GA1 (cloud layer 1 height, amount) column
met_data <- separate(met_data, GA1,
into = c("cl_1", "code_1", "cl_1_height", "code_2", "cl_1_type", "code_3"),
sep = ","
)
met_data <- mutate(met_data,
cl_1 = as.numeric(cl_1),
cl_1 = ifelse((is.na(cl_1) & ceil_hgt == 22000), 0, cl_1),
cl_1 = ifelse(cl_1 == 99, NA, cl_1),
cl_1_height = as.numeric(cl_1_height),
cl_1_height = ifelse(cl_1_height == 99999, NA, cl_1_height)
)
}
if ("GA2" %in% names(met_data)) {
met_data <- separate(met_data, GA2,
into = c("cl_2", "code_1", "cl_2_height", "code_2", "cl_2_type", "code_3"),
sep = ","
)
met_data <- mutate(met_data,
cl_2 = as.numeric(cl_2),
cl_2 = ifelse(cl_2 == 99, NA, cl_2),
cl_2_height = as.numeric(cl_2_height),
cl_2_height = ifelse(cl_2_height == 99999, NA, cl_2_height)
)
}
if ("GA3" %in% names(met_data)) {
met_data <- separate(met_data, GA3,
into = c("cl_3", "code_1", "cl_3_height", "code_2", "cl_3_type", "code_3"),
sep = ","
)
met_data <- mutate(met_data,
cl_3 = as.numeric(cl_3),
cl_3 = ifelse(cl_3 == 99, NA, cl_3),
cl_3_height = as.numeric(cl_3_height),
cl_3_height = ifelse(cl_3_height == 99999, NA, cl_3_height)
)
}
## for cloud cover, make new 'cl' max of 3 cloud layers
if ("cl_3" %in% names(met_data)) {
met_data$cl <- pmax(met_data$cl_1, met_data$cl_2, met_data$cl_3, na.rm = TRUE)
}
# PRECIP AA1
if ("AA1" %in% names(met_data)) {
met_data <- separate(met_data, AA1,
into = c("precip_code", "precip_raw", "code_1", "code_2"),
sep = ","
)
met_data <- mutate(met_data,
precip_raw = as.numeric(precip_raw),
precip_raw = ifelse(precip_raw == 9999, NA, precip_raw),
precip_raw = precip_raw / 10
)
# deal with 6 and 12 hour precip
id <- which(met_data$precip_code == "06")
if (length(id) > 0) {
met_data$precip_6 <- NA
met_data$precip_6[id] <- met_data$precip_raw[id]
}
id <- which(met_data$precip_code == "12")
if (length(id) > 0) {
met_data$precip_12 <- NA
met_data$precip_12[id] <- met_data$precip_raw[id]
}
}
# weather codes, AW1
if ("AW1" %in% names(met_data)) {
met_data <- separate(met_data, AW1,
into = c("pwc", "code_1"),
sep = ",", fill = "right"
)
met_data <- left_join(met_data, worldmet::weatherCodes, by = "pwc")
met_data <- select(met_data, -pwc) %>%
rename(pwc = description)
}
## select the variables we want
met_data <- select(met_data, any_of(c(
"date", "code", "station", "latitude", "longitude", "elev",
"ws", "wd", "air_temp", "atmos_pres",
"visibility", "dew_point", "RH",
"ceil_hgt",
"cl_1", "cl_2", "cl_3", "cl",
"cl_1_height", "cl_2_height",
"cl_3_height", "pwc", "precip_12",
"precip_6", "precip"
)))
## present weather is character and cannot be averaged, take first
if ("pwc" %in% names(met_data) && hourly) {
pwc <- met_data[c("date", "pwc")]
pwc$date2 <- format(pwc$date, "%Y-%m-%d %H") ## nearest hour
tmp <- pwc[which(!duplicated(pwc$date2)), ]
dates <- as.POSIXct(paste0(unique(pwc$date2), ":00:00"), tz = "GMT")
pwc <- data.frame(date = dates, pwc = tmp$pwc)
PWC <- TRUE
}
## average to hourly
if (hourly) {
met_data <-
quiet(openair::timeAverage(
met_data,
avg.time = "hour",
type = c("code", "station")
))
}
## add pwc back in
if (exists("pwc")) {
met_data <- left_join(met_data, pwc, by = "date")
}
## add precipitation - based on 12 HOUR averages, so work with hourly data
## spread out precipitation across each hour
## only do this if precipitation exists
if ("precip_12" %in% names(met_data) && hourly) {
## make new precip variable
met_data$precip <- NA
## id where there is 12 hour data
id <- which(!is.na(met_data$precip_12))
if (length(id) == 0L) {
return()
}
id <- id[id > 11] ## make sure we don't run off beginning
for (i in seq_along(id)) {
met_data$precip[(id[i] - 11):id[i]] <- met_data$precip_12[id[i]] / 12
}
}
# replace NaN with NA
met_data[] <- lapply(met_data, function(x) {
replace(x, is.nan(x), NA)
})
return(as_tibble(met_data))
}
|
/scratch/gouwar.j/cran-all/cranData/worldmet/R/metNOAA.R
|
#' Codes for weather types
#'
#' This data frame consists of the weather description codes used in the ISD. It
#' is not of general use to most users.
#'
#' @name weatherCodes
#' @docType data
#' @keywords datasets
#' @examples
#' ## basic structure
#' head(weatherCodes)
NULL
|
/scratch/gouwar.j/cran-all/cranData/worldmet/R/weatherCodes.R
|
#' @keywords internal
#'
#' @details This package contains functions to import surface meteorological
#' data from over 30,000 sites around the world. These data are curated by
#' NOAA as part of the Integrated Surface Database (ISD).
#'
#' If you access these data using the `worldmet` package please give full
#' acknowledgement to NOAA ISD. Users should also take a note of the usage
#' restrictions.
#'
#' These data work well with the `openair` package that has been developed to
#' analyse air pollution data.
#'
#' @references Search for NOAA ISD for more information.
#'
#' @seealso See <https://github.com/davidcarslaw/openair> for information on the
#' related `openair` package.
"_PACKAGE"
## usethis namespace: start
#' @importFrom foreach %dopar%
#' @importFrom magrittr %>%
#' @importFrom tibble tibble
## usethis namespace: end
NULL
|
/scratch/gouwar.j/cran-all/cranData/worldmet/R/worldmet-package.R
|
.onLoad <- function(...) {
## packageStartupMessage("\ttype citation(\"openair\") for how to cite openair")
utils::globalVariables(c("AA1", "AW1", "CIG", "DATE", "DEW", "ELEVATION", "GA1", "GA2", "GA3",
"LATITUDE", "LONGITUDE", "NAME",
"SLP", "STATION", "TMP", "VIS", "WND", "air_temp", "atmos_pres",
"ceil_hgt", "cl_1", "cl_1_height",
"cl_2", "cl_2_height", "cl_3", "cl_3_height", "description",
"dew_point", "latitude", "precip_raw",
"longitude", "precip", "visibility", "wd", "weatherCodes", "ws"))
}
|
/scratch/gouwar.j/cran-all/cranData/worldmet/R/zzz.R
|
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(worldmet)
## ----getmeta------------------------------------------------------------------
getMeta()
## ----getmetai, echo=FALSE, out.width="100%"-----------------------------------
worldmet::getMeta(returnMap = TRUE)
## ----getmeta2-----------------------------------------------------------------
getMeta(lat = 51.5, lon = 0)
## ----getmeta2i, echo=FALSE, out.width="100%"----------------------------------
worldmet::getMeta(lat = 51.5, lon = 0, returnMap = TRUE)
|
/scratch/gouwar.j/cran-all/cranData/worldmet/inst/doc/find-sites.R
|
---
title: "Locating Met Sites"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Locating Met Sites}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(worldmet)
```
# Show all sites
The function to use to find which sites are available is `getMeta()`. While this function can be run using different options, it can be easiest to run it without any options. This produces a map of all the available sites, which can be quickly accessed.
Click to expand the markers until you find the site of interest (shown by a blue marker). This reveals some site information including the site name and the start and end date of available data. The most important information revealed in the marker is the code, which is used to access the data.
```{r getmeta}
getMeta()
```
```{r getmetai, echo=FALSE, out.width="100%"}
worldmet::getMeta(returnMap = TRUE)
```
When using `getMeta()` it will probably be useful to read the information into a data frame. Note that `plot = FALSE` stops the function from providing the map.
```r
met_info <- getMeta(plot = FALSE)
```
# Search based on latitude and longitude
Often, one has an idea of the region in which a site is of interest. For example, if the interest was in sites close to London, the latitude and longitude can be supplied and a search is carried out of the 10 nearest sites to that location. There is also an option `n` that can be used to change the number of nearest sites shown (an example is given below the map). If a lat/lon is provided, clicking on the blue marker will show the approximate distance between the site and the search coordinates.
```{r getmeta2}
getMeta(lat = 51.5, lon = 0)
```
```{r getmeta2i, echo=FALSE, out.width="100%"}
worldmet::getMeta(lat = 51.5, lon = 0, returnMap = TRUE)
```
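For example, to widen the search to the 20 nearest sites (illustrative values, not run here):

```r
getMeta(lat = 51.5, lon = 0, n = 20)
```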
|
/scratch/gouwar.j/cran-all/cranData/worldmet/inst/doc/find-sites.Rmd
|
---
title: "Locating Met Sites"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Locating Met Sites}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(worldmet)
```
# Show all sites
The function to use to find which sites are available is `getMeta()`. While this function can be run using different options, it can be easiest to run it without any options. This produces a map of all the available sites, which can be quickly accessed.
Click to expand the markers until you find the site of interest (shown by a blue marker). This reveals some site information including the site name and the start and end date of available data. The most important information revealed in the marker is the code, which is used to access the data.
```{r getmeta}
getMeta()
```
```{r getmetai, echo=FALSE, out.width="100%"}
worldmet::getMeta(returnMap = TRUE)
```
When using `getMeta()` it will probably be useful to read the information into a data frame. Note that `plot = FALSE` stops the function from providing the map.
```r
met_info <- getMeta(plot = FALSE)
```
# Search based on latitude and longitude
Often, one has an idea of the region in which a site is of interest. For example, if the interest was in sites close to London, the latitude and longitude can be supplied and a search is carried out of the 10 nearest sites to that location. There is also an option `n` that can be used to change the number of nearest sites shown (an example is given below the map). If a lat/lon is provided, clicking on the blue marker will show the approximate distance between the site and the search coordinates.
```{r getmeta2}
getMeta(lat = 51.5, lon = 0)
```
```{r getmeta2i, echo=FALSE, out.width="100%"}
worldmet::getMeta(lat = 51.5, lon = 0, returnMap = TRUE)
```
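For example, to widen the search to the 20 nearest sites (illustrative values, not run here):

```r
getMeta(lat = 51.5, lon = 0, n = 20)
```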
|
/scratch/gouwar.j/cran-all/cranData/worldmet/vignettes/find-sites.Rmd
|
#' Get AphiaIDs by attribute definition ID
#'
#' @export
#' @param id (numeric/integer) an attribute ID. For `wm_attr_aphia` it's
#' required and must be `length(id) == 1`, for `wm_attr_aphia_` it's
#' optional and can be `length(id) >= 1`
#' @param offset (integer) record to start at. default: 1
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_attr_aphia(id = 7)
#' wm_attr_aphia(id = 4)
#' wm_attr_aphia(id = 4, offset = 50)
#'
#' wm_attr_aphia_(id = c(7, 2))
#' }
wm_attr_aphia <- function(id, offset = 1, ...) {
assert(id, c("numeric", "integer"))
assert(offset, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaIDsByAttributeKeyID", id),
query = cc(list(offset = offset)), ...)
}
#' @export
#' @rdname wm_attr_aphia
wm_attr_aphia_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_attr_aphia, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_attr_aphia.R
|
#' Get attributes grouped by a CategoryID
#'
#' @export
#' @param id (numeric/integer) a CategoryID. For `wm_attr_category` it's
#' required and must be `length(id) == 1`, for `wm_attr_category_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_attr_category(id = 7)
#' wm_attr_category(id = 2)
#'
#' wm_attr_category_(id = c(7, 2))
#' }
wm_attr_category <- function(id, ...) {
assert(id, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaAttributeValuesByCategoryID", id), ...)
}
#' @export
#' @rdname wm_attr_category
wm_attr_category_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_attr_category, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_attr_category.R
|
#' Get attribute data by AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_attr_data` it's
#' required and must be `length(id) == 1`, for `wm_attr_data_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @param include_inherited (logical) Include attributes inherited from
#' its parent taxon. Default: `FALSE`
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_attr_data(id = 127160)
#' wm_attr_data(id = 126436)
#'
#' wm_attr_data_(id = c(127160, 126436))
#' }
wm_attr_data <- function(id, include_inherited = FALSE, ...) {
assert(id, c("numeric", "integer"))
assert(include_inherited, "logical")
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaAttributesByAphiaID", id),
query = cc(list(include_inherited = include_inherited)), ...)
}
#' @export
#' @rdname wm_attr_data
wm_attr_data_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_attr_data, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_attr_data.R
|
#' Get attribute definition by ID
#'
#' @export
#' @param id (numeric/integer) an attribute ID. For `wm_attr_def` it's
#' required and must be `length(id) == 1`, for `wm_attr_def_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @param include_inherited (logical) Include attributes inherited from
#' its parent taxon. Default: `FALSE`
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_attr_def(id = 1)
#' wm_attr_def(id = 4)
#' wm_attr_def(id = 4, include_inherited = TRUE)
#'
#' wm_attr_def_(id = c(4, 1))
#' }
wm_attr_def <- function(id, include_inherited = FALSE, ...) {
assert(id, c("numeric", "integer"))
assert(include_inherited, "logical")
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaAttributeKeysByID", id),
query = cc(list(include_inherited = include_inherited)), ...)
}
#' @export
#' @rdname wm_attr_def
wm_attr_def_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_attr_def, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_attr_def.R
|
#' Get children for an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_children` it's
#' required and must be `length(id) == 1`, for `wm_children_` it's
#' optional and can be `length(id) >= 1`
#' @param marine_only (logical) marine only or not. default: `TRUE`
#' @param offset (integer) record to start at. default: 1
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_children(343613)
#' wm_children(id = 105706)
#' wm_children(id = 105706, FALSE)
#' wm_children(id = 105706, offset = 5)
#'
#' # plural version, via id or name
#' wm_children_(id = c(105706, 343613))
#' wm_children_(name = c('Mesodesma', 'Leucophaeus'))
#' }
wm_children <- function(id, marine_only = TRUE, offset = 1, ...) {
assert(id, c("numeric", "integer"))
assert(marine_only, "logical")
assert(offset, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaChildrenByAphiaID", id),
query = cc(list(marine_only = as_log(marine_only),
offset = offset)), ...)
}
#' @export
#' @rdname wm_children
wm_children_ <- function(id = NULL, name = NULL, marine_only = TRUE,
offset = 1, ...) {
id <- id_name(id, name)
run_bind(id, wm_children, marine_only = marine_only, offset = offset,
on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_children.R
|
#' Get classification for an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_classification` it's
#' required and must be `length(id) == 1`, for `wm_classification_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_classification(id = 105706)
#' wm_classification(id = 126436)
#'
#' wm_classification(254967)
#' wm_classification(344089)
#'
#' # plural version, via id or name
#' wm_classification_(id = c(254967, 344089))
#' wm_classification_(name = c('Platanista gangetica', 'Leucophaeus scoresbii'))
#' }
wm_classification <- function(id, ...) {
assert(id, c("numeric", "integer"))
assert_len(id, 1)
res <- wm_GET(file.path(wm_base(), "AphiaClassificationByAphiaID", id), ...)
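# the API nests each level of the classification inside a "child" element;
# walk down the nesting, collecting one row per rank until no child remains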
done <- FALSE
out <- list()
it <- 0
while (!done) {
it <- it + 1
ch <- res$child
if (!is.null(ch)) {
out[[it]] <- ch[!names(ch) %in% "child"]
res <- list(child = ch$child)
} else {
done <- TRUE
}
}
dat <- Filter(function(x) !is.null(x) && length(x) != 0, out)
dat <- do.call("rbind.data.frame", dat)
dat$rank <- as.character(dat$rank)
dat$scientificname <- as.character(dat$scientificname)
if (NROW(dat) == 0) dat <- NULL
tibble::as_tibble(dat)
}
#' @export
#' @rdname wm_classification
wm_classification_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_classification, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_classification.R
|
#' Get vernacular names from an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_common_id` it's
#' required and must be `length(id) == 1`, for `wm_common_id_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_common_id(id = 105706)
#' wm_common_id(id = 156806)
#' wm_common_id(id = 397065)
#'
#' wm_common_id_(id = c(105706, 156806, 397065))
#' nms <- c("Rhincodontidae", "Mesodesma deauratum", "Cryptomya californica")
#' wm_common_id_(name = nms)
#' }
wm_common_id <- function(id, ...) {
assert(id, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaVernacularsByAphiaID", id), ...)
}
#' @export
#' @rdname wm_common_id
wm_common_id_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_common_id, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_common_id.R
|
#' Get distribution data by AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_distribution` it's
#' required and must be `length(id) == 1`, for `wm_distribution_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. when using underscore method, outputs from
#' each input are binded together, but can be split by `id` column
#' @examples \dontrun{
#' wm_distribution(id = 156806)
#' wm_distribution(id = 126436)
#'
#' wm_distribution_(id = c(156806, 126436))
#' }
wm_distribution <- function(id, ...) {
assert(id, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaDistributionsByAphiaID", id), ...)
}
#' @export
#' @rdname wm_distribution
wm_distribution_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_distribution, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_distribution.R
|
#' Get an external ID via an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_external` it's
#' required and must be `length(id) == 1`, for `wm_external_` it's
#' optional and can be `length(id) >= 1`
#' @param type (character) the type of external id. one of: tsn, bold,
#' dyntaxa, eol, fishbase, iucn, lsid, ncbi, gisd. default: tsn
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return An integer that is the ID. When using underscore method,
#' a list, named by the input IDs
#' @examples \dontrun{
#' # by default, get a TSN (an ITIS code)
#' wm_external(id = 1080)
#'
#' ## get many
#' wm_external_(id = c(1080, 126436))
#'
#' # BOLD code
#' wm_external(id = 278468, type = "bold")
#'
#' # NCBI code
#' wm_external(id = 278468, type = "ncbi")
#'
#' # fishbase code
#' wm_external(id = 278468, type = "fishbase")
#'
#' # curl options
#' library(crul)
#' wm_external(id = 105706, verbose = TRUE)
#' }
wm_external <- function(id, type = "tsn", ...) {
assert(id, c("numeric", "integer"))
assert(type, "character")
assert_len(id, 1)
as.integer(wm_GET(
file.path(wm_base(), "AphiaExternalIDByAphiaID", id),
query = cc(list(type = type)), ...))
}
#' @export
#' @rdname wm_external
wm_external_ <- function(id = NULL, name = NULL, type = "tsn", ...) {
id <- id_name(id, name)
run_c(id, wm_external, type = type, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_externalid.R
|
#' Get taxonomic name for an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID, required. For `wm_id2name`
#' must be `length(id) == 1`, but for `wm_id2name_` can be
#' `length(id) >= 1`
#' @template curl
#' @template plural
#' @return A character string that is the taxonomic name. When using underscore
#' method, a list, named by the input IDs
#' @examples \dontrun{
#' wm_id2name(id = 105706)
#' wm_id2name_(id = c(105706, 126436))
#' }
wm_id2name <- function(id, ...) {
assert(id, c("numeric", "integer"))
wm_GET(file.path(wm_base(), "AphiaNameByAphiaID", id), ...)
}
#' @export
#' @rdname wm_id2name
wm_id2name_ <- function(id, ...) {
id <- id_name(id, NULL)
run_c(id, wm_id2name, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_id2name.R
|
#' Get AphiaID from a taxonomic name
#'
#' @export
#' @param name (character) a taxonomic name, required. For
#' `wm_name2id` must be `length(name) == 1`, but for `wm_name2id_`
#' can be `length(name) >= 1`
#' @template curl
#' @template plural
#' @return An integer that is the AphiaID. When using underscore method,
#' a list, named by the input names
#' @examples \dontrun{
#' wm_name2id(name = "Rhincodon")
#' wm_name2id_(name = c("Rhincodon", "Gadus morhua"))
#' }
wm_name2id <- function(name, ...) {
assert(name, "character")
assert_len(name, 1)
wm_GET(file.path(wm_base(), "AphiaIDByName", name), ...)
}
#' @export
#' @rdname wm_name2id
wm_name2id_ <- function(name, ...) {
run_c(name, wm_name2id, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_name2id.R
|
#' Get taxonomic ranks by their identifier
#'
#' @export
#' @name wm_ranks
#' @param rank_id (numeric/integer) a rank identifier. length==1
#' @param rank_name (character) a rank name. length==1
#' @param id an AphiaID. length==1
#' @param offset (integer) record to start at. default: 1
#' @template curl
#' @return A tibble/data.frame
#' @examples \dontrun{
#' wm_ranks_id(220)
#' wm_ranks_id(180)
#' wm_ranks_id(180, id = 4)
#'
#' wm_ranks_name("genus")
#' wm_ranks_name("genus", id = 4)
#' }
#' @export
#' @rdname wm_ranks
wm_ranks_id <- function(rank_id, id = NULL, offset = 1, ...) {
assert(rank_id, c("numeric", "integer"))
assert(id, c("numeric", "integer"))
assert_len(rank_id, 1)
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaTaxonRanksByID", rank_id),
    query = cc(list(AphiaID = id, offset = offset)), ...)
}
#' @export
#' @rdname wm_ranks
wm_ranks_name <- function(rank_name, id = NULL, offset = 1, ...) {
assert(rank_name, c("character"))
assert(id, c("numeric", "integer"))
assert_len(rank_name, 1)
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaTaxonRanksByName", rank_name),
    query = cc(list(AphiaID = id, offset = offset)), ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_ranks.R
|
#' Get complete AphiaRecord for an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_record` it's
#' required and must be `length(id) == 1`, for `wm_record_` it's
#' optional and can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A named list. When using underscore method, each output is named
#' by the input ID, and can be separated by the list names
#' @note `wm_record_` is deprecated; `wm_record` now accepts more than one id
#' @examples \dontrun{
#' wm_record(id = 105706)
#' wm_record(id = c(105706, 126436))
#' wm_record_(id = c(105706, 126436))
#' }
wm_record <- function(id, ...) {
assert(id, c("numeric", "integer"))
args <- stats::setNames(as.list(id), rep('aphiaids[]', length(id)))
wm_GET(file.path(wm_base(), "AphiaRecordsByAphiaIDs"),
query = args, ...)
}
#' @export
#' @rdname wm_record
wm_record_ <- function(id = NULL, name = NULL, ...) {
.Deprecated(msg="wm_record_ is deprecated; wm_record now does >1 id")
id <- id_name(id, name)
run_c(id, wm_record, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_record.R
|
#' Get record by external ID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_record_by_external`
#' it's required and must be `length(id) == 1`, for
#' `wm_record_by_external_` it's optional and can be `length(id) >= 1`
#' @param type (character) the type of external id. one of: tsn, bold,
#' dyntaxa, eol, fishbase, iucn, lsid, ncbi, gisd. default: tsn
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A named list. When using underscore method, each output is named
#' by the input ID, and can be separated by the list names
#' @examples \dontrun{
#' wm_record_by_external(id = 85257)
#' wm_record_by_external(id = 159854)
#'
#' wm_record_by_external_(id = c(85257, 159854))
#' }
wm_record_by_external <- function(id, type = "tsn", ...) {
assert(id, c("numeric", "integer"))
assert(type, "character")
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaRecordByExternalID", id),
query = cc(list(type = type)), ...)
}
#' @export
#' @rdname wm_record_by_external
wm_record_by_external_ <- function(id = NULL, name = NULL, type = "tsn", ...) {
id <- id_name(id, name)
run_c(id, wm_record_by_external, type = type, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_record_by_external.R
|
#' Get records by vernacular name, optional fuzzy matching
#'
#' @export
#' @param name (character) a species common name. required. For
#' `wm_records_common` must be `length(name) == 1`; for `wm_records_common_`
#' can be `length(name) >= 1`
#' @param fuzzy (logical) fuzzy search. default: `FALSE`
#' @param offset (integer) record to start at. default: 1
#' @template curl
#' @template plural
#' @return A tibble/data.frame. When using the underscore method, outputs from
#' each input are bound together, but can be split by the `id` column
#' @examples \dontrun{
#' wm_records_common(name = 'dolphin')
#' wm_records_common(name = 'clam')
#'
#' wm_records_common_(name = c('dolphin', 'clam'))
#'
#' wm_records_common(name = 'dolphin', fuzzy = TRUE)
#' wm_records_common(name = 'clam', fuzzy = TRUE, offset = 5)
#' }
wm_records_common <- function(name, fuzzy = FALSE, offset = 1,
...) {
assert(name, "character")
assert(fuzzy, "logical")
assert(offset, c('numeric', 'integer'))
assert_len(name, 1)
if (length(name) > 1) stop("'name' must be of length 1", call. = FALSE)
args <- cc(list(
like = as_log(fuzzy),
offset = offset
))
wm_GET(file.path(wm_base(), "AphiaRecordsByVernacular", name),
query = args, ...)
}
#' @export
#' @rdname wm_records_common
wm_records_common_ <- function(name, fuzzy = FALSE, offset = 1, ...) {
run_bind(name, wm_records_common, fuzzy = fuzzy, offset = offset,
on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_records_common.R
|
#' Get records by date
#'
#' @export
#' @param start_date (character/Date) start date. required.
#' @param end_date (character/Date) end date. optional
#' @param marine_only (logical) marine only or not. default: `TRUE`
#' @param offset (integer) record to start at. default: 1
#' @template curl
#' @return A tibble/data.frame
#' @examples \dontrun{
#' a_date <- format(Sys.Date() - 1, "%Y-%m-%dT%H:%M:%S+00:00")
#' wm_records_date(a_date)
#' }
wm_records_date <- function(start_date, end_date = NULL, marine_only = TRUE,
offset = 1, ...) {
assert(start_date, c("character", "Date"))
assert(end_date, c("character", "Date"))
assert(marine_only, "logical")
assert(offset, c('numeric', 'integer'))
wm_GET(file.path(wm_base(), "AphiaRecordsByDate"),
query = cc(list(
startdate = start_date, enddate = end_date,
marine_only = as_log(marine_only), offset = offset)), ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_records_date.R
|
#' Get records by single name, optional fuzzy matching
#'
#' @export
#' @param name (character) a taxonomic name, required.
#' @param fuzzy (logical) fuzzy search. default: `TRUE`
#' @param marine_only (logical) marine only or not. default: `TRUE`
#' @param offset (integer) record to start at. default: 1
#' @template curl
#' @note there is no underscore method like other functions in this package
#' as there is already a plural version: [wm_records_names()]
#' @return A tibble/data.frame
#' @examples \dontrun{
#' wm_records_name(name = 'Leucophaeus')
#' wm_records_name(name = 'Leucophaeus', fuzzy = FALSE)
#' wm_records_name(name = 'Leucophaeus', marine_only = FALSE)
#' wm_records_name(name = 'Platanista', marine_only = FALSE)
#' wm_records_name(name = 'Platanista', marine_only = FALSE, offset = 5)
#' }
wm_records_name <- function(name, fuzzy = TRUE, marine_only = TRUE, offset = 1,
...) {
assert(name, "character")
assert(fuzzy, "logical")
assert(marine_only, "logical")
assert(offset, c('numeric', 'integer'))
if (length(name) > 1) stop("'name' must be of length 1", call. = FALSE)
args <- cc(list(
like = as_log(fuzzy),
marine_only = as_log(marine_only),
offset = offset
))
wm_GET(file.path(wm_base(), "AphiaRecordsByName", name),
query = args, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_records_name.R
|
#' Get records for one or more taxonomic name(s)
#'
#' @export
#' @param name (character) one or more taxonomic names. required.
#' @param marine_only (logical) marine only or not. default: `TRUE`
#' @template curl
#' @note there is no underscore method like other functions in this package
#' as this is the plural version for [wm_records_name()]
#' @return A list of tibble's/data.frame's, one for each of the input names
#' @examples \dontrun{
#' wm_records_names(name = 'Leucophaeus scoresbii')
#' wm_records_names(name = c('Leucophaeus scoresbii', 'Coryphaena'))
#' }
wm_records_names <- function(name, marine_only = TRUE, ...) {
assert(name, "character")
assert(marine_only, "logical")
args <- cc(list(marine_only = as_log(marine_only)))
args <- c(args,
stats::setNames(as.list(name),
rep('scientificnames[]',
length(name))))
result <- wm_GET(file.path(wm_base(), "AphiaRecordsByNames"),
query = args, ...)
if (identical(result, tibble::tibble())) {
    rep(list(tibble::tibble()), length(name))
} else {
result
}
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_records_names.R
|
#' Get AphiaRecords for a given taxonRankID
#'
#' @export
#' @param rank_id (numeric/integer) a rank id
#' @param id (numeric/integer) a single AphiaID
#' @param offset (integer) record to start at. default: 1
#' @template curl
#' @template plural
#' @return A tibble/data.frame. When using the underscore method, outputs from
#' each input are bound together, but can be split by the `id` column
#' @examples \dontrun{
#' wm_records_rank(rank_id = 180, id = 106776)
#' wm_records_rank(rank_id = 180, id = 106776, offset = 50)
#' }
wm_records_rank <- function(rank_id, id = NULL, offset = 1, ...) {
assert(rank_id, c("numeric", "integer"))
assert(id, c("numeric", "integer"))
assert(offset, c("numeric", "integer"))
assert_len(rank_id, 1)
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaRecordsByTaxonRankID", rank_id),
query = cc(list(belongsTo = id, offset = offset)), ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_records_rank.R
|
#' Get records for one or more taxonomic name(s) using
#' the TAXAMATCH fuzzy matching algorithm
#'
#' @export
#' @param name (character) taxon name. required.
#' @param marine_only (logical) marine only or not. default: `TRUE`
#' @template curl
#' @note there is no underscore method like other functions in this package
#' as this function already accepts many names
#' @return A list of tibble's/data.frame's, one for each of the input names
#' @examples \dontrun{
#' wm_records_taxamatch(name = 'Leucophaeus')
#' wm_records_taxamatch(name = c('Leucophaeus', 'Coryphaena'))
#' }
wm_records_taxamatch <- function(name, marine_only = TRUE, ...) {
assert(name, "character")
assert(marine_only, "logical")
args <- cc(list(marine_only = as_log(marine_only)))
args <- c(args,
stats::setNames(as.list(name),
rep('scientificnames[]',
length(name))))
result <- wm_GET(file.path(wm_base(), "AphiaRecordsByMatchNames"),
query = args, ...)
if (identical(result, tibble::tibble())) {
    rep(list(tibble::tibble()), length(name))
} else {
result
}
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_records_taxamatch.R
|
#' Get sources for an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_sources` it's required
#' and must be `length(id) == 1`, for `wm_sources_` it's optional and
#' can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @template curl
#' @template plural
#' @return A tibble/data.frame. When using the underscore method, outputs from
#' each input are bound together, but can be split by the `id` column
#' @examples \dontrun{
#' wm_sources(id = 105706)
#' wm_sources_(id = 105706)
#' wm_sources_(id = c(105706, 126436))
#' wm_sources_(name = c("Rhincodontidae", "Gadus morhua"))
#' }
wm_sources <- function(id, ...) {
assert(id, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaSourcesByAphiaID", id), ...)
}
#' @export
#' @rdname wm_sources
wm_sources_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_sources, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_sources.R
|
#' Get synonyms for an AphiaID
#'
#' @export
#' @param id (numeric/integer) an AphiaID. For `wm_synonyms` it's required
#' and must be `length(id) == 1`, for `wm_synonyms_` it's optional and
#' can be `length(id) >= 1`
#' @param name (character) one or more taxonomic names. optional
#' @param offset (integer) record to start at. default: 1
#' @template curl
#' @template plural
#' @return A tibble/data.frame. When using the underscore method, outputs from
#' each input are bound together, but can be split by the `id` column
#' @examples \dontrun{
#' wm_synonyms(id = 105706)
#' wm_synonyms_(id = 105706)
#' wm_synonyms(id = 126436)
#' wm_synonyms(id = 126436, offset = 10)
#' wm_synonyms_(id = c(105706, 126436))
#' }
wm_synonyms <- function(id, offset = 1, ...) {
assert(id, c("numeric", "integer"))
assert(offset, c("numeric", "integer"))
assert_len(id, 1)
wm_GET(file.path(wm_base(), "AphiaSynonymsByAphiaID", id),
query = cc(list(offset = offset)), ...)
}
#' @export
#' @rdname wm_synonyms
wm_synonyms_ <- function(id = NULL, name = NULL, ...) {
id <- id_name(id, name)
run_bind(id, wm_synonyms, on_error = warning, ...)
}
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/wm_synonyms.R
|
#' @title worrms
#' @description World Register of Marine Species Client
#' @name worrms-package
#' @aliases worrms
#' @docType package
#' @keywords package
#' @author Scott Chamberlain \email{myrmecocystus@@gmail.com}
#'
#' @section Fail behavior:
#' The WoRMS REST API doesn't have sophisticated error messaging, so
#' most errors will result in a `(204) - No Content` or a
#' `(400) - Bad Request` response.
#'
#' Because WoRMS doesn't do comprehensive error reporting, we do a fair
#' amount of checking user inputs to help prevent errors that will be
#' meaningless to the user. Let us know if we can improve on this.
NULL
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/worrms-package.R
|
wm_GET <- function(url, query = list(), on_error = stop, ...) {
cli <- crul::HttpClient$new(url = url, opts = list(...))
temp <- cli$get(query = query)
if (temp$status_code > 201) {
on_error(sprintf("(%s) %s - %s", temp$status_code, temp$status_http()$message,
basename(url)), call. = FALSE)
}
if (identical(on_error, warning) && temp$status_code == 204) {
return(tibble::tibble())
}
tmp <- jsonlite::fromJSON(temp$parse("UTF-8"), flatten = TRUE)
if (inherits(tmp, "data.frame")) {
tibble::as_tibble(tmp)
} else if (inherits(tmp, "list")) {
if (all(sapply(tmp, class) == "data.frame")) {
lapply(tmp, tibble::as_tibble)
} else {
tmp
}
} else {
tmp
}
}
wm_base <- function() "https://www.marinespecies.org/rest"
# drop NULL elements (used when building query parameter lists)
cc <- function(x) Filter(Negate(is.null), x)
# drop NULL and zero-length/zero-row elements
ccn <- function(x) {
  Filter(function(z) !is.null(z) && NROW(z) > 0, x)
}
# default operator: fall back to `y` when `x` is NULL or empty
`%||%` <- function(x, y) if (is.null(x) || length(x) == 0) y else x
# convert an R logical to the lowercase string the WoRMS API expects
as_log <- function(x) {
  if (is.null(x)) {
    x
  } else {
    if (x) "true" else "false"
  }
}
assert <- function(x, y) {
if (!is.null(x)) {
if (!class(x) %in% y) {
stop(deparse(substitute(x)), " must be of class ",
paste0(y, collapse = ", "), call. = FALSE)
}
}
}
assert_len <- function(x, y) {
if (!is.null(x)) {
if (length(x) != y) {
stop(deparse(substitute(x)), " must be of length ", y,
call. = FALSE)
}
}
}
br <- function(x) {
(x <- data.table::setDF(
data.table::rbindlist(x, use.names = TRUE, fill = TRUE, idcol = "id")))
}
run_c <- function(id, fun, ...) {
ccn(stats::setNames(lapply(id, fun, ...), id))
}
run_bind <- function(id, fun, ...) {
tibble::as_tibble(br(ccn(
stats::setNames(lapply(id, fun, ...), id)
)))
}
id_name <- function(id, name) {
if (!xor(is.null(id), is.null(name))) stop("use only one of 'id' or 'name'")
if (!is.null(name)) {
unlist(lapply(name, safe_wm_name2id), FALSE)
} else {
id
}
}
safe_wm_name2id <- function(x, ...) {
res <- tryCatch(wm_name2id(x, ...),
error = function(e) e,
warning = function(w) w
)
if (inherits(res, "error") || inherits(res, "warning")) {
warning(sprintf("%s: %s", x, res$message))
return(NULL)
} else {
return(res)
}
}
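# Note (illustrative, not exported): the plural `wm_*_` functions compose the
# helpers above. `id_name()` resolves either `id` or `name` input (names are
# converted to AphiaIDs via `safe_wm_name2id()`); `run_c()` then returns a
# named list and `run_bind()` row-binds tibbles, keeping the originating
# input in an `id` column. For example:
#   run_c(c(105706, 126436), wm_id2name, on_error = warning)
#   run_bind(c(105706, 126436), wm_sources, on_error = warning)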
|
/scratch/gouwar.j/cran-all/cranData/worrms/R/zzz.R
|
---
title: Introduction to worrms
author: Scott Chamberlain
date: "2020-07-07"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to worrms}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
`worrms` is an R client for the World Register of Marine Species (https://www.marinespecies.org/)
See the taxize book (https://taxize.dev) for taxonomically focused work in this and similar packages.
## Install
Stable version from CRAN
```r
install.packages("worrms")
```
Development version from GitHub
```r
install.packages("remotes")
remotes::install_github("ropensci/worrms")
```
```r
library("worrms")
```
## Get records
WoRMS 'records' are taxa, not specimen occurrences or something else.
by date
```r
wm_records_date('2016-12-23T05:59:45+00:00')
#> # A tibble: 50 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID
#> <int> <chr> <chr> <chr> <chr> <lgl> <int>
#> 1 894302 http… Paleopolymorp… Vasilenk… accep… NA 220
#> 2 894296 http… Parapachyphlo… Miklukho… accep… NA 220
#> 3 894298 http… Parapachyphlo… Miklukho… accep… NA 220
#> 4 894301 http… Ovulina radia… Seguenza… accep… NA 220
#> 5 894299 http… Parafissurina… Petri, 1… accep… NA 220
#> 6 894297 http… Parapachyphlo… Miklukho… accep… NA 220
#> 7 919684 http… Flintina serr… Cuvillie… accep… NA 220
#> 8 906465 http… Verneuilinoid… Bartenst… accep… NA 220
#> 9 903640 http… Quinqueloculi… Hussey, … accep… NA 220
#> 10 917580 http… Orbitolina ra… Sahni & … accep… NA 220
#> # … with 40 more rows, and 20 more variables: rank <chr>, valid_AphiaID <int>,
#> # valid_name <chr>, valid_authority <chr>, parentNameUsageID <int>,
#> # kingdom <chr>, phylum <chr>, class <chr>, order <chr>, family <chr>,
#> # genus <chr>, citation <chr>, lsid <chr>, isMarine <int>, isBrackish <lgl>,
#> # isFreshwater <lgl>, isTerrestrial <lgl>, isExtinct <int>, match_type <chr>,
#> # modified <chr>
```
by a taxonomic name
```r
wm_records_name(name = 'Leucophaeus scoresbii')
#> # A tibble: 1 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID rank
#> <int> <chr> <chr> <chr> <chr> <lgl> <int> <chr>
#> 1 344089 http… Leucophaeus s… Traill, … accep… NA 220 Spec…
#> # … with 19 more variables: valid_AphiaID <int>, valid_name <chr>,
#> # valid_authority <chr>, parentNameUsageID <int>, kingdom <chr>,
#> # phylum <chr>, class <chr>, order <chr>, family <chr>, genus <chr>,
#> # citation <chr>, lsid <chr>, isMarine <int>, isBrackish <lgl>,
#> # isFreshwater <lgl>, isTerrestrial <lgl>, isExtinct <lgl>, match_type <chr>,
#> # modified <chr>
```
by many names
```r
wm_records_names(name = c('Leucophaeus scoresbii', 'Coryphaena'))
#> [[1]]
#> # A tibble: 1 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID rank
#> <int> <chr> <chr> <chr> <chr> <lgl> <int> <chr>
#> 1 344089 http… Leucophaeus s… Traill, … accep… NA 220 Spec…
#> # … with 19 more variables: valid_AphiaID <int>, valid_name <chr>,
#> # valid_authority <chr>, parentNameUsageID <int>, kingdom <chr>,
#> # phylum <chr>, class <chr>, order <chr>, family <chr>, genus <chr>,
#> # citation <chr>, lsid <chr>, isMarine <int>, isBrackish <lgl>,
#> # isFreshwater <lgl>, isTerrestrial <lgl>, isExtinct <lgl>, match_type <chr>,
#> # modified <chr>
#>
#> [[2]]
#> # A tibble: 2 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID rank
#> <int> <chr> <chr> <chr> <chr> <chr> <int> <chr>
#> 1 125960 http… Coryphaena Linnaeus… accep… <NA> 180 Genus
#> 2 843430 <NA> <NA> <NA> quara… synonym NA <NA>
#> # … with 19 more variables: valid_AphiaID <int>, valid_name <chr>,
#> # valid_authority <chr>, parentNameUsageID <int>, kingdom <chr>,
#> # phylum <chr>, class <chr>, order <chr>, family <chr>, genus <chr>,
#> # citation <chr>, lsid <chr>, isMarine <int>, isBrackish <int>,
#> # isFreshwater <int>, isTerrestrial <int>, isExtinct <lgl>, match_type <chr>,
#> # modified <chr>
```
by common name
```r
wm_records_common(name = 'clam')
#> # A tibble: 4 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID rank
#> <int> <chr> <chr> <chr> <chr> <lgl> <int> <chr>
#> 1 141919 http… Mercenaria me… (Linnaeu… accep… NA 220 Spec…
#> 2 140431 http… Mya truncata Linnaeus… accep… NA 220 Spec…
#> 3 141936 http… Venus verruco… Linnaeus… accep… NA 220 Spec…
#> 4 575771 http… Verpa penis (Linnaeu… accep… NA 220 Spec…
#> # … with 19 more variables: valid_AphiaID <int>, valid_name <chr>,
#> # valid_authority <chr>, parentNameUsageID <int>, kingdom <chr>,
#> # phylum <chr>, class <chr>, order <chr>, family <chr>, genus <chr>,
#> # citation <chr>, lsid <chr>, isMarine <int>, isBrackish <lgl>,
#> # isFreshwater <lgl>, isTerrestrial <lgl>, isExtinct <lgl>, match_type <chr>,
#> # modified <chr>
```
using the TAXAMATCH algorithm
```r
wm_records_taxamatch(name = 'Leucophaeus scoresbii')
#> [[1]]
#> # A tibble: 1 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID rank
#> <int> <chr> <chr> <chr> <chr> <lgl> <int> <chr>
#> 1 344089 http… Leucophaeus s… Traill, … accep… NA 220 Spec…
#> # … with 19 more variables: valid_AphiaID <int>, valid_name <chr>,
#> # valid_authority <chr>, parentNameUsageID <int>, kingdom <chr>,
#> # phylum <chr>, class <chr>, order <chr>, family <chr>, genus <chr>,
#> # citation <chr>, lsid <chr>, isMarine <int>, isBrackish <lgl>,
#> # isFreshwater <lgl>, isTerrestrial <lgl>, isExtinct <lgl>, match_type <chr>,
#> # modified <chr>
```
## APHIA ID <--> name
```r
wm_name2id(name = "Rhincodon")
#> [1] 105749
```
```r
wm_id2name(id = 105706)
#> [1] "Rhincodontidae"
```
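Both lookups have plural underscore variants that take several inputs at once and return results named by the inputs (calls mirror the function documentation; output omitted here):
```r
wm_id2name_(id = c(105706, 126436))
wm_name2id_(name = c("Rhincodon", "Gadus morhua"))
```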
## Get an external ID from an AphiaID
```r
wm_external(id = 1080)
#> [1] 85257
wm_external(id = 105706)
#> [1] 159854
```
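By default `wm_external()` returns an ITIS TSN; other identifier systems can be requested with `type` (e.g. bold, ncbi, fishbase), as in the function documentation (output omitted here):
```r
wm_external(id = 278468, type = "bold")
wm_external(id = 278468, type = "ncbi")
```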
## Get vernacular names from an AphiaID
```r
wm_common_id(id = 156806)
#> # A tibble: 2 x 3
#> vernacular language_code language
#> <chr> <chr> <chr>
#> 1 gilded wedgeclam eng English
#> 2 Turton's wedge clam eng English
```
## Children
Get direct taxonomic children for an AphiaID
```r
wm_children(id = 105706)
```
## Classification
Get classification for an AphiaID
```r
wm_classification(id = 105706)
#> # A tibble: 11 x 3
#> AphiaID rank scientificname
#> <int> <chr> <chr>
#> 1 2 Kingdom Animalia
#> 2 1821 Phylum Chordata
#> 3 146419 Subphylum Vertebrata
#> 4 1828 Superclass Gnathostomata
#> 5 11676 Superclass Pisces
#> 6 10193 Class Elasmobranchii
#> 7 368407 Subclass Neoselachii
#> 8 368408 Infraclass Selachii
#> 9 368410 Superorder Galeomorphi
#> 10 10208 Order Orectolobiformes
#> 11 105706 Family Rhincodontidae
```
## Synonyms
Get synonyms for an AphiaID
```r
wm_synonyms(id = 105706)
#> # A tibble: 1 x 27
#> AphiaID url scientificname authority status unacceptreason taxonRankID rank
#> <int> <chr> <chr> <chr> <chr> <chr> <int> <chr>
#> 1 148832 http… Rhiniodontidae Müller &… unacc… synonym 140 Fami…
#> # … with 19 more variables: valid_AphiaID <int>, valid_name <chr>,
#> # valid_authority <chr>, parentNameUsageID <int>, kingdom <chr>,
#> # phylum <chr>, class <chr>, order <chr>, family <chr>, genus <lgl>,
#> # citation <chr>, lsid <chr>, isMarine <lgl>, isBrackish <lgl>,
#> # isFreshwater <lgl>, isTerrestrial <lgl>, isExtinct <lgl>, match_type <chr>,
#> # modified <chr>
```
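Long synonym lists can be paged with `offset`, and the underscore variant handles several AphiaIDs at once (calls from the function documentation; output omitted here):
```r
wm_synonyms(id = 126436, offset = 10)
wm_synonyms_(id = c(105706, 126436))
```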
## attributes (i.e., traits)
attribute definition by ID
```r
wm_attr_def(id = 1)
#> # A tibble: 1 x 4
#> measurementTypeID measurementType CategoryID children
#> <int> <chr> <int> <list>
#> 1 1 IUCN Red List Category 1 <df[,4] [2 × 4]>
```
attribute data by AphiaID
```r
wm_attr_data(id = 127160)
#> # A tibble: 24 x 10
#> AphiaID measurementType… measurementType measurementValue source_id reference
#> <chr> <int> <chr> <chr> <int> <chr>
#> 1 127160 23 Species import… FAO-ASFIS: Spec… 197354 "FAO Fis…
#> 2 127160 23 Species import… MSFD indicators 197546 "Daniel …
#> 3 127160 23 Species import… MSFD indicators 197549 "ICES. 2…
#> 4 127160 23 Species import… MSFD indicators 197615 "List of…
#> 5 127160 23 Species import… MSFD indicators 197615 "List of…
#> 6 127160 23 Species import… MSFD indicators 197615 "List of…
#> 7 127160 23 Species import… MSFD indicators 197615 "List of…
#> 8 127160 23 Species import… MSFD indicators 197616 "List of…
#> 9 127160 23 Species import… MSFD indicators 197616 "List of…
#> 10 127160 23 Species import… MSFD indicators 197549 "ICES. 2…
#> # … with 14 more rows, and 4 more variables: qualitystatus <chr>,
#> # AphiaID_Inherited <int>, CategoryID <int>, children <list>
```
attributes grouped by a CategoryID
```r
wm_attr_category(id = 7)
#> # A tibble: 6 x 4
#> measurementValueID measurementValue measurementValueCode children
#> <int> <chr> <chr> <list>
#> 1 183 benthos <NA> <df[,4] [8 × 4]>
#> 2 184 plankton <NA> <df[,4] [7 × 4]>
#> 3 194 nekton <NA> <df[,0] [0 × 0]>
#> 4 323 neuston <NA> <df[,0] [0 × 0]>
#> 5 378 edaphofauna <NA> <df[,4] [2 × 4]>
#> 6 331 not applicable N/A <df[,0] [0 × 0]>
```
AphiaIDs by attribute definition ID
```r
wm_attr_aphia(id = 4)
#> # A tibble: 50 x 2
#> AphiaID Attributes
#> <int> <list>
#> 1 11 <df[,10] [1 × 10]>
#> 2 55 <df[,10] [2 × 10]>
#> 3 57 <df[,10] [2 × 10]>
#> 4 58 <df[,10] [2 × 10]>
#> 5 59 <df[,10] [2 × 10]>
#> 6 63 <df[,10] [2 × 10]>
#> 7 64 <df[,10] [2 × 10]>
#> 8 69 <df[,10] [2 × 10]>
#> 9 90 <df[,10] [2 × 10]>
#> 10 91 <df[,10] [2 × 10]>
#> # … with 40 more rows
```
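## Sources and distributions
Source references and recorded distributions are also available per AphiaID (calls from the function documentation; output omitted here):
```r
wm_sources(id = 105706)
wm_distribution(id = 156806)
```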
|
/scratch/gouwar.j/cran-all/cranData/worrms/inst/doc/worrms.Rmd
|
#' Add a word to a word matrix
#' @param x word matrix
#' @param word the word to add (character/scalar)
#' @param must_intersect force the added word to intersect with at least one existing word (logical/scalar)
#' @param shape_matrix a binary matrix generated from a call to \code{\link{image_matrix}}
#' @return word matrix with word added (if possible)
add_word <- function(x,
word = "finding",
must_intersect = FALSE,
shape_matrix = NULL) {
r <- nrow(x)
c <- ncol(x)
n <- nchar(word)
# update matrix with possible insertion points
# - if 'must_intersect' is true, prioritize the maximal crossing-points
# - account for a 'shape_matrix', if present
M <-
purrr::map2(
max_word_size(x, shape_matrix),
word_intersections(x, word),
function(a, b)
(!must_intersect & a >= n) | (b > 0 & b == max(b))
)
# choose randomly (uniform) from all possible positions
# - if 'must_intersect' is true, choose from (maximal) crossing-points only
v <- purrr::map_int(M, sum)
if (sum(v) == 0)
return(x)
d <- sample(names(M), 1, prob = v / sum(v))
pos <- which(M[[d]])
if (length(pos) > 1)
pos <- sample(pos, 1)
# add word at the chosen position
id <- switch(d,
down = pos:(pos + n - 1),
across = seq(pos, pos + (n - 1) * r, by = r)
)
sword <- stringr::str_split(word, "")[[1]]
x[id] <- sword
# store position (for drawing solution lines & crossword puzzles)
positions <- tibble::tibble(
word = word,
letters = sword,
id = id,
i = (id - 1) %% r + 1,
j = floor((id - 1) / r) + 1,
dir = d
)
if (is.null(attr(x, "positions"))) {
attr(x, "positions") <- positions
} else {
attr(x, "positions") <- dplyr::bind_rows(attr(x, "positions"), positions)
}
x
}
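# Illustrative sketch (add_word() is internal; assumes an empty board): each
# call tries to place one word and records its letter coordinates in the
# "positions" attribute used later for drawing solutions.
#   m <- matrix(NA, nrow = 10, ncol = 10)
#   m <- add_word(m, "finding")
#   attr(m, "positions")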
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/add_word.R
|
#' Create a puzzle book
#' @param input_file yaml file containing book details/contents
#' @param output_file full path to output file (with .pdf extension)
#' @param solutions include solutions (logical/scalar)
#'
#' @examples
#' \donttest{
#' # Create demo book included with package
#' book(output_file = "demo.pdf")
#' unlink("demo.pdf")
#' }
#' @return full path to the created puzzle book
#' @export
book <- function(input_file = system.file("book.yml", package = "worrrd"),
output_file = "book.pdf",
solutions = TRUE) {
# load content
p <- yaml::read_yaml(input_file)
# create .Rmd file
rmd_header <- glue::glue_data(p,
'---
title: {p$title}
author: {p$author}
output: pdf_document
params:
config_file: {input_file}
---
')
rmd_prelims <- '
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, include = TRUE, fig.width = 8, fig.height = 8)
```
```{r, include = FALSE}
library(worrrd)
library(ggplot2)
# load book contents
p <- yaml::read_yaml(params$config_file)
# parse r-code (words only)
p$pages <- purrr::map(
p$pages,
function(pgs) {
if (length(pgs$words) == 1 && stringr::str_detect(pgs$words, "\\`.*\\`"))
pgs$words <- eval(parse(text = stringr::str_replace_all(pgs$words, "\\`", "")))
pgs
}
)
# create all wordsearches
out <- purrr::map(p$pages, ~wordsearch(words = .x$words, r = p$rows, c = p$cols, image = .x$image))
```
'
rmd_cover <- glue::glue("
## A Message from the Author
Enjoy this awesome wordsearch book!
\\newpage
")
rmd_content <-
purrr::map_chr(
1:length(p$pages),
function(i) {
glue::glue(.open = "..", .close = "..",
'# `r p$pages[[..i..]]$name`
```{r}
plot(out[[..i..]], solution = FALSE)
```
\\newpage
'
)
})
  if (isTRUE(solutions)) {
rmd_solutions <-
purrr::map_chr(
1:length(p$pages),
function(i) {
glue::glue(.open = "..", .close = "..",
'# `r p$pages[[..i..]]$name` SOLUTION
```{r}
plot(out[[..i..]], solution = TRUE)
```
\\newpage
'
)
})
rmd_content <- c(rmd_content, rmd_solutions)
}
rmd_file <- tempfile(fileext = ".Rmd")
writeLines(c(rmd_header, rmd_prelims, rmd_cover, rmd_content), con = rmd_file)
# render as pdf
rmarkdown::render(rmd_file, output_dir = getwd(), output_file = output_file, quiet = TRUE)
unlink(rmd_file)
message("New book can be found at ", output_file, ".")
return(output_file)
}
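# Sketch of the YAML layout book() reads, inferred from the code above (the
# bundled inst/book.yml is the authoritative example): top-level `title`,
# `author`, `rows`, `cols`, and a `pages` list whose entries provide `name`,
# `words` (a list, or a backtick-quoted R expression), and optionally `image`.
#   title: My Puzzle Book
#   author: Jane Doe
#   rows: 20
#   cols: 20
#   pages:
#     - name: Animals
#       words: [dog, cat, horse]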
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/book.R
|
#' Create a crossword puzzle
#' @param words a vector of words (character/vector)
#' @param clues a vector a clues (character/vector)
#' @param r number of rows (numeric/scalar)
#' @param c number of columns (numeric/scalar)
#' @param method generate puzzle using 'optimal' or 'random' word order, where
#' the optimal order will place words with the most overlap first
#' @importFrom magrittr "%>%"
#'
#' @examples
#' # Example 1 ----
#' words <- c("apple", "pear", "banana")
#' clues <- c("red fruit", "bartlett", "green then yellow")
#' x <- crossword(words, clues)
#' plot(x, solution = TRUE)
#'
#' # Example 2 ---
#' dat <-
#' dplyr::tribble(
#' ~word, ~clue,
#' "dog", "Bark. Bark. Bark.",
#' "cat", "Purrr",
#' "horse", "Neighhhhh",
#' "frog", "Ribbit Ribbit",
#' "cow", "Moooooooo",
#' "fox", "Nee Nee Nee (What does the ____ say?)",
#' "sheep", "Bleat",
#' "snake", "Hissss",
#' "duck", "Quack",
#' "bird", "Chirp"
#' )
#' ex2 <- crossword(words = dat$word, clues = dat$clue, r = 40, c = 40)
#' plot(ex2, solution = TRUE, clues = TRUE)
#'
#' @return crossword object
#' @export
crossword <- function(words,
clues,
r = 50,
c = 50,
method = c("optimal", "random")) {
# check inputs
if (length(words) != length(clues))
stop("Invalid input: `words` and `clues` must have the same length.")
method <- match.arg(method)
# prepare word list
# - uppercase everything; ignore spaces
words <- prepare_words(words)
n <- length(words)
# -- do not allow duplicates/repeats in word list
if (any(duplicated(words)))
stop("Must provide a set of words without duplicates.")
# -- remove words that won't fit
id <- nchar(words) <= max(c(r, c))
words <- words[id]
clues <- clues[id]
if (length(words) == 0) {
message("No words can be placed. Try a larger grid-size, or shorter words.")
return(NULL)
}
# save clues for use later
df <- tibble::tibble(word = words, clue = clues)
# TODO: automatically determine r/c based on content of `words`
# create empty matrix to store crossword
x <- matrix(NA, nrow = r, ncol = c)
# iterate: add words to the board
# - force words to be intersecting after placing the first word
# TODO: place first word in board center
# - each time a word is added, reshuffle the remaining words
# - if no word can be placed after a reshuffle, give up (i.e., break loop)
while (length(words) > 0) {
word_added <- FALSE
if (method == "optimal") {
wts <- word_overlap(words)
ids <- rank(-wts, ties.method = "first")
words <- words[ids]
} else {
words <- sample(words)
}
for (word in words) {
x <- add_word(x, word, must_intersect = n > length(words))
if (word %in% unique(attr(x, "positions")$word)) {
words <- setdiff(words, word)
word_added <- TRUE
break
}
}
if (!word_added)
break
}
# status report
if (length(words) > 0)
message(paste0("Could not place the following words:\n\n", paste0(words, collapse = "\n")))
message(paste0("Found positions for ", n - length(words), "/", n, " words."))
# # TODO: trim matrix padding
# p <- attr(x, "positions")
# row_range <- range(p$i)
# minr <- min(p$i) - 1
# maxr <- max(p$i) + 1
# minc <- min(p$j) - 1
# maxc <- max(p$j) + 1
# x[minr:maxr, minc:maxc] # trim matrix
#
# p %>%
# dplyr::mutate(
# i = i - minr + 1,
# j = j - minc + 1
# )
# # -- how to adjust 'id'?
# # -- how to deal with words on edges?
# add clues
# TODO: account for duplicate words (which is allowed)
attr(x, "clues") <- attr(x, "positions") %>%
dplyr::group_by(word) %>%
dplyr::slice(1) %>%
dplyr::group_by(dir) %>%
dplyr::mutate(
n = dplyr::row_number()
) %>%
dplyr::left_join(df, by = "word")
as_crossword(x)
}
# Constructors =============================================================
#' Assign an object to the `crossword` class
#' @param x an object containing crossword data
#' @return crossword object:
#' a matrix representation of the crossword, with attributes:
#' positions: tibble representation of crossword
#' clues: tibble representation of clue start (i.e., clue number locations)
#'
#' @export
as_crossword <- function(x) {
if (!is_crossword(x))
class(x) <- append("crossword", class(x))
x
}
#' Check if an object is of the `crossword` class
#' @param x an R object to check
#' @return logical/scalar
#' @export
is_crossword <- function(x) {
inherits(x, "crossword")
}
# Methods ==================================================================
#' Print a crossword puzzle
#' @param x a crossword object (see \code{\link{crossword}})
#' @param ... additional printing args
#' @return crossword object
#' @export
print.crossword <- function(x, ...) {
clues <- attr(x, "clues")
cat(paste("Crossword Puzzle\n"))
cat(paste("Contains", nrow(clues), "clues.\n"))
cat(paste("There are", sum(clues$dir == "across"), "across and", sum(clues$dir == "down"), "down.\n"))
invisible(x)
}
#' Plot a crossword puzzle
#' @param x a crossword object (see \code{\link{crossword}})
#' @param solution show solution? (logical/scalar)
#' @param clues show clues? (logical/scalar)
#' @param title puzzle title (character/scalar)
#' @param legend_size letter size of word list; set to NULL to auto-size (numeric/scalar)
#' @param ... additional printing args
#' @return ggplot2 object
#' @export
plot.crossword <- function(x,
solution = FALSE,
clues = FALSE,
title = "Crossword Puzzle",
legend_size = 4,
...) {
g1 <- ggplot2::ggplot(attr(x, "positions")) +
ggplot2::geom_tile(aes(x = .data$j, y = .data$i, group = .data$word), color = "black", fill = "white", alpha = 1) +
ggplot2::geom_text(aes(x = .data$j, y = .data$i, label = .data$n), size = 2, nudge_y = .35, nudge_x = -.35, color = "black", data = attr(x, "clues")) +
ggplot2::ggtitle(title) +
ggplot2::scale_y_reverse() +
ggplot2::theme_void() +
ggplot2::theme(
aspect.ratio = ncol(x) / nrow(x),
panel.background = element_rect(fill = "black"),
plot.title = element_text(hjust = 0.5, size = 24, face = "bold")
)
if (solution)
g1 <- g1 + ggplot2::geom_text(aes(x = .data$j, y = .data$i, label = .data$letters), color = "red")
if (clues) {
g2 <-
purrr::map(
c("Across", "Down"),
function(d) {
xt <- attr(x, "clues") %>%
dplyr::mutate(
clue = paste0(.data[["n"]], ". ", .data[["clue"]])
) %>%
dplyr::filter(
dir == tolower(d)
)
nn <- max(xt$n)
ggplot2::ggplot(xt) +
# ggtext::geom_richtext(
# aes(x = 1, y = .data$n, label = .data$clue),
# fill = NA,
# size = legend_size,
# label.color = NA, # remove background and outline
# label.padding = grid::unit(rep(0, 4), "pt") # remove padding
# ) +
ggplot2::geom_text(aes(x = 1, y = .data$n, label = .data$clue), hjust = 0, size = legend_size) +
ggplot2::annotate("text", x = 1, y = 0, label = paste0("underline(bold(", toupper(d), "))"), parse = TRUE) +
ggplot2::theme_void() +
ggplot2::scale_y_reverse() +
ggplot2::xlim(.8, 1.5)
}
)
g3 <- cowplot::plot_grid(g2[[1]], g2[[2]], nrow = 2, rel_widths = c(1, 1))
g1 <- cowplot::plot_grid(g1, g3, nrow = 1, rel_widths = c(3/4, 1/4))
#g1 <- gridExtra::grid.arrange(g1, gc[[1]], gc[[2]], layout_matrix = rbind(c(1, 2), c(1, 3)))
}
g1
}
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/crossword.R
|
#' Convert an image to a 0/1 matrix
#' @param img full path to image (character/scalar)
#' @param rows number of rows (numeric/scalar)
#' @param columns number of columns (numeric/scalar)
image_matrix <- function(img = "https://upload.wikimedia.org/wikipedia/commons/9/96/Tux_Paint_banana.svg",
rows = 10,
columns = 10) {
# load image
img <- magick::image_read(img)
# scale to size of word_grid
img <- magick::image_scale(img, paste0(rows,"x", columns, "!"))
# convert the image to a black/white matrix representation
C <- purrr::map(1:dim(img[[1]])[1], ~img[[1]][.x, , ] == "ff") # white = 'ff ff ff ff'
outline <- Reduce("+", C) / length(C)
if (any(outline == 1)) {
#if (all(unique(as.numeric(outline)) %in% c(0, 1))) {
outline <- outline != 1 # if colors retained across 4 matrices
} else {
outline <- outline != 0 # if colors pushed to last matrix (note: why is this happening??)
}
outline
}
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/image_matrix.R
|
#' Make the 'worrrd' logo
#' @importFrom magrittr "%>%"
make_logo <- function() {
# map coordinates
w1 <-
tibble::tibble(
x = rep(3, 6),
y = 3:-2,
label = stringr::str_split("WORRRD", "")[[1]]
)
w2 <-
tibble::tibble(
x = 1:6,
y = rep(1, 6),
label = stringr::str_split("WORRRD", "")[[1]]
)
# draw via ggplot
dplyr::bind_rows(w1, w2) %>%
ggplot2::ggplot(aes(x = .data$x, y = .data$y)) +
ggplot2::geom_tile(fill = "gray", color = "black", alpha = .5) +
ggplot2::geom_text(aes(label = .data$label), size = 3) +
ggplot2::theme_void()
ggplot2::ggsave("logo.png", width = 1, height = 1)
# usethis approach
#usethis::use_logo("logo.png")
# resize via magick
img <- magick::image_read("logo.png")
img <- magick::image_scale(img, "x100")
magick::image_write(img, "logo.png")
}
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/make_logo.R
|
#' Compute maximum word size, based on the current word matrix
#' @param x word_search matrix
#' @param shape_matrix shape matrix (logical) of identical size to 'x'
max_word_size <- function(x, shape_matrix = NULL) {
# setup activation matrix
r <- nrow(x)
c <- ncol(x)
A <- matrix(1, nrow = r, ncol = c)
A[!is.na(x)] <- 0
# apply shape matrix
if (!is.null(shape_matrix))
A[!shape_matrix] <- 0
# compute max word sizes
out <- list(across = A, down = A)
for (i in 1:r) {
for (j in 1:c) {
out$across[i, j] <- sum(cumprod(A[i, j:c]))
out$down[i, j] <- sum(cumprod(A[i:r, j]))
}
}
out
}
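# Illustration: for an empty 3x3 board with no shape matrix, every cell is
# free, so out$across[i, j] counts the free cells from column j rightwards
# (3, 2, 1 along each row) and out$down[i, j] the free cells from row i
# downwards -- i.e. the longest word that could start at that cell.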
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/max_word_size.R
|
#' Prepare a word(s)
#' @param x word list (character/vector)
prepare_words <- function(x) {
stringr::str_replace_all(toupper(x), " |-|'|\\.", "")
}
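# Example of the normalisation (spaces, hyphens, apostrophes and periods are
# dropped; everything is upper-cased):
#   prepare_words(c("ice-cream", "T. rex"))  # "ICECREAM" "TREX"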
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/prepare_words.R
|
#' Prepare a worrrd object for printing
#' @param x ggplot object
#' @param filename name of file
#'
#' @examples
#' words <- c("dog", "cat", "horse", "frog", "cow", "fox")
#' ex1 <- wordsearch(words, r = 10, c = 10)
#' my_puzzle <- plot(ex1, solution = FALSE)
#' printable(my_puzzle, "my_wordsearch.pdf")
#' unlink("my_wordsearch.pdf")
#'
#' @return filename of pdf puzzle
#' @export
printable <- function(x, filename = "plot.pdf") {
ggplot2::ggsave(
filename,
plot = x,
width = 11,
height = 8.5
)
}
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/printable.R
|
#' Get possible intersection points based on the current board and a provided word
#' @param x word matrix
#' @param word the word to add (character/scalar)
#' @return for each direction, a matrix of crossing-point counts
#' @importFrom utils tail
word_intersections <- function(x, word = "needles") {
r <- nrow(x)
c <- ncol(x)
A <- matrix(0, nrow = r, ncol = c)
out <- list(across = A, down = A)
sword <- stringr::str_split(toupper(word), "")[[1]]
# search the entire word matrix for possible intersecting insertion points
for (i in 1:r) {
for (j in 1:c) {
if (!is.na(x[i, j])) {
ids <- x[i, j] == sword
if (any(ids)) {
for (id in which(ids)) {
# down
xseq <- (i - id + 1):(i + length(ids) - id)
if (all(xseq > 0 & xseq <= r)) {
cids <- which(is.na(x[xseq, j])) # find non-crossing points
if (j > 1 && all(is.na(x[xseq[cids], j - 1]))) # check left (minus crossing points)
if (j < c && all(is.na(x[xseq[cids], j + 1]))) # check right (minus crossing points)
if (xseq[1] > 1 && is.na(x[xseq[1] - 1, j])) # check above
if (tail(xseq, 1) < r && is.na(x[tail(xseq, 1) + 1, j])) # check below
if (all(is.na(x[xseq, j]) | x[xseq, j] == sword))
out$down[xseq[1], j] <- length(xseq) - length(cids) # save the intersection count for each
}
# across
yseq <- (j - id + 1):(j + length(ids) - id)
if (all(yseq > 0 & yseq <= c)) {
cids <- which(is.na(x[i, yseq])) # find non-crossing points
if (i > 1 && all(is.na(x[i - 1, yseq[cids] ]))) # check above (minus crossing points)
if (i < r && all(is.na(x[i + 1, yseq[cids] ]))) # check below (minus crossing points)
if (yseq[1] > 1 && is.na(x[i, yseq[1] - 1])) # check left (before)
if (tail(yseq, 1) < c && is.na(x[i, tail(yseq, 1) + 1])) # check right (after)
if (all(is.na(x[i, yseq]) | x[i, yseq] == sword))
out$across[i, yseq[1] ] <- length(yseq) - length(cids) # save the intersection count for each
}
}
}
}
}
}
out
}
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/word_intersections.R
|
#' Compute overlap score for a vector of words
#' @param words vector of words (character/vector)
word_overlap <- function(words) {
purrr::map_int(
seq_along(words),
function(i) {
sum(
purrr::map_int(
stringr::str_split(words[i], "")[[1]],
function(letter) {
sum(stringr::str_detect(words[-i], letter))
}
)
)
}
)
}
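# Illustration: a word's score counts, over each of its letters, how many of
# the other words contain that letter, so well-connected words are placed
# first when crossword(method = "optimal").
#   word_overlap(c("dog", "cat", "horse"))  # c(1, 0, 1); only "o" is shared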
|
/scratch/gouwar.j/cran-all/cranData/worrrd/R/word_overlap.R
|