content
stringlengths
0
14.9M
filename
stringlengths
44
136
--- title: "Reproducing a WORCS project" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Reproducing a WORCS project} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This is a tutorial on how to reproduce a project created with the `worcs` package. At the core of a typical `worcs` projects is an 'R Markdown' document, containing prose and analysis code. This document can be compiled, or "knitted", to reproduce the analyses. This tutorial will guide you through the steps necessary to make that happen. ## Install 'RStudio' and 'R' **You can skip these steps if you have a recent and working installation of 'RStudio' and 'R'.** 1. Install [R (free)](https://CRAN.R-project.org) 1. Install ['RStudio' Desktop (Free)](https://posit.co/download/rstudio-desktop/) ## Install R-package dependencies 1. Install all packages required for WORCS by running the following code in the 'RStudio' console. Be prepared for three contingencies: + If you receive any error saying *There is no package called [package name]*, then run the code `install.packages("package name")` + If you are prompted to update packages, just press [ENTER] to avoid updating packages. Updating packages this way in an interactive session sometimes leads to errors if the packages are loaded. + If you see a pop-up dialog asking *Do you want to install from sources the package which needs compilation?*, click *No*. ``` install.packages("worcs", dependencies = TRUE) tinytex::install_tinytex() renv::consent(provided = TRUE) ``` ## Obtaining the project repository <!-- To do so, open RStudio and, in the console, --> <!-- If you are familiar with 'Git' and 'GitHub', you can "clone" the project as usual. --> WORCS projects are typically hosted on 'GitHub', or another 'Git' remote repository. The recommended way to obtain a local copy of the project repository is to "clone" it. On 'GitHub', this is done by clicking the green button labeled "Code". Clicking it reveals the HTTPS link to the project repository (see below). Copy this link to the clipboard by clicking the clipboard icon next to it. ![](github_download.png) Next, open an RStudio instance and run the following code in the console, replacing `https://github.com/username/reponame.git` with the HTTPS address you just copied to clipboard, and replacing the `c:/where/you/want/the/repo` with the location on your hard drive where you want to clone the repository: ``` gert::git_clone("https://github.com/username/reponame.git", path = "c:/where/you/want/the/repo") ``` **Note: While it is also possible to download a compressed (ZIP) archive containing the project (see the image above), this has an important limitation: a repository downloaded via the GitHub interface is, itself, not a Git repository! This is a peculiarity of the GitHub interface. This might result in unexpected behavior when using any WORCS functionality that relies on Git for version control. Thus, as a general rule, we advise cloning projects instead.** ## Open the project in 'RStudio' Most projects can be opened by loading the '.RProj' file in the main folder. This should be explained in the project 'README.md' as well. ## Restore the package dependencies You will need to restore the packages used by the authors, using the `renv` package. See [this article](https://rstudio.github.io/renv/articles/renv.html) for more information about `renv`. With the project open in 'RStudio', type the following in the console: ``` renv::restore() ``` ## Open the project entry point The entry point is the core document that can be executed to reproduce the analysis. This is typically a manuscript, or occasionally an R-script file. Use the following function to open the entry point file in 'RStudio': ``` load_entrypoint() ``` ## Reproduce the analyses From `worcs` version 0.1.12, projects can be reproduced using the function `reproduce()`. This function will evaluate the reproducibility recipe stored in the `.worcs` project file, and checks whether the resulting endpoints have the correct checksums (i.e., are unchanged relative to the authors' original work). ## No access to original data Sometimes, authors have not made the original data available. In this case, the project ought to contain a synthetic data file with similar properties to the original data. This synthetic data allows you to verify that the analyses can be run, and that the code is correct. The results will, however, deviate from the original findings and should not be substantively interpreted. Authors may use the function `notify_synthetic()` to generate a message in the paper when a synthetic dataset is used. Authors should also provide information in the README.md file on how to obtain access to the original data in case an audit is warranted. Please read the WORCS paper [@vanlissaWORCSWorkflowOpen2021] for more information about how checksums are used so that auditors can verify the authenticity of the original data.
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/reproduce.Rmd
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ## ---- eval=FALSE-------------------------------------------------------------- # renv::consent(provided = TRUE) # worcs::git_user("your_name", "your_email")
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/setup-docker.R
--- title: "Setting up your computer for WORCS - Docker-edition" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Setting up your computer for WORCS - Docker-edition} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` If you do not want to install R, RStudio, Latex and Git on a personal computer (as described in [`vignette("setup", package = "worcs")`](https://cjvanlissa.github.io/worcs/articles/setup.html)), but would like to use Docker instead, follow these steps in order: 1. Install [Docker](https://docs.docker.com/get-docker/) 2. Open a terminal/cmd/shell. 2. Start the `worcs` image: ```{bash, eval=FALSE} docker run -e PASSWORD=secret -p 8787:8787 -it cjvanlissa/worcs:latest ``` 3. Open the address `127.0.0.1:8787/` in a browser. Login using username=rstudio and password=secret. Then setup the container. ```{r, eval=FALSE} renv::consent(provided = TRUE) worcs::git_user("your_name", "your_email") ``` To terminate the container, press Ctrl + C in the terminal. To save files from a Docker session on your disk, you have to link a directory explicitly when starting the container. On Unix file systems, this is done as follows: ```{bash, eval=FALSE} -v /path/on/your/pc:/home/rstudio ``` And on Windows file systems, as follows: ```{bash, eval=FALSE} -v //c/path/on/your/windows/pc:/home/rstudio ``` Then start the Docker session using this command: ```{bash, eval=FALSE} docker run -e PASSWORD=secret -p 8787:8787 -v /path/on/your/pc:/home/rstudio -it cjvanlissa/worcs:latest ``` That's it!
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/setup-docker.Rmd
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" )
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/setup.R
--- title: "Setting up your computer for WORCS" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Setting up your computer for WORCS} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This is a tutorial on how to set up your personal computer for use with the `worcs` package. It guides you through the installation of several software packages, and registration on GitHub. This vignette does not assume a prior installation of `R`, so it is suitable for novice users. You only have to perform these steps once for every computer you intend to use `R` and `worcs` on, and the entire process should take approximately 30 minutes if you start from scratch. In case some of the software is already installed on your system, you can skip those related steps. Follow these steps in order: 1. Install [R (free)](https://CRAN.R-project.org) 2. Install ['RStudio' Desktop (Free)](https://posit.co/download/rstudio-desktop/) 3. Install Git from [git-scm.com](https://git-scm.com/downloads). Use the default, recommended settings. It is especially important to leave these settings selected: + Git from the command line and also from third party software <!--*The `worcs` R-package calls Git from the command line*--> + Use the 'OpenSSL' library <!--*For secure data transfer with GitHub*--> + Checkout Windows-style, commit Unix-style line endings <!--*This is the preferred setting when collaborating with others on different platforms. Be prepared that, on windows, you will receive harmless notifications about LF to CRLF line endings. *--> + Enable Git Credential Manager <!--*For logging in to GitHub*--> + If you run into any trouble, a more comprehensive tutorial on installing Git is available at [happygitwithr.com](https://happygitwithr.com/install-git.html). 4. Register on 'GitHub' (alternatively: see [this vignette](https://cjvanlissa.github.io/worcs/articles/git_cloud.html) on how to use 'GitLab' or 'Bitbucket') + Go to [github.com](https://github.com/) and click *Sign up*. Choose an "Individual", "Free" plan. <!-- + Request a [free academic upgrade](https://help.github.com/en/articles/applying-for-an-educator-or-researcher-discount). This allows you to create *private repositories*, which are only visible to you and selected collaborators, and can be made public when your work is published. --> 5. Install all packages required for WORCS by running the code block below this writing in the 'RStudio' console. Be prepared for three contingencies: + If you receive any error saying *There is no package called [package name]*, then run the code `install.packages("package name")` + If you are prompted to update packages, just press [ENTER] to avoid updating packages. Updating packages this way in an interactive session sometimes leads to errors if the packages are loaded. + If you see a pop-up dialog asking *Do you want to install from sources the package which needs compilation?*, click *No*. ``` install.packages("worcs", dependencies = TRUE) tinytex::install_tinytex() renv::consent(provided = TRUE) ``` 6. Connect 'RStudio' to Git and GitHub (for more support, see [Happy Git with R](https://happygitwithr.com/) a. Open 'RStudio', open the Tools menu, click *Global Options*, and click *Git/SVN* a. Verify that *Enable version control interface for RStudio projects* is selected a. Verify that *Git executable:* shows the location of git.exe. If it is missing, manually fix the location of the file. <!-- a. Click *Create RSA Key*. Do not enter a passphrase. Press *Create*. A window with some information will open, which you can close. --> <!-- a. Click *View public key*, and copy the entire text to the clipboard. --> a. Restart your computer <!-- a. Go to [github.com](https://github.com/) --> <!-- a. Click your user icon, click *Settings*, and then select the *SSH and GPG keys* tab. --> <!-- a. Click *New SSH key*. Give it an arbitrary name (e.g., your computer ID), and paste the public key from your clipboard into the box labeled *Key*. --> <!-- a. Open 'RStudio' again (unless it restarted by itself) --> a. Run `usethis::create_github_token()`. This should open a webpage with a dialog that allows you to create a Personal Access Token (PAT) to authorize your computer to exchange information with your GitHub account. The default settings are fine; just click "Generate Token" (bottom of the page). a. Copy the generated PAT to your clipboard (NOTE: You will not be able to view it again!) a. Run `gitcreds::gitcreds_set()`. This should open a dialog in the R console that allows you to paste the PAT from your clipboard. a. If you do not have a Git user set up on your computer yet (e.g., if this is the first time you will be using Git), run the following - making sure to substitute your actual username and email: ``` worcs::git_user("your_name", "your_email", overwrite = TRUE) ``` 7. Everything should be installed and connected now. You can verify your installation using an automated test suite. The results will be printed to the console; if any tests fail, you will see a hint for how to resolve it. Run the code below this writing in the 'RStudio' console: ``` worcs::check_worcs_installation() ``` ### Optional step If you intend to write documents in APA style, you should additionally install the `papaja` package. Because `papaja` has many dependencies, it is recommended to skip this step if you intend to write documents in a different style than APA. Unfortunately, this package is not yet available on the central R repository CRAN, but you can install it from 'GitHub' using the following code: ``` install.packages("papaja", dependencies = TRUE, update = "never") ```
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/setup.Rmd
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(worcs) ## ---- echo = FALSE, out.width="100%"------------------------------------------ knitr::include_graphics("workflow.png")
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/workflow.R
--- title: "The WORCS workflow, version 0.1.6" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{The WORCS workflow, version 0.1.6} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} bibliography: "vignettes.bib" --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(worcs) ``` ## WORCS: Steps to follow for a project This vignette describes the Workflow for Open Reproducible Code in Science, as introduced in @vanlissaWORCSWorkflowOpen2021. The paper describes the rationale and principled approach on which the workflow is based; this vignette describes the practical steps for R-users in greater detail. Note that, although the steps are numbered for reference purposes, we acknowledge that the process of conducting research is not always linear. The workflow is illustrated in the graph below, with optional steps displayed in blue nodes: ```{r, echo = FALSE, out.width="100%"} knitr::include_graphics("workflow.png") ``` ### Phase 1: Study design 1. <!--S: Create a (Public or Private) remote repository on a 'Git' hosting service-->Create a new remote repository on a 'Git' hosting service, such as ["GitHub"](https://github.com) + For inexperienced users, we recommend making this repository "Private", which means only you and selected co-authors can access it. You can set it to "Public" later - for example, when the paper goes to print - and the entire history of the Repository will be public record. We recommend making the repository "Public" from the start __only if__ you are an experienced user and know what you are doing. + Copy the repository link to clipboard; this link should look something like `https://github.com/username/repository.git` 2. <!--S: When using R, initialize a new RStudio project using the WORCS template. Otherwise, clone the remote repository to your local project folder.-->In Rstudio, click File > New Project > New directory > WORCS Project Template a. Paste the remote Repository address in the textbox. This address should look like `https://github.com/username/repository.git` b. Keep the checkbox for `renv` checked if you want to use dependency management (recommended) c. Select a preregistration template, or add a preregistration later using `add_preregistration()` d. Select a manuscript template, or add a manuscript later using `add_manuscript()` e. Select a license for your project (we recommend a CC-BY license, which allows free use of the licensed material as long as the creator is credited) 3. <!--S: Add a README.md file, explaining how users should interact with the project, and a LICENSE to explain users' rights and limit your liability. This is automated by the `worcs` package.-->A template README.md file will be automatically generated during project creation. Edit this template to explain how users should interact with the project. Based on your selections in the New Project dialog, a LICENSE will also be added to the project, to explain users' rights and limit your liability. We recommend a CC-BY license, which allows free use of the licensed material as long as the creator is credited. 4. <!--S: Optional: Preregister your analysis by committing a plain-text preregistration and [tag this commit](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) with the label "preregistration".-->Optional: Preregister your analysis by committing a plain-text preregistration and [tag this commit](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) with the label "preregistration": + Document study plans in a `preregistration.Rmd` file, and optionally, planned analyses in a `.R` file. + In the top-right panel of 'RStudio', select the 'Git' tab + Select the checkbox next to the preregistration file(s) + Click the Commit button. + In the pop-up window, write an informative "Commit message", e.g., "Preregistration" + Click the Commit button below the message dialog + Click the green arrow labeled "Push" to send your commit to the 'Git' remote repository + [Tag this commit as a release](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) on the remote repository, using the label "preregistration". A tagged release helps others retrieve this commit. + Instructions for 'GitHub' [are explained here ](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) <!-- - Go to the 'GitHub' page for your project - Click the label that says "0 releases" - Click the button labeled "Create new release" - In the textbox labeled "Tag version" and the textbox "Release title", write: "Preregistration" - Click the green button labeled "Publish release"--> 5. <!--S: Optional: Upload the preregistration to a dedicated preregistration server-->Optional: Render the preregistration to PDF, and upload it as an attachment to a dedicated preregistration server like AsPredicted.org or OSF.io + In 'RStudio', with the file 'preregistration.Rmd' open, click the "Knit" button above the top left panel + When the PDF is generated, go to one of the recognized preregistration services' websites, create a new preregistration, and upload it as an attachment. + Optional: Generate a DOI for the preregistration through [the OSF](https://help.osf.io/article/218-sharing-data) or a service like [Zenodo](https://docs.github.com/repositories/archiving-a-github-repository/referencing-and-citing-content) 6. <!--S: Optional: Add study materials to the repository-->Optional: Add study materials to the repository. + Only do this for study materials to which you own the rights, or when the materials' license allows it + You can solicit feedback and outside contributions on a 'Git' remote repository by opening an "Issue" or by accepting "Pull requests" ### Phase 2: Writing and analysis 7. <!--S: Create an executable script documenting the code required to load the raw data into a tabular format, and de-identify human subjects if applicable-->Create an executable script documenting the code required to load the raw data into a tabular format, and de-identify human subjects if applicable + Document this preprocessing ("data wrangling") procedure in the `prepare_data.R` file. + This file is intended to document steps that can not or should not be replicated by end users, unless they have access to the raw data file. + These are steps you would run only once, the first time you load data into R. + Make this file as short as possible; only include steps that are absolutely necessary 8. <!--S: Save the data into a plain-text tabular format like `.csv`. When using open data, commit this file to 'Git'. When using closed data, commit a checksum of the file, and a synthetic copy of the data.-->Save the data using `open_data()` or `closed_data()` + <font colour = "red">__WARNING:__ Once you commit a data file to the 'Git' repository, its record will be retained forever (unless the entire repository is deleted). Assume that pushing data to a 'Git' remote repository cannot be undone. Follow the mantra: "Never commit something you do not intend to share".</font> + When using external data sources (e.g., obtained using an API), it is recommended to store a local copy, to make the project portable and to ensure that end users have access to the same version of the data you used. 9. <!--S: Write the manuscript using a dynamic document generation format, with code chunks to perform the analyses.-->Write the manuscript in `Manuscript.Rmd` + Use code chunks to perform the analyses. The first code chunk should call `load_data()` + Finish each sentence with one carriage return (enter); separate paragraphs with a double carriage return. 10. <!--S: Commit every small change to the 'Git' repository-->Regularly Commit your progress to the Git repository; ideally, after completing each small and clearly defined task. + In the top-right panel of 'RStudio', select the 'Git' tab + Select the checkboxes next to all files whose changes you wish to Commit + Click the Commit button. + In the pop-up window, write an informative "Commit message". + Click the Commit button below the message dialog + Click the green arrow labeled "Push" to send your commit to the remote repository 11. <!--S: Use comprehensive citation-->While writing, cite essential references with one at-symbol, `[@essentialref2020]`, and non-essential references with a double at-symbol, `[@@nonessential2020]`. ### Phase 3: Submission and publication 12. <!--S: Use dependency management to make the computational environment fully reproducible-->Use dependency management to make the computational environment fully reproducible. When using `renv`, you can save the state of the project library (all packages used) by calling `renv::snapshot()`. This updates the lockfile, `renv.lock`. 13. <!--S: Optional: Add a WORCS-badge to your project's README file-->Optional: Add a WORCS-badge to your project's README file and complete the optional elements of the WORCS checklist to qualify for a "Perfect" rating. Run the `check_worcs()` function to see whether your project adheres to the WORCS checklist (see `worcs::checklist`) + This adds a WORCS-badge to your 'README.md' file, with a rank of "Fail", "Limited", or "Open". + Reference the WORCS checklist and your paper's score in the paper. + *Optional:* Complete the additional optional items in the WORCS checklist by hand, and get a "Perfect" rating. 14. <!--S: Make a Private 'Git' remote repository Public-->Make the 'Git' remote repository "Public" if it was set to "Private" + Instructions for 'GitHub': - Go to your project's repository - Click the "Settings" button - Scroll to the bottom of the page; click "Make public", and follow the on-screen instructions 15. <!--S: [Create a project page on the Open Science Framework (OSF)](https://help.osf.io/article/252-create-a-project) and [connect it to the 'Git' remote repository](https://help.osf.io/article/211-connect-github-to-a-project)--> [Create a project on the Open Science Framework (OSF)](https://help.osf.io/article/252-create-a-project) and [connect it to the 'Git' remote repository](https://help.osf.io/article/211-connect-github-to-a-project). + On the OSF project page, you can select a License for the project. This helps clearly communicate the terms of reusability of your project. Make sure to use the same License you selected during project creation in Step 3. 16. <!--S: [Generate a Digital Object Identifier (DOI) for the OSF project](https://help.osf.io/article/220-create-dois)--> [Generate a Digital Object Identifier (DOI) for the OSF project](https://help.osf.io/article/220-create-dois) + A DOI is a persistent identifier that can be used to link to your project page. + You may have already created a project page under Step 5 if you preregistered on the OSF + Optionally, you can [generate additional DOIs for specific resources like datasets](https://help.osf.io/article/218-sharing-data). + Alternatively, you can [connect your 'Git' remote repository to Zenodo](https://docs.github.com/repositories/archiving-a-github-repository/referencing-and-citing-content), instead of the OSF, to create DOIs for the project and specific resources. 17. <!--S: Add an open science statement to the Abstract or Author notes, which links to the 'OSF' project page and/or the 'Git' remote repository-->Add an open science statement to the Abstract or Author notes, which links to the 'OSF' project page and/or the 'Git' remote repository. + Placing this statement in the Abstract or Author note means that readers can find your project even if the paper is published behind a paywall. + The link can be masked for blind review. + The open science statement should indicate which resources are available in the online repository; data, code, materials, study design details, a pre-registration, and/or comprehensive citations. For further guidance, see @aalbersbergMakingScienceTransparent2018. Example: _In the spirit of open science, an online repository is available at XXX, which contains [the data/a synthetic data file], analysis code, the research materials used, details about the study design, more comprehensive citations, and a tagged release with the preregistration._ 18. <!--S: Render the dynamic document to PDF-->Knit the paper to PDF for submission + In 'RStudio', with the file 'manuscript.Rmd' open, click the "Knit" button above the top left panel + To retain essential citations only, change the front matter of the 'manuscript.Rmd' file: Change `knit: worcs::cite_all` to `knit: worcs::cite_essential` 19. <!--S: Optional: [Publish the PDF as a preprint, and add it to the OSF project](https://help.osf.io/article/177-upload-a-preprint)-->Optional: [Publish preprint in a not-for-profit preprint repository such as PsyArchiv, and connect it to your existing OSF project](https://help.osf.io/article/177-upload-a-preprint) + Check [Sherpa Romeo](http://sherpa.ac.uk/romeo/index.php) to be sure that your intended outlet allows the publication of preprints; many journals do, nowadays - and if they do not, it is worth considering other outlets. 20. <!--S: Submit the paper, and [tag the commit of the submitted paper as a release](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) of the submitted paper as a release, as in Step 4.-->Submit the paper, and [tag the commit of the submitted paper as a release](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository), as in Step 4. ### Notes for cautious researchers <!--S: Some researchers might want to share their work only once the paper is accepted for publication. In this case, we recommend creating a "Private" repository in Step 1, and completing Steps 13-18 upon acceptance.-->Some researchers might want to share their work only once the paper is accepted for publication. In this case, we recommend creating a "Private" repository in Step 1, and completing Steps 13-18 upon acceptance by the journal. **Image attribution** The [Git Logo](https://git-scm.com/) by Jason Long is licensed under the Creative Commons Attribution 3.0 Unported License. The [OSF logo](https://osf.io/) is licensed under CC0 1.0 Universal. Icons in the workflow graph are obtained from [Flaticon](https://www.flaticon.com); see [detailed attribution](https://github.com/cjvanlissa/worcs/blob/master/paper/workflow_graph/Attribution_for_images.txt). ## Sample WORCS projects For a list of sample `worcs` projects created by the authors and other users, see the [`README.md` file on the WORCS GitHub page](https://github.com/cjvanlissa/worcs). This list is regularly updated. **References**
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/workflow.Rmd
# In this file, write the R-code necessary to load your original data file # (e.g., an SPSS, Excel, or SAS-file), and convert it to a data.frame. Then, # use the function open_data(your_data_frame) or closed_data(your_data_frame) # to store the data. library(worcs)
/scratch/gouwar.j/cran-all/cranData/worcs/inst/rstudio/templates/project/resources/prepare_data.R
--- title : "Title here" shorttitle : "Short title here" date : "`r Sys.setlocale('LC_TIME', 'C'); format(Sys.time(), '%d\\\\. %B %Y')`" authors : "Author names here" --- <!-- PSS (Preregistration and Sharing Software) is a template that can be used for pre-registrating confirmatory or exploratory studies. For more details about the description please check the reference: Krypotos, A. M., Klugkist, I., Mertens, G., & Engelhard, I. M. (2019). A step-by-step guide on preregistration and effective data sharing for psychopathology research. Journal of Abnormal Psychology, 128(6), 517-537. https://psycnet.apa.org/doi/10.1037/abn0000424 --> # Title <!-- Here you can provide the project title. --> ## Authors <!-- Here you can provide the authors' list. --> ## Affiliations <!-- Authors' affiliations --> # Background of the study ## Primary study/Secondary analyses <!-- Is it a primary study (where data are collected) or a study where secondary analyses are performed on an existing data set? --> ## Does the study refer to a meta-analysis? <!-- Mention whether the study refers to a meta-analysis. In this case, you can ignore the irrelevant parts below (e.g., number of participants). --> ## Research questions <!-- Define the research question --> ## Study hypotheses <!-- Define the study hypotheses --> # Method ## Stimuli <!-- Define the study stimuli --> ## Questionnaires <!-- Define the study questionnaires, if used --> ## Equipment <!-- Define the study equipment. This includes computer characteristics etc. --> ## Procedure <!-- Define the study's procedure. --> ## Protocol <!-- Define the study's protocol. --> # Statistical analyses ## Participants <!-- How many participants will be recruited? Has a power analysis been performed? --> ## Stopping rule <!-- When will data accumulation end? --> ## Confirming hypotheses threshold <!-- What is the threshold for confirming each of the research hypotheses? --> ## Disconfirming hypotheses threshold <!-- What is the threeshold for rejecting each of the research hypotheses? --> # Other ## Other (Optional) <!-- Further comments --> # References <!-- Included references --> ## \vspace{-2pc} \setlength{\parindent}{-0.5in} \setlength{\leftskip}{-1in} \setlength{\parskip}{8pt} \noindent
/scratch/gouwar.j/cran-all/cranData/worcs/inst/rstudio/templates/project/resources/pss.Rmd
--- title : "Title here" shorttitle : "Short title here" date : "date here" authors : "authors here" --- <!-- This template aims provides a relative short format to preregister secondary analyses on preexisting data (much like the AsPredicted.org format for new data collection). For more details about the description please check the reference: Mertens, G., & Krypotos, A. M. (2019). Preregistration of analyses of preexisting data. Psychologica Belgica, 59(1), 338-352. doi: 10.5334/pb.493 --> # Title <!-- Here you can provide the project title. --> ## Authors <!-- Here you can provide the authors' list, together with the affiliations. --> ## Affiliations <!-- Authors' affiliations--> # Study hypotheses <!-- Provide a brief description of the relevant theory and formulate the hypotheses as precisely as possible. --> # Operationalization <!-- State exactly how the variables specified in each hypothesis will be operationalized. --> # Data source <!-- Specify the source of the obtained data. Also provide information about the context of the data source and clarify whether the data has been previously published. --> # Data request/access <!-- Specify how the data will be requested or accessed. Clarify whether the data were already available and whether the dataset has been previously explored or analyzed. --> # Exclusion criteria <!-- Specify whether there were any criteria for the exclusions of certain datasets, observations or time points.--> # Statistical analyses <!-- Specify the statistical model that will be used to analyze the data. Be as specific as possible and avoid ambiguity. --> # Hypotheses (dis-)confirmation <!-- Specify exactly how the hypothesis will be evaluated. Give specific criteria relevant to the used analytical model and framework (e.g., alpha-values, Bayes Factor, RMSEA).--> # Analysis validation <!-- Indicate whether the proposed analyses have previously been validated on a subset of the data, or a simulated dataset. If so, provide the data files and analysis code.--> # Timeline <!-- Provide the (foreseen) dates for the different steps in this preregistration form.--> # What is known about the data that could be relevant for the tested hypotheses? <!-- What is known about the data that could be relevant for the tested hypotheses? I.e., disclose any prior exposure to the data set, direct or indirect, and specific information (e.g., knowing the mean of a variable) that is relevant for your research question. --> ## Other (Optional) <!-- Further comments --> # References <!-- Included references --> ## \vspace{-2pc} \setlength{\parindent}{-0.5in} \setlength{\leftskip}{-1in} \setlength{\parskip}{8pt} \noindent
/scratch/gouwar.j/cran-all/cranData/worcs/inst/rstudio/templates/project/resources/secondary.Rmd
--- title: "Citing references in worcs" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Citing references in worcs} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(worcs) ``` Comprehensive citation of literature, data, materials, methods, and software is one of the hallmarks of open science. When using the R-implementation of WORCS, you will most likely be writing your manuscript in `RMarkdown` format. This means that you will use Markdown `citekey`s to refer to references, and these references will be stored in a separate text file known as a `.bib` file. To ease this process, we recommend following this procedure for citation: 1. During writing, maintain a plain-text `.bib` file with the BibTeX references for all citations. + You can export a `.bib` file from most reference manager programs; the free, open-source reference manager [Zotero](https://www.zotero.org/download/) is excellent and user-friendly, and highly interoperable with other commercial reference managers. [Here](https://christopherjunk.netlify.com/blog/2019/02/25/zotero-RMarkdown/) is a tutorial for using Zotero with RMarkdown. + Alternatively, it is possible to make this file by hand, copy and pasting each new reference below the previous one; e.g., Figure \@ref(fig:scholarbib) shows how to obtain a BibTeX reference from Google Scholar; simply copy-paste each reference into the `.bib` file 2. To cite a reference, use the `citekey` - the first word in the BibTeX entry for that reference. Insert it in the RMarkdown file like so: `@yourcitekey2020`. For a parenthesized reference, use `[@citekeyone2020; @citekeytwo2020]`. For more options, see the [RMarkdown cookbook](https://bookdown.org/yihui/rmarkdown-cookbook/bibliography.html). 3. To indicate a *non-essential* citation, mark it with a double at-symbol: `@@nonessential2020`. 4. When Knitting the document, adapt the `knit` command in the YAML header. `knit: worcs::cite_all` renders all citations, and `knit: worcs::cite_essential` removes all *non-essential* citations. 5. Optional: To be extremely thorough, you could make a "branch" of the GitHub repository for the print version of the manuscript. Only in this branch, you use the function `knit: worcs::cite_essential`. The procedure is documented in [this tutorial](http://rstudio-pubs-static.s3.amazonaws.com/142364_3b344a38149b465c8ebc9a8cd2eee3aa.html). ```{r, scholarbib, echo = FALSE, fig.cap="Exporting a BibTex reference from Google Scholar"} knitr::include_graphics("scholar_bib.png") ```
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/citation.Rmd
--- title: "Using Endpoints to Check Reproducibility" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Using Endpoints to Check Reproducibility} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` ```{r setup} library(worcs) ``` This vignette describe the `worcs` package's functionality for automating reproducibility. The basic idea is that the entry point, endpoint (or endpoints), and recipe by which to get to the endpoint from the entry point are all well-defined. In a typical `worcs` project, the entry point will be a dynamic document (e.g., `manuscript.Rmd`), and the endpoint will be the rendered manuscript (e.g., `manuscript.pdf`). The recipe by which to get from the entry point to the endpoint is often a simple call to `rmarkdown::render("manuscript.Rmd")`. By default, the entry point and recipe are documented in the `.worcs` project file when the project is created, if an R-script or Rmarkdown file is selected as the manuscript. Endpoints are not created by default, as it only makes sense to define them when the analyses are complete. Custom recipes can be added to a project using `add_recipe()`. ## Adding endpoints Users can add endpoints using the function `add_endpoint("filename")`. When running this function, `filename` is added to the `.worcs` project file, and its checksum is computed so that any changes to the contents of the file can be detected. It is also possible to specify multiple endpoints. For example, maybe the user has finalized the analyses, and wants to track reproducibility for the analysis results - but still wants to make changes to the text of the manuscript without breaking reproducibility checks. In this case, it is useful to track files that contain analysis results instead of the rendered manuscript. Imagine these are intermediary files with analysis results: * `descriptives.csv`: A file with the descriptive statistics of study variables * `model_fit.csv`: A table with model fit indices for several models * `finalmodel.RData`: An RData file with the results of the final model These three files could be tracked as endpoints by calling `add_endpoint("descriptives.csv"); add_endpoint("model_fit.csv"); add_endpoint("finalmodel.RData")`. ## Reproducing a Project A WORCS project can be reproduced by evaluating the function `reproduce()`. This function evaluates the recipe defined in the `.worcs` project file. If no recipe is specified (e.g., when a project was created with an older version of the package), but an entry point is defined, `reproduce()` will try to evaluate the entry point if it is an Rmarkdown or R source file. ## Checking reproducibility Users can verify that the endpoint remains unchanged after reproducing the project by calling the function `check_endpoints()`. If any endpoint has changed relative to the version stored in the `.worcs` project file, this will result in a warning message. ## Updating endpoints To update the endpoints in the `.worcs` file, call `snapshot_endpoints()`. Always call this function to log changes to the code that should result in a different end result. ## Automating Reproducibility If a project is connected to a remote repository on GitHub, it is possible to use GitHub actions to automatically check a project's reproducibility and signal the result of this reproducibility check by displaying a badge on the project's readme page (which is the welcome page visitors of the GitHub repository first see). To do so, follow these steps: 1. Add endpoint using add_endpoint(); for example, if the endpoint of your analyses is a file called `'manuscript/manuscript.md'`, then you would call `add_endpoint('manuscript/manuscript.md')` 1. Run `github_action_reproduce()` 1. You should see a message asking you to copy-paste code for a status badge to your `readme.md`. If you do not see this message, add the following code to your readme.md manually: + `[![worcs_endpoints](https://github.com/YOUR_ACCOUNT/PROJECT_REPOSITORY/actions/workflows/worcs_reproduce.yaml/badge.svg)](https://github.com/YOUR_ACCOUNT/PROJECT_REPOSITORY/actions/worcs_reproduce.yaml/worcs_endpoints.yaml)` 1. Commit these changes to GitHub using `git_update()` Visit your project page on GitHub and select the `Actions` tab to see that your reproducibility check is running; visit the main project page to see the new badge in your readme.md file. ## Automating Endpoint Checks Sometimes, you may wish to verify that the endpoints of a project remain the same but without reproducing all analyses on GitHub's servers. This may be the case when the project has closed data that are not available on GitHub, or if the analyses take a long time to compute and you want to prevent using unnecessary compute power (e.g., for environmental reasons). In these cases, you can still use GitHub actions to automatically check whether the endpoints have remained unchanged. If your local changes to the project introduce deviations from the endpoint snapshots, these tests will fail. If you make intentional changes to the endpoints, you should of course run `snapshot_endpoints()`. You can display a badge on the project's readme page to signal that the endpoints remain unchanged. To do so, follow these steps: 1. Add endpoint using add_endpoint(); for example, if the endpoint of your analyses is a file called `'manuscript/manuscript.md'`, then you would call `add_endpoint('manuscript/manuscript.md')` 1. Run `github_action_check_endpoints()` 1. You should see a message asking you to copy-paste code for a status badge to your `readme.md`. If you do not see this message, add the following code to your readme.md manually: + `[![worcs_endpoints](https://github.com/YOUR_ACCOUNT/PROJECT_REPOSITORY/actions/workflows/worcs_endpoints.yaml/badge.svg)](https://github.com/YOUR_ACCOUNT/PROJECT_REPOSITORY/actions/workflows/worcs_endpoints.yaml)` 1. Commit these changes to GitHub using `git_update()` Visit your project page on GitHub and select the `Actions` tab to see that your reproducibility check is running; visit the main project page to see the new badge in your readme.md file.
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/endpoints.Rmd
--- title: "Connecting to 'Git' remote repositories" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Connecting to 'Git' remote repositories} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(worcs) ``` The WORCS paper describes a workflow centered around 'GitHub', but there are several other cloud hosting services that offer similar functionality. This vignette describes the process of connecting a `worcs` project to these other cloud hosting services. If you are missing your preferred cloud hosting service, please submit a pull request with a step-by-step tutorial for that service [here](https://github.com/cjvanlissa/worcs/pulls). ## GitLab ### Setup steps (do this only once) The 'GitLab' website looks and feels almost identical to 'GitHub'. Steps 4 and 5 of the `setup` vignette can be applied nearly without alterations. To connect `worcs` to 'GitLab', I proceeded as follows: 4. Register on GitLab + Go to [gitlab.com](https://about.gitlab.com/) and click *Register now*. Choose an "Individual", "Free" plan. + Request a [free academic upgrade](https://about.gitlab.com/solutions/education/). 5. Connect 'RStudio' to Git and GitLab (for more support, see [Happy Git with R](https://happygitwithr.com/) a. Open 'RStudio', open the Tools menu, click *Global Options*, and click *Git/SVN* b. Verify that *Enable version control interface for RStudio projects* is selected c. Verify that *Git executable:* shows the location of git.exe. If it is missing, manually fix the location of the file. d. Click *Create RSA Key*. Do not enter a passphrase. Press *Create*. A window with some information will open, which you can close. e. Click *View public key*, and copy the entire text to the clipboard. f. Close 'RStudio' (it might offer to restart by itself; this is fine) g. Go to [gitlab.com](https://about.gitlab.com/) h. Click your user icon in the top right of the screen, click *Settings* i. On the settings page, click *SSH Keys* in the left sidebar j. Copy-paste the public key from your clipboard into the box labeled *Key*. k. Click *Add key*. l. Open 'RStudio' again (unless it restarted by itself) ### Connect new `worcs` project to 'GitLab' To create a new project on 'GitLab', go to your account page, and click the *Create a project* tile in the middle of the screen. * Fill in a *Project name*; do not change anything else. Click the green *Create project* button. * You will see a page titled *"The repository for this project is empty"*. Under the header *"Create a new repository"*, you can see a web address starting with https, like so: `git clone https://gitlab.com/yourname/yourrepo.git` * Copy only this address, from `https://` to `.git`. * Paste this address into the New project dialog window. ## Bitbucket ### Setup steps (do this only once) The 'Bitbucket' website has cosmetic differences from 'GitHub', but works similarly. Steps 4 and 5 of the `setup` vignette can be applied nearly without alterations. To connect `worcs` to 'Bitbucket', I proceeded as follows: 4. Register on Bitbucket + Go to the Bitbucket website and click *Get started for free*. Follow the steps to create your account. Sign in. + Bitbucket has largely automated the process of awarding free academic upgrades. If your email address is not recognized as belonging to an academic institution, you can fill out a form to request this upgrade manually. 5. Connect 'RStudio' to Git and Bitbucket (for more support, see [Happy Git with R](https://happygitwithr.com/) a. Open 'RStudio', open the Tools menu, click *Global Options*, and click *Git/SVN* b. Verify that *Enable version control interface for RStudio projects* is selected c. Verify that *Git executable:* shows the location of git.exe. If it is missing, manually fix the location of the file. d. Click *Create RSA Key*. Do not enter a passphrase. Press *Create*. A window with some information will open, which you can close. e. Click *View public key*, and copy the entire text to the clipboard. f. Close 'RStudio' (it might offer to restart by itself; this is fine) g. Go to the Bitbucket website h. In the bottom left of the screen, click the circular icon with your initials. Select *Personal settings* i. On the settings page, click *SSH Keys* in the left sidebar j. Click *Add key* k. Copy-paste the public key from your clipboard into the box labeled *Key*, and give it a label. Click the *Add key* button. l. Open 'RStudio' again (unless it restarted by itself) ### Connect new `worcs` project to 'Bitbucket' To create a new project on 'Bitbucket', go to your account page, and click *Create repository* in the middle of the page. These steps differ somewhat from the procedure for 'GitHub': * Enter a *Project name* and a *Repository name*. The latter will be used to connect your `worcs` project. * __Important:__ Change the setting *Include a README?* to *No*. * Click "Create repository" * When the project page opens, you will see the tagline "Let's put some bits in your bucket". Change the dropdown menu Just below this tagline from *SSH* to *https*. It will show a web address starting with https, like this: `git clone https://[email protected]/yourrepo.git` * Copy only this address, from `https://` to `.git`. * Paste this address into the New project dialog window.
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/git_cloud.Rmd
--- title: "Reproducing a WORCS project" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Reproducing a WORCS project} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This is a tutorial on how to reproduce a project created with the `worcs` package. At the core of a typical `worcs` projects is an 'R Markdown' document, containing prose and analysis code. This document can be compiled, or "knitted", to reproduce the analyses. This tutorial will guide you through the steps necessary to make that happen. ## Install 'RStudio' and 'R' **You can skip these steps if you have a recent and working installation of 'RStudio' and 'R'.** 1. Install [R (free)](https://CRAN.R-project.org) 1. Install ['RStudio' Desktop (Free)](https://posit.co/download/rstudio-desktop/) ## Install R-package dependencies 1. Install all packages required for WORCS by running the following code in the 'RStudio' console. Be prepared for three contingencies: + If you receive any error saying *There is no package called [package name]*, then run the code `install.packages("package name")` + If you are prompted to update packages, just press [ENTER] to avoid updating packages. Updating packages this way in an interactive session sometimes leads to errors if the packages are loaded. + If you see a pop-up dialog asking *Do you want to install from sources the package which needs compilation?*, click *No*. ``` install.packages("worcs", dependencies = TRUE) tinytex::install_tinytex() renv::consent(provided = TRUE) ``` ## Obtaining the project repository <!-- To do so, open RStudio and, in the console, --> <!-- If you are familiar with 'Git' and 'GitHub', you can "clone" the project as usual. --> WORCS projects are typically hosted on 'GitHub', or another 'Git' remote repository. The recommended way to obtain a local copy of the project repository is to "clone" it. On 'GitHub', this is done by clicking the green button labeled "Code". Clicking it reveals the HTTPS link to the project repository (see below). Copy this link to the clipboard by clicking the clipboard icon next to it. ![](github_download.png) Next, open an RStudio instance and run the following code in the console, replacing `https://github.com/username/reponame.git` with the HTTPS address you just copied to clipboard, and replacing the `c:/where/you/want/the/repo` with the location on your hard drive where you want to clone the repository: ``` gert::git_clone("https://github.com/username/reponame.git", path = "c:/where/you/want/the/repo") ``` **Note: While it is also possible to download a compressed (ZIP) archive containing the project (see the image above), this has an important limitation: a repository downloaded via the GitHub interface is, itself, not a Git repository! This is a peculiarity of the GitHub interface. This might result in unexpected behavior when using any WORCS functionality that relies on Git for version control. Thus, as a general rule, we advise cloning projects instead.** ## Open the project in 'RStudio' Most projects can be opened by loading the '.RProj' file in the main folder. This should be explained in the project 'README.md' as well. ## Restore the package dependencies You will need to restore the packages used by the authors, using the `renv` package. See [this article](https://rstudio.github.io/renv/articles/renv.html) for more information about `renv`. With the project open in 'RStudio', type the following in the console: ``` renv::restore() ``` ## Open the project entry point The entry point is the core document that can be executed to reproduce the analysis. This is typically a manuscript, or occasionally an R-script file. Use the following function to open the entry point file in 'RStudio': ``` load_entrypoint() ``` ## Reproduce the analyses From `worcs` version 0.1.12, projects can be reproduced using the function `reproduce()`. This function will evaluate the reproducibility recipe stored in the `.worcs` project file, and checks whether the resulting endpoints have the correct checksums (i.e., are unchanged relative to the authors' original work). ## No access to original data Sometimes, authors have not made the original data available. In this case, the project ought to contain a synthetic data file with similar properties to the original data. This synthetic data allows you to verify that the analyses can be run, and that the code is correct. The results will, however, deviate from the original findings and should not be substantively interpreted. Authors may use the function `notify_synthetic()` to generate a message in the paper when a synthetic dataset is used. Authors should also provide information in the README.md file on how to obtain access to the original data in case an audit is warranted. Please read the WORCS paper [@vanlissaWORCSWorkflowOpen2021] for more information about how checksums are used so that auditors can verify the authenticity of the original data.
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/reproduce.Rmd
--- title: "Setting up your computer for WORCS - Docker-edition" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Setting up your computer for WORCS - Docker-edition} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` If you do not want to install R, RStudio, Latex and Git on a personal computer (as described in [`vignette("setup", package = "worcs")`](https://cjvanlissa.github.io/worcs/articles/setup.html)), but would like to use Docker instead, follow these steps in order: 1. Install [Docker](https://docs.docker.com/get-docker/) 2. Open a terminal/cmd/shell. 2. Start the `worcs` image: ```{bash, eval=FALSE} docker run -e PASSWORD=secret -p 8787:8787 -it cjvanlissa/worcs:latest ``` 3. Open the address `127.0.0.1:8787/` in a browser. Login using username=rstudio and password=secret. Then setup the container. ```{r, eval=FALSE} renv::consent(provided = TRUE) worcs::git_user("your_name", "your_email") ``` To terminate the container, press Ctrl + C in the terminal. To save files from a Docker session on your disk, you have to link a directory explicitly when starting the container. On Unix file systems, this is done as follows: ```{bash, eval=FALSE} -v /path/on/your/pc:/home/rstudio ``` And on Windows file systems, as follows: ```{bash, eval=FALSE} -v //c/path/on/your/windows/pc:/home/rstudio ``` Then start the Docker session using this command: ```{bash, eval=FALSE} docker run -e PASSWORD=secret -p 8787:8787 -v /path/on/your/pc:/home/rstudio -it cjvanlissa/worcs:latest ``` That's it!
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/setup-docker.Rmd
--- title: "Setting up your computer for WORCS" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Setting up your computer for WORCS} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This is a tutorial on how to set up your personal computer for use with the `worcs` package. It guides you through the installation of several software packages, and registration on GitHub. This vignette does not assume a prior installation of `R`, so it is suitable for novice users. You only have to perform these steps once for every computer you intend to use `R` and `worcs` on, and the entire process should take approximately 30 minutes if you start from scratch. In case some of the software is already installed on your system, you can skip those related steps. Follow these steps in order: 1. Install [R (free)](https://CRAN.R-project.org) 2. Install ['RStudio' Desktop (Free)](https://posit.co/download/rstudio-desktop/) 3. Install Git from [git-scm.com](https://git-scm.com/downloads). Use the default, recommended settings. It is especially important to leave these settings selected: + Git from the command line and also from third party software <!--*The `worcs` R-package calls Git from the command line*--> + Use the 'OpenSSL' library <!--*For secure data transfer with GitHub*--> + Checkout Windows-style, commit Unix-style line endings <!--*This is the preferred setting when collaborating with others on different platforms. Be prepared that, on windows, you will receive harmless notifications about LF to CRLF line endings. *--> + Enable Git Credential Manager <!--*For logging in to GitHub*--> + If you run into any trouble, a more comprehensive tutorial on installing Git is available at [happygitwithr.com](https://happygitwithr.com/install-git.html). 4. Register on 'GitHub' (alternatively: see [this vignette](https://cjvanlissa.github.io/worcs/articles/git_cloud.html) on how to use 'GitLab' or 'Bitbucket') + Go to [github.com](https://github.com/) and click *Sign up*. Choose an "Individual", "Free" plan. <!-- + Request a [free academic upgrade](https://help.github.com/en/articles/applying-for-an-educator-or-researcher-discount). This allows you to create *private repositories*, which are only visible to you and selected collaborators, and can be made public when your work is published. --> 5. Install all packages required for WORCS by running the code block below this writing in the 'RStudio' console. Be prepared for three contingencies: + If you receive any error saying *There is no package called [package name]*, then run the code `install.packages("package name")` + If you are prompted to update packages, just press [ENTER] to avoid updating packages. Updating packages this way in an interactive session sometimes leads to errors if the packages are loaded. + If you see a pop-up dialog asking *Do you want to install from sources the package which needs compilation?*, click *No*. ``` install.packages("worcs", dependencies = TRUE) tinytex::install_tinytex() renv::consent(provided = TRUE) ``` 6. Connect 'RStudio' to Git and GitHub (for more support, see [Happy Git with R](https://happygitwithr.com/) a. Open 'RStudio', open the Tools menu, click *Global Options*, and click *Git/SVN* a. Verify that *Enable version control interface for RStudio projects* is selected a. Verify that *Git executable:* shows the location of git.exe. If it is missing, manually fix the location of the file. <!-- a. Click *Create RSA Key*. Do not enter a passphrase. Press *Create*. A window with some information will open, which you can close. --> <!-- a. Click *View public key*, and copy the entire text to the clipboard. --> a. Restart your computer <!-- a. Go to [github.com](https://github.com/) --> <!-- a. Click your user icon, click *Settings*, and then select the *SSH and GPG keys* tab. --> <!-- a. Click *New SSH key*. Give it an arbitrary name (e.g., your computer ID), and paste the public key from your clipboard into the box labeled *Key*. --> <!-- a. Open 'RStudio' again (unless it restarted by itself) --> a. Run `usethis::create_github_token()`. This should open a webpage with a dialog that allows you to create a Personal Access Token (PAT) to authorize your computer to exchange information with your GitHub account. The default settings are fine; just click "Generate Token" (bottom of the page). a. Copy the generated PAT to your clipboard (NOTE: You will not be able to view it again!) a. Run `gitcreds::gitcreds_set()`. This should open a dialog in the R console that allows you to paste the PAT from your clipboard. a. If you do not have a Git user set up on your computer yet (e.g., if this is the first time you will be using Git), run the following - making sure to substitute your actual username and email: ``` worcs::git_user("your_name", "your_email", overwrite = TRUE) ``` 7. Everything should be installed and connected now. You can verify your installation using an automated test suite. The results will be printed to the console; if any tests fail, you will see a hint for how to resolve it. Run the code below this writing in the 'RStudio' console: ``` worcs::check_worcs_installation() ``` ### Optional step If you intend to write documents in APA style, you should additionally install the `papaja` package. Because `papaja` has many dependencies, it is recommended to skip this step if you intend to write documents in a different style than APA. Unfortunately, this package is not yet available on the central R repository CRAN, but you can install it from 'GitHub' using the following code: ``` install.packages("papaja", dependencies = TRUE, update = "never") ```
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/setup.Rmd
--- title: "The WORCS workflow, version 0.1.6" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{The WORCS workflow, version 0.1.6} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} bibliography: "vignettes.bib" --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(worcs) ``` ## WORCS: Steps to follow for a project This vignette describes the Workflow for Open Reproducible Code in Science, as introduced in @vanlissaWORCSWorkflowOpen2021. The paper describes the rationale and principled approach on which the workflow is based; this vignette describes the practical steps for R-users in greater detail. Note that, although the steps are numbered for reference purposes, we acknowledge that the process of conducting research is not always linear. The workflow is illustrated in the graph below, with optional steps displayed in blue nodes: ```{r, echo = FALSE, out.width="100%"} knitr::include_graphics("workflow.png") ``` ### Phase 1: Study design 1. <!--S: Create a (Public or Private) remote repository on a 'Git' hosting service-->Create a new remote repository on a 'Git' hosting service, such as ["GitHub"](https://github.com) + For inexperienced users, we recommend making this repository "Private", which means only you and selected co-authors can access it. You can set it to "Public" later - for example, when the paper goes to print - and the entire history of the Repository will be public record. We recommend making the repository "Public" from the start __only if__ you are an experienced user and know what you are doing. + Copy the repository link to clipboard; this link should look something like `https://github.com/username/repository.git` 2. <!--S: When using R, initialize a new RStudio project using the WORCS template. Otherwise, clone the remote repository to your local project folder.-->In Rstudio, click File > New Project > New directory > WORCS Project Template a. Paste the remote Repository address in the textbox. This address should look like `https://github.com/username/repository.git` b. Keep the checkbox for `renv` checked if you want to use dependency management (recommended) c. Select a preregistration template, or add a preregistration later using `add_preregistration()` d. Select a manuscript template, or add a manuscript later using `add_manuscript()` e. Select a license for your project (we recommend a CC-BY license, which allows free use of the licensed material as long as the creator is credited) 3. <!--S: Add a README.md file, explaining how users should interact with the project, and a LICENSE to explain users' rights and limit your liability. This is automated by the `worcs` package.-->A template README.md file will be automatically generated during project creation. Edit this template to explain how users should interact with the project. Based on your selections in the New Project dialog, a LICENSE will also be added to the project, to explain users' rights and limit your liability. We recommend a CC-BY license, which allows free use of the licensed material as long as the creator is credited. 4. <!--S: Optional: Preregister your analysis by committing a plain-text preregistration and [tag this commit](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) with the label "preregistration".-->Optional: Preregister your analysis by committing a plain-text preregistration and [tag this commit](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) with the label "preregistration": + Document study plans in a `preregistration.Rmd` file, and optionally, planned analyses in a `.R` file. + In the top-right panel of 'RStudio', select the 'Git' tab + Select the checkbox next to the preregistration file(s) + Click the Commit button. + In the pop-up window, write an informative "Commit message", e.g., "Preregistration" + Click the Commit button below the message dialog + Click the green arrow labeled "Push" to send your commit to the 'Git' remote repository + [Tag this commit as a release](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) on the remote repository, using the label "preregistration". A tagged release helps others retrieve this commit. + Instructions for 'GitHub' [are explained here ](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) <!-- - Go to the 'GitHub' page for your project - Click the label that says "0 releases" - Click the button labeled "Create new release" - In the textbox labeled "Tag version" and the textbox "Release title", write: "Preregistration" - Click the green button labeled "Publish release"--> 5. <!--S: Optional: Upload the preregistration to a dedicated preregistration server-->Optional: Render the preregistration to PDF, and upload it as an attachment to a dedicated preregistration server like AsPredicted.org or OSF.io + In 'RStudio', with the file 'preregistration.Rmd' open, click the "Knit" button above the top left panel + When the PDF is generated, go to one of the recognized preregistration services' websites, create a new preregistration, and upload it as an attachment. + Optional: Generate a DOI for the preregistration through [the OSF](https://help.osf.io/article/218-sharing-data) or a service like [Zenodo](https://docs.github.com/repositories/archiving-a-github-repository/referencing-and-citing-content) 6. <!--S: Optional: Add study materials to the repository-->Optional: Add study materials to the repository. + Only do this for study materials to which you own the rights, or when the materials' license allows it + You can solicit feedback and outside contributions on a 'Git' remote repository by opening an "Issue" or by accepting "Pull requests" ### Phase 2: Writing and analysis 7. <!--S: Create an executable script documenting the code required to load the raw data into a tabular format, and de-identify human subjects if applicable-->Create an executable script documenting the code required to load the raw data into a tabular format, and de-identify human subjects if applicable + Document this preprocessing ("data wrangling") procedure in the `prepare_data.R` file. + This file is intended to document steps that can not or should not be replicated by end users, unless they have access to the raw data file. + These are steps you would run only once, the first time you load data into R. + Make this file as short as possible; only include steps that are absolutely necessary 8. <!--S: Save the data into a plain-text tabular format like `.csv`. When using open data, commit this file to 'Git'. When using closed data, commit a checksum of the file, and a synthetic copy of the data.-->Save the data using `open_data()` or `closed_data()` + <font colour = "red">__WARNING:__ Once you commit a data file to the 'Git' repository, its record will be retained forever (unless the entire repository is deleted). Assume that pushing data to a 'Git' remote repository cannot be undone. Follow the mantra: "Never commit something you do not intend to share".</font> + When using external data sources (e.g., obtained using an API), it is recommended to store a local copy, to make the project portable and to ensure that end users have access to the same version of the data you used. 9. <!--S: Write the manuscript using a dynamic document generation format, with code chunks to perform the analyses.-->Write the manuscript in `Manuscript.Rmd` + Use code chunks to perform the analyses. The first code chunk should call `load_data()` + Finish each sentence with one carriage return (enter); separate paragraphs with a double carriage return. 10. <!--S: Commit every small change to the 'Git' repository-->Regularly Commit your progress to the Git repository; ideally, after completing each small and clearly defined task. + In the top-right panel of 'RStudio', select the 'Git' tab + Select the checkboxes next to all files whose changes you wish to Commit + Click the Commit button. + In the pop-up window, write an informative "Commit message". + Click the Commit button below the message dialog + Click the green arrow labeled "Push" to send your commit to the remote repository 11. <!--S: Use comprehensive citation-->While writing, cite essential references with one at-symbol, `[@essentialref2020]`, and non-essential references with a double at-symbol, `[@@nonessential2020]`. ### Phase 3: Submission and publication 12. <!--S: Use dependency management to make the computational environment fully reproducible-->Use dependency management to make the computational environment fully reproducible. When using `renv`, you can save the state of the project library (all packages used) by calling `renv::snapshot()`. This updates the lockfile, `renv.lock`. 13. <!--S: Optional: Add a WORCS-badge to your project's README file-->Optional: Add a WORCS-badge to your project's README file and complete the optional elements of the WORCS checklist to qualify for a "Perfect" rating. Run the `check_worcs()` function to see whether your project adheres to the WORCS checklist (see `worcs::checklist`) + This adds a WORCS-badge to your 'README.md' file, with a rank of "Fail", "Limited", or "Open". + Reference the WORCS checklist and your paper's score in the paper. + *Optional:* Complete the additional optional items in the WORCS checklist by hand, and get a "Perfect" rating. 14. <!--S: Make a Private 'Git' remote repository Public-->Make the 'Git' remote repository "Public" if it was set to "Private" + Instructions for 'GitHub': - Go to your project's repository - Click the "Settings" button - Scroll to the bottom of the page; click "Make public", and follow the on-screen instructions 15. <!--S: [Create a project page on the Open Science Framework (OSF)](https://help.osf.io/article/252-create-a-project) and [connect it to the 'Git' remote repository](https://help.osf.io/article/211-connect-github-to-a-project)--> [Create a project on the Open Science Framework (OSF)](https://help.osf.io/article/252-create-a-project) and [connect it to the 'Git' remote repository](https://help.osf.io/article/211-connect-github-to-a-project). + On the OSF project page, you can select a License for the project. This helps clearly communicate the terms of reusability of your project. Make sure to use the same License you selected during project creation in Step 3. 16. <!--S: [Generate a Digital Object Identifier (DOI) for the OSF project](https://help.osf.io/article/220-create-dois)--> [Generate a Digital Object Identifier (DOI) for the OSF project](https://help.osf.io/article/220-create-dois) + A DOI is a persistent identifier that can be used to link to your project page. + You may have already created a project page under Step 5 if you preregistered on the OSF + Optionally, you can [generate additional DOIs for specific resources like datasets](https://help.osf.io/article/218-sharing-data). + Alternatively, you can [connect your 'Git' remote repository to Zenodo](https://docs.github.com/repositories/archiving-a-github-repository/referencing-and-citing-content), instead of the OSF, to create DOIs for the project and specific resources. 17. <!--S: Add an open science statement to the Abstract or Author notes, which links to the 'OSF' project page and/or the 'Git' remote repository-->Add an open science statement to the Abstract or Author notes, which links to the 'OSF' project page and/or the 'Git' remote repository. + Placing this statement in the Abstract or Author note means that readers can find your project even if the paper is published behind a paywall. + The link can be masked for blind review. + The open science statement should indicate which resources are available in the online repository; data, code, materials, study design details, a pre-registration, and/or comprehensive citations. For further guidance, see @aalbersbergMakingScienceTransparent2018. Example: _In the spirit of open science, an online repository is available at XXX, which contains [the data/a synthetic data file], analysis code, the research materials used, details about the study design, more comprehensive citations, and a tagged release with the preregistration._ 18. <!--S: Render the dynamic document to PDF-->Knit the paper to PDF for submission + In 'RStudio', with the file 'manuscript.Rmd' open, click the "Knit" button above the top left panel + To retain essential citations only, change the front matter of the 'manuscript.Rmd' file: Change `knit: worcs::cite_all` to `knit: worcs::cite_essential` 19. <!--S: Optional: [Publish the PDF as a preprint, and add it to the OSF project](https://help.osf.io/article/177-upload-a-preprint)-->Optional: [Publish preprint in a not-for-profit preprint repository such as PsyArchiv, and connect it to your existing OSF project](https://help.osf.io/article/177-upload-a-preprint) + Check [Sherpa Romeo](http://sherpa.ac.uk/romeo/index.php) to be sure that your intended outlet allows the publication of preprints; many journals do, nowadays - and if they do not, it is worth considering other outlets. 20. <!--S: Submit the paper, and [tag the commit of the submitted paper as a release](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository) of the submitted paper as a release, as in Step 4.-->Submit the paper, and [tag the commit of the submitted paper as a release](https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-releases-in-a-repository), as in Step 4. ### Notes for cautious researchers <!--S: Some researchers might want to share their work only once the paper is accepted for publication. In this case, we recommend creating a "Private" repository in Step 1, and completing Steps 13-18 upon acceptance.-->Some researchers might want to share their work only once the paper is accepted for publication. In this case, we recommend creating a "Private" repository in Step 1, and completing Steps 13-18 upon acceptance by the journal. **Image attribution** The [Git Logo](https://git-scm.com/) by Jason Long is licensed under the Creative Commons Attribution 3.0 Unported License. The [OSF logo](https://osf.io/) is licensed under CC0 1.0 Universal. Icons in the workflow graph are obtained from [Flaticon](https://www.flaticon.com); see [detailed attribution](https://github.com/cjvanlissa/worcs/blob/master/paper/workflow_graph/Attribution_for_images.txt). ## Sample WORCS projects For a list of sample `worcs` projects created by the authors and other users, see the [`README.md` file on the WORCS GitHub page](https://github.com/cjvanlissa/worcs). This list is regularly updated. **References**
/scratch/gouwar.j/cran-all/cranData/worcs/vignettes/workflow.Rmd
align.ibm1 <- function(..., iter = 5, dtfile.path = NULL, name.sorc = 'f',name.trgt = 'e', result.file = 'result', input = FALSE) { date1 = as.POSIXlt (Sys.time(), "Iran") a = b = count0 = count = total = i = j = e = f = g = c () #-----------------------Translation:f to e ---------------------- aa = prepare.data (...) n1 = aa[[1]] aa = cbind(paste('null', aa[[2]][,1]), aa[[2]][,2]) len = nrow(aa) if(is.null(dtfile.path)) { b = apply (aa, 1, function (x) {Vt1 = strsplit (as.character (x [1]), ' ') [[1]]; Vt2 = strsplit (as.character (x[2]), ' ') [[1]]; Vt1 = Vt1 [Vt1 != '']; Vt2 = Vt2 [Vt2 != '']; cbind (Var1 = rep.int (Vt1, length (Vt2)), Var2 = rep (Vt2, each = length (Vt1)))}) cc = vapply (b,length,FUN.VALUE=0)/2 #-------------------------- main code --------------------------- dd1 = data.table (g = rep (1 : len, cc), f = unlist (sapply (b, function (x) x [,1])), e = unlist (sapply (b, function (x) x [,2])), t = as.numeric (rep (1 / cc, cc))) rm (b, cc) gc () iteration = 0 for (iiiii in 1 : iter) { iteration = iteration + 1 dd1 [, count0 := t / sum(t), by = paste (g, e)] dd1 [, t := NULL] dd1 [, count := sum (count0), by = paste (e, f)] dd1 [, total := sum (count0), by = f] dd1 [, t := count/total] dd1 [, count0 := NULL] dd1 [, count := NULL] dd1 [, total := NULL] } save (dd1,iteration, file = paste(name.sorc, name.trgt, n1, iter, 'RData', sep = '.')) if (input) return (dd1) cat(paste(getwd(), '/', name.sorc,'.', name.trgt,'.', n1, '.', iter, '.RData',' created', '\n', sep='')) } # ------- Using saved file ---- if(! is.null(dtfile.path)) if (file.exists(dtfile.path)){ load(dtfile.path) if (input) return (dd1) } else{cat("Error: No such file or directory in dtfile.path.")} #--------------------- Best alignment -------------------------- word = strsplit(aa,' ') word2 = word [1 : len] word2=sapply(1:len,function(x)word2[[x]][word2[[x]] != ""]) word3 = word [(len+1):(2*len)] word3=sapply(1:len,function(x)word3[[x]][word3[[x]] != ""]) lf = vapply(word2 ,length,FUN.VALUE=0) le = vapply(word3 ,length,FUN.VALUE=0) dd1 [, i := unlist (sapply (1 : len, function (x) rep (0 : (lf [x]-1), le [x])))] dd1 [, j := unlist (sapply (1 : len, function (x) rep (1 : (le[x]), each = lf [x])))] d1 = dd1 [, i [ which.max (t)], by = list (g, j)] [[3]] c1 = c (0, cumsum (le)) ef_word = sapply (1 : len, function (x) paste (word3 [[x]], word2[[x]] [d1 [ (c1 [x] + 1) : c1 [x + 1]] + 1], sep = ' ')) ef_init = sapply (1 : 3, function (x) paste (word3 [[x]], word2[[x]] [d1 [ (c1 [x] + 1) : c1 [x + 1]] + 1], sep = ' --> ')) ef_end = sapply ((len - 2) : len, function (x) paste (word3 [[x]], word2[[x]] [d1 [ (c1 [x] + 1) : c1 [x + 1]] + 1], sep = ' --> ')) ef_number = sapply (1 : len, function (x) d1 [ (c1 [x] + 1) : c1 [x + 1]]) #------------- Expected Length of both languages---------------- ex1 = mean (lf) - 1 ex2 = mean (le) #------------- Vocabulary size of both languages---------------- v.s1 = length (unique (unlist (word2))) v.s2 = length (unique (unlist (word3))) #----------------- Word Translation Probability ---------------- dd2 = unique (dd1 [, t, by = list (e,f)]) date2=as.POSIXlt(Sys.time(), "Iran") #^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ mylist = list (model = 'IBM1', initial_n = n1, used_n = len, time = date2 - date1, iterIBM1 = iteration, expended_l_source = ex1, expended_l_target = ex2, VocabularySize_source = v.s1, VocabularySize_target = v.s2, word_translation_prob = dd2, word_align = ef_word, align_init = ef_init, align_end = ef_end, number_align = ef_number, aa = sapply(1:len,function(x)paste(word2[[x]],sep='',collapse=' '))) save(mylist,file = paste(result.file, name.sorc, name.trgt, n1, iter, 'RData', sep = '.')) cat(result.file, '.', name.sorc, '.', name.trgt, '.', n1, '.', iter,'.RData',' created','\n',sep='') #^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^# attr(mylist, "class") <- "align" return (mylist) } ### align.symmet <- function (file.sorc, file.trgt, n = -1L, iter = 4, method = c ('union', 'intersection', 'grow-diag'), encode.sorc = 'unknown', encode.trgt = 'unknown', name.sorc = 'f', name.trgt = 'e', ...) { date1 = as.POSIXlt (Sys.time(), 'Iran') method = match.arg (method) ef1 = align.ibm1 (file.sorc, file.trgt, n = n, iter = iter, encode.sorc = encode.sorc, encode.trgt = encode.trgt, name.sorc = name.sorc, name.trgt = name.trgt, ... ) $ number_align fe1 = align.ibm1 (file.trgt, file.sorc, n = n, iter = iter, encode.sorc = encode.trgt, encode.trgt = encode.sorc, name.sorc = name.trgt, name.trgt = name.sorc, ... ) $ number_align len = length (fe1) aa = prepare.data (file.sorc, file.trgt, n = n, encode.sorc = encode.sorc, encode.trgt = encode.trgt, ...) aa = aa[[2]] aa[,1] = paste('null',aa[,1]); aa[,2] = paste('null',aa[,2]) word = strsplit(aa,' ') word2 = word [1 : len] word2=sapply(1:len,function(x)word2[[x]][word2[[x]] != ""]) word3 = word [(len+1):(2*len)] word3=sapply(1:len,function(x)word3[[x]][word3[[x]] != ""]) lf = vapply (word2, length, FUN.VALUE = 0) le = vapply (word3, length, FUN.VALUE = 0) #---- position of matrix f to e (rows = the source language(e), columns = The target language(f))---- fe = sapply (1 : len, function (x) (2 : lf[x]) * (le[x] + 2) + (fe1[[x]] + 2)) #column's position in added matrix (2 rows and 2 columns are added in the marginal of initial matrix) #---- position of matrix e to f (rows=the target language(e),columns=The source language(f))---- ef = sapply (1 : len, function (x) (2 : le[x]) * (lf[x] + 2) + (ef1[[x]] + 2)) #row's position in added matrix (2 rows and 2 columns are added in the marginal of initial matrix) ef = sapply (1 : len, function (x) (ef [[x]] - (ef1 [[x]] + 2)) / (lf [x] + 2) + (ef1 [[x]] + 1) * (le [x] + 2) + 1) # computing column's position using row's positions #---------------------------------------------------------------- # Union Word Alignment without null #---------------------------------------------------------------- if (method == 'union') { union = sapply (1 : len, function (x) unique (c (ef [[x]], fe [[x]]))) pos_col = sapply (1 : len, function (x) floor (union [[x]] / (le [x] + 2))) # column's number related to the source language in the matrix pos_row = sapply (1 : len, function (x) union [[x]] - pos_col [[x]] * (le[x] + 2) - 1) # row's number related to the target language in the matrix align_un_int = sapply(1 : len, function(x) paste (pos_row[[x]], pos_col[[x]], sep = ' ')) align_un = sapply(1 : len, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' ')) align_init = sapply(1 : 3, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' --> ')) align_end = sapply((len - 2) : len, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' --> ')) date2 = as.POSIXlt(Sys.time(), "Iran") mylist = list(time = date2 - date1, model = paste('symmetric',method), initial_n = n, used_n = len, iterIBM1 = iter, word_align = align_un, align_init = align_init, align_end = align_end, align_un_int = align_un_int, aa = sapply(1:len,function(x)paste(word2[[x]],sep='',collapse=' '))) save(mylist,file = paste('result', method, n, iter, 'RData', sep = '.')) cat(paste(getwd(), '/', 'result', '.', method, '.', n, '.', iter, '.RData',' created','\n',sep='')) attr(mylist, "class") <- "align" return (mylist) } #---------------------------------------------------------------- # Intersection Word Alignment without null #---------------------------------------------------------------- if (method == 'intersection') { intersection = sapply (1 : len, function(x)fe [[x]][fe [[x]] %in% ef[[x]]]) pos_col = sapply (1 : len, function (x) floor (intersection [[x]] / (le [x] + 2))) # column's number related to the source language in the matrix pos_row = sapply (1 : len, function (x) intersection [[x]] - pos_col [[x]] * (le[x] + 2) - 1) # row's number related to the target language in the matrix align_in_int = sapply(1 : len, function(x) paste (pos_row[[x]], pos_col[[x]], sep = ' ')) align_in = sapply(1 : len, function(x) paste ( word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' ')) align_init = sapply(1 : 3, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' --> ')) align_end = sapply((len - 2) : len, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' --> ')) date2 = as.POSIXlt(Sys.time(), "Iran") mylist = list(time = date2 - date1, model = paste('symmetric',method), initial_n = n, used_n = len, iterIBM1 = iter, word_align = align_in, align_init = align_init, align_end = align_end, align_in_int = align_in_int, aa = sapply(1:len,function(x)paste(word2[[x]],sep='',collapse=' '))) save(mylist,file = paste('result', method, n, iter, 'RData', sep = '.')) cat(paste(getwd(), '/', 'result', '.', method, '.', n, '.', iter, '.RData',' created','\n',sep='')) attr(mylist, "class") <- "align" return(mylist) } #---------------------------------------------------------------- # GROW-DIAG Word Alignment without null #---------------------------------------------------------------- if(method=='grow-diag') { g_d = sapply (1 : len, function(x) neighbor (fe [[x]],ef [[x]],(le [x] + 2))) pos_col = sapply (1 : len, function (x) floor (g_d [[x]] / (le [x] + 2))) # column's number related to the source language in the matrix pos_row = sapply (1 : len, function (x) g_d [[x]] - pos_col [[x]] * (le[x] + 2) - 1) # row's number related to the target language in the matrix align_gd_int = sapply(1 : len, function(x) paste (pos_row[[x]], pos_col[[x]], sep = ' ')) symmet = sapply(1 : len, function(x) paste ( word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' ')) align_init = sapply(1 : 3, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' --> ')) align_end = sapply((len - 2) : len, function(x) paste (word3 [[x]][pos_row[[x]]], word2 [[x]][pos_col[[x]]], sep = ' --> ')) date2 = as.POSIXlt(Sys.time(), "Iran") mylist = list(time = date2 - date1, model = paste('symmetric',method), initial_n = n, used_n = len, iterIBM1 = iter, word_align = symmet, align_init = align_init, align_end = align_end, align_gd_int = align_gd_int, aa = sapply(1:len,function(x)paste(word2[[x]],sep='',collapse=' '))) save(mylist,file = paste('result', method, n, iter, 'RData', sep = '.')) cat(paste(getwd(), '/', 'result', '.', method, '.', n, '.', iter, '.RData',' created','\n',sep='')) attr(mylist, "class") <- "align" return(mylist) } } ### print.align <- function(x, ...) { print(x $ time) cat("The model is",x$model , "\n") cat("The number of input sentence pairs is", x $ initial_n, "\n") cat("The number of used sentence pairs is", x $ used_n, "\n") cat("The number of iterations for EM algorithm is", x$iterIBM1, "\n") cat("Word alignment for some sentence pairs are", "\n") sapply(1:3,function(i){cat(paste(i,x $ aa[i],sep=': '),'\n'); print(noquote(x $ align_init[[i]]))}) cat(" ", ".", "\n") cat(" ", ".", "\n") cat(" ", ".", "\n") sapply((length(x $ word_align) - 2) : length(x $ word_align), function(i){cat(paste(i,x $ aa[i],sep=': '),'\n'); print(noquote(x $ align_end[[i - (length(x $ word_align)- 3) ]]))}) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/align.ibm1.R
align.test<- function(file.sorc, file.trgt, test.sorc, test.trgt, n.train = -1, n.test = -1, minlen.train = 5, maxlen.train = 40, minlen.test = 5, maxlen.test = 40, null.tokens = TRUE, dtfile.path = NULL, file.align = 'alignment',name.sorc='f',name.trgt='e',iter = 3, ...) { g = fe = f = e = ge = c() #------- constructing a data.table using align.ibm1 function for the first time if(is.null(dtfile.path)) { yn = 'yn' while(yn != 'y' & yn != 'n') yn = readline('Are you sure that you want to run the align.ibm1 function (It takes time)? (y/ n: if you want to specify word alignment path, please press "n".)') if (yn == 'y') { dd1 = align.ibm1(file.sorc, file.trgt, n = n.train, min.len = minlen.train, max.len = maxlen.train, input = TRUE,...) }else{ return("Error: No such file or directory in dtfile_path.") } } # ------- reading an already built data.table using align.ibm1 function ---- if(! is.null(dtfile.path)) if (file.exists(dtfile.path)){ load(dtfile.path) }else{cat('Error: No such file or directory in dtfile.path.')} # ----------------- aa = prepare.data (test.sorc, test.trgt, n = n.test, min.len = minlen.test, max.len = maxlen.test, ...) aa = aa[[2]] if (null.tokens) aa = cbind(paste('null',aa[,1]),aa[,2]) len = nrow(aa) b = apply (aa, 1, function (x) {Vt1 = strsplit (as.character (x [1]), ' ') [[1]]; Vt2 = strsplit (as.character (x[2]), ' ') [[1]]; Vt1 = Vt1 [Vt1 != '']; Vt2 = Vt2 [Vt2 != '']; cbind (Var1 = rep.int (Vt1, length (Vt2)), Var2 = rep (Vt2, each = length (Vt1)))}) cc = vapply (b,length,FUN.VALUE=0)/2 dd2 = data.table (g = rep (1 : len, cc), f = unlist (sapply (b, function (x) x [,1])), e = unlist (sapply (b, function (x) x [,2]))) dd1[, g := NULL] dd1 = unique(dd1) dd1[,fe := paste(f,e)] dd1[,f := NULL] dd1[,e := NULL] dd2[,fe := paste(f,e)] dd1 = merge(dd1, dd2, by = 'fe', allow.cartesian = TRUE) dd1[, fe := NULL] dd2[, fe := NULL] dd4 = cbind(dd1[,g[which.max(t)],by = paste(g,e)], dd1[,f[which.max(t)],by = paste(g,e)][[2]], dd1[,e[which.max(t)],by = paste(g,e)][[2]]) setnames(dd4,c('ge','g','f','e')) dd4[, ge := NULL] if (null.tokens) { dd = 'null' } else { dd = 'nolink' } save(dd, dd4,file = paste(file.align, name.sorc, name.trgt, n.train, iter,'RData',sep='.')) cat(file.align, '.', name.sorc, '.', name.trgt, '.', n.train, '.', iter, '.RData',' created','\n',sep='') }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/align.test.R
bidictionary <- function (..., n = -1L, iter = 15, prob = 0.8, dtfile.path = NULL, name.sorc = 'f', name.trgt = 'e') { date1 = as.POSIXlt (Sys.time(), 'Iran') e = f = c() if(is.null(dtfile.path)) { yn = 'yn' while(yn != 'y' & yn != 'n') yn = readline('Are you sure that you want to run the align.ibm1 function (It takes time)? (y/ n: if you want to specify word alignment path, please press "n".)') if (yn == 'y') { dd1 = align.ibm1 (...,n = n, iter = iter, input = TRUE) save(dd1,iter, file = paste(name.sorc, name.trgt, n, iter, 'RData',sep='.')) cat(paste(getwd(), '/', name.sorc,'.', name.trgt,'.', n, '.', iter, '.RData',' created','\n', sep='')) }else{ return(cat('Error: No such file or directory in dtfile.path.', '\n')) } } if(! is.null(dtfile.path)) if (file.exists(dtfile.path)) { load(dtfile.path) }else{ return(cat('Error: No such file or directory in dtfile.path.', '\n')) } u1 = unique (dd1 [round (t, 1) > prob, f, e]) fe = matrix (c (u1$f, u1$e), ncol = 2) fe = fe [order (fe [,1]),] fe = apply(fe,1,paste,collapse=':') date2 = as.POSIXlt (Sys.time(), 'Iran') ################################################################## mylist = list (time = date2 - date1, number_input = n, Value_prob = prob, iterIBM1 = iter, Source_Language = name.sorc, Target_Language = name.trgt, dictionary = fe) ################################################################## return (mylist) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/bidictionary.R
cross.table<- function ( ..., null.tokens = TRUE, out.format = c('rdata','excel'), file.align = 'alignment') { out.format = match.arg(out.format) p1 = prepare.data (..., word.align = FALSE) len = p1 $ used p1 = unlist (p1, recursive = FALSE) if (null.tokens) { p1 = sapply(3 : length(p1), function(x) c('null', p1[[x]])); fg1 = "null" } else { p1 = sapply(3 : length(p1), function(x) p1[[x]]); fg1 = "nolink" } if (out.format == 'rdata') { readline(paste("If you want to build a gold standard, please enter '1|2' for Sure|Possible alignments. \nIf you want to construct an alignment matrix which is computed by another software, please enter '1' for alignments.\nNow, press 'Enter' to continue.",sep='')) mm = sapply (1 : len, function (x) {m = matrix (0, length (p1 [[x]]) + 1, length (p1 [[x + len]]) + 1); m [2 : nrow (m), 1] = p1 [[x]]; m [1, 2 : ncol(m)] = p1 [[x+len]]; m [1, 1] = ''; m}) fg = c() for(sn in 1 : len) { fg2 = mm [[sn]] fg2 = fix (fg2) fg[[sn]] = fg2 } save(fg, fg1, file = paste(file.align,'RData',sep='.')) print(paste(getwd(), '/', file.align,'.RData',' created',sep='')) } if (out.format == 'excel') { file_align = paste(file.align,'xlsx',sep='.') wb1 <- createWorkbook ("data") for (j in 1 : len) { m1 = matrix (0, length (p1 [[j]]) + 1, length (p1 [[j + len]]) + 1) m1 [2 : nrow (m1), 1] = p1 [[j]]; m1 [1, 2 : ncol (m1)] = p1 [[j + len]]; m1 [1, 1] = '' addWorksheet (wb1, as.character(j)) writeData (wb1, sheet =j, m1) saveWorkbook (wb1, file_align, overwrite = TRUE) } cat (paste("Now, please edit ","'", file.align,"'",".", "\nIf you want to build a gold standard, please enter 1|2 for Sure|Possible alignments.\nIf you want to construct an alignment matrix which is computed by another software, please enter '1' for alignments.\nImportant: In order to use the created excel file for evaluation function,\ndon't forget to use excel2rdata function to convert the excel file into required R format.\n(evaluation and excel2rdata are functions in the current package.)\n ",sep='')) print(paste(getwd(), '/', file.align,' created',sep='')) } }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/cross.table.R
evaluation <- function(file.gold = 'gold.RData', file.align = 'alignment.-1.3.RData', agn = c('my.agn','an.agn'), alpha = 0.3) { date1 = as.POSIXlt (Sys.time(), 'Iran') e = f = g = fg1= fg = SP = dd = gfe = A = recall0 = n_AS = n_S = precision0 = n_AP = n_A = AER0 = c() agn = match.arg(agn) #------------------------Constructing a gold standard -------------------------- readline(paste('Please ensure that you create the file(s) by cross.table function in this package and you fill the\n "', file.align, '" file by "1|2" for sure|possible. Then press "Enter" to continue.', sep='')) readline(paste('If you have an excel file, please convert it into required RData format using excel2rdata function available in this package.\nThen press "Enter" to continue.', sep = '')) load(file = file.gold) null1 = fg1 len = length(fg) sum1 = sapply(1 : len, function(x)sum(as.numeric(fg[[x]][-1,-1]))) if( sum(sum1 == 0)!= 0) paste('Warning: sentence(s)', paste(which(sum1==0), collapse=','), 'has (have) not been aligned.') dd2 = c() for (sn in 1 : len) { dd3 = data.table(cbind( g = sn, expand.grid(f = fg[[sn]][-1,1], e = fg[[sn]][1,-1]), SP = c(fg[[sn]][-1,-1]))) dd2 = rbind(dd2, dd3) dd2 $ SP[is.na(dd2 $ SP)] = 0 } dd3 = dd2[,sum(SP == 1),by = g] dd2 = merge(dd3, dd2, by = 'g') setnames(dd2,c('g','n_S','f','e','SP')) rm(dd3) gc() #----- computing word alignment based on my IBM model1 using align.ibm1 function ----- if(agn == 'my.agn') { load(file = file.align) if (dd != null1) { return(paste("Error: gold standard alignment and word alignment must be the same. But, the gold is including " , null1, " and the alignment is containing ", dd, ".", sep='')) } else { dd4[, gfe := paste(g,f,e)] dd2[, gfe := paste(g,f,e)] dd4[, g := NULL] dd4[, f := NULL] dd4[, e := NULL] dd4 = unique( merge(dd4, dd2, by = 'gfe')) dd4[, gfe := NULL] } } #-------------- reading computed word alignment using another software --------------- if (agn == 'an.agn') { readline(paste('Please ensure that you create the file(s) by cross.table function in this package and you fill the\n", file.align, "file by "1|2" for sure|possible. Then press "Enter" to continue.',sep='')) readline(paste('If you have an excel file, please convert it into required RData format using excel2rdata function available in this package. \nThen press "Enter" to continue.', sep = '')) load(file = file.align) if (fg1 != null1){ return(paste("Error: gold standard alignment and word alignment must be the same. But, the gold is including " , null1, " and the alignment is containing ", dd, ".", sep='')) } else { dd1 = c() for (sn in 1 : len) { dd3 = data.table(cbind( g = sn, expand.grid(f = fg[[sn]][-1,1], e = fg[[sn]][1,-1]), A = c(fg[[sn]][-1,-1]))) dd1 = rbind(dd1, dd3) dd1 $ A[is.na(dd1 $ A)] = 0 } dd1 = cbind(dd2, A = dd1 $ A) dd4 = dd1[ A == 1] dd4[, A := NULL] } } ################## dd4[ , `:=`( n_A = .N ) , by = g ] dd5 = sapply(1 : 2,function(x)dd4[,sum(SP == x),by = g]) dd4 = merge(dd4,dd5[,1],by = 'g') dd4 = merge(dd4,dd5[,2],by = 'g') rm(dd5) gc() setnames(dd4,c('g','n_S','f','e', 'SP', 'n_A','n_AS','n_AP')) dd4 = unique(dd4[,by = g]) dd4[,recall0 := as.numeric(n_AS) / as.numeric(n_S)] dd4 $ recall0[is.nan(dd4 $ recall0)] = 1 dd4 $ recall0[is.infinite(dd4 $ recall0)] = 0 dd4[,precision0 := as.numeric(n_AP) / as.numeric(n_A)] dd4 $ precision0[is.nan(dd4 $ precision0)] = 1 dd4 $ precision0[is.infinite(dd4 $ precision0)] = 0 dd4[,AER0 := (as.numeric(n_AP) + as.numeric(n_AS)) / (as.numeric(n_A) + as.numeric(n_S))] dd4 $ AER0[is.nan(dd4 $ AER0)] = 1 dd4 $ AER0[is.infinite(dd4 $ AER0)] = 0 dd4[,precisionS := as.numeric(n_AS)/as.numeric(n_A)] dd4 $ precisionS[is.nan(dd4 $ precisionS)] = 1 dd4 $ precisionS[is.infinite(dd4 $ precisionS)] = 0 #---------- recall, precision and accuracy measures----------- recall = dd4[,mean(recall0)] precision = dd4[,mean(precision0)] AER = dd4[,mean(AER0)] F_measure.PS = 1 / (alpha / precision + (1 - alpha)/recall) precisionS = dd4[,mean(precisionS)] F_measure.S = 1 / (alpha / precisionS + ( 1- alpha)/recall) date2 = as.POSIXlt (Sys.time(), "Iran") ############################################################# list2 = list(time = date2 - date1, Recall = recall, Precision = precision, AER = 1 - AER, F_measure.PS = F_measure.PS, F_measure.S = F_measure.S) ############################################################# return(list2) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/evaluation.R
excel2rdata <- function(file.align = 'alignment.xlsx', null.tokens = TRUE, len = len) { fg = c() #wb = loadWorkbook(file.align) for(sn in 1 : len) { df1 <- read.xlsx (xlsxFile = file.align, sheet = sn) df1 = as.matrix(df1) fg[[sn]] = df1 } fg1 = ifelse(null.tokens, 'null', 'nolink') save(fg, fg1, file = paste(file.align,'.RData',sep='')) cat(paste(file.align,'.RData',' created','\n', sep='')) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/excel2rdata.R
neighbor <- function (fe, ef, n.row) { kk = function (x,y) c ((x - 1), (x - 1 + 2), (x - 1 - y) : (x - 1 - y + 2), (x - 1 + y) : (x - 1 + y + 2)) iii = fe [fe %in% ef] i2 = length (iii) if (i2 == 0) {iii = numeric(0); return (iii)} repeat { i2 = length (iii) s = sapply (iii, kk, n.row) if (i2 != 0) iii = as.numeric (names (table (c (iii, cbind (s,s) [matrix (c (s %in% ef, s %in% fe), ncol = 2 * length (iii))])))) if (i2 == length (iii)) break } iii }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/neighbor.R
nfirst2lower <- function(x, n = 1, first = TRUE, second = FALSE) { if (first) x = paste (tolower (substr (x, 1, n)), substring (x, n+1), sep = '') if (second) x [substring (x, 2, 2) %in% LETTERS] = tolower (x [substring (x, 2, 2) %in% LETTERS]) return (x) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/nfirst2lower.R
prepare.data <- function(file.sorc, file.trgt, n = -1L, encode.sorc = 'unknown' , encode.trgt = 'unknown', min.len = 5, max.len = 40, remove.pt = TRUE, word.align = TRUE) { s_sen = t_sen = aa = t = c() s_sen = readLines (con <- file(file.sorc), encoding = encode.sorc, n = n, warn = FALSE) close(con) t_sen = readLines (con <- file(file.trgt), encoding = encode.trgt, n = n, warn = FALSE) close(con) if (length(s_sen) == length(t_sen)) { for (k1 in 1 : length (s_sen)) if (s_sen[k1] == '') {t_sen [k1+1] = paste (t_sen [k1], t_sen [k1+1]); t_sen [k1] = ''} for (k2 in 1 : length (t_sen)) if (t_sen[k2] == '') {s_sen [k2+1] = paste (s_sen [k2], s_sen [k2+1]); s_sen [k2] = ''} } s_sen = s_sen [nzchar (s_sen)] t_sen = t_sen [nzchar (t_sen)] aa = cbind(s_sen,t_sen) len1 = nrow(aa) #------------------------- Tokenization -------------------------- aa[,1] = nfirst2lower (aa [,1]) aa[,2] = nfirst2lower (aa [,2]) rm (s_sen, t_sen) gc () if(remove.pt) aa = sapply(1:(2*len1), function(x)remove.punct(aa[[x]])) if(!remove.pt) aa = strsplit(aa,' ') word2 = aa [1 : len1] word3 = aa [ (len1+1) : (2 * len1)] aa = cbind (sapply(word2,paste, collapse = ' '), sapply(word3, paste, collapse = ' ')) aa = aa [apply (aa, 1, function(x) prod (vapply (strsplit (x, ' '), length, FUN.VALUE=0) >= min.len)& prod (vapply (strsplit (x, ' '), length, FUN.VALUE=0) <= max.len) == 1) ,] if(word.align) { aa = list (len1, aa) return(aa) } len2 = length(aa) / 2 if(remove.pt) aa = sapply(1:(2*len2), function(x)remove.punct(aa[[x]])) if(!remove.pt) aa = strsplit(aa,' ') list1 = list (initial = len1, used = len2, sorc.tok = aa [1 : len2], trgt.tok = aa[ (len2 + 1) : (2 * len2)] ) return (list1) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/prepare.data.R
remove.punct<-function(text) { x1 = strsplit(text,' ')[[1]] x2 = gsub('[!?,{}"();:]','',x1) x3 = gsub("'s"," 's",x2) x3 = unlist(strsplit(x3,' ')) x3 = gsub("'",'',x3) x4 = gsub('[[]','',x3) x5 = gsub('[]]','',x4) x5 = x5[nzchar(x5)] if(x5[length(x5)]==".") x5 = x5[-length(x5)] return (x5) }
/scratch/gouwar.j/cran-all/cranData/word.alignment/R/remove.punct.R
# Generated by using Rcpp::compileAttributes() -> do not edit by hand # Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393 w2v_train <- function(texts_, stopWords_, trainFile, modelFile, stopWordsFile, minWordFreq = 5L, size = 100L, window = 5L, expTableSize = 1000L, expValueMax = 6L, sample = 0.001, withHS = FALSE, negative = 5L, threads = 1L, iterations = 5L, alpha = 0.05, withSG = FALSE, wordDelimiterChars = " \n,.-!?:;/\"#$%&'()*+<=>@[]\\^_`{|}~\t\v\f\r", endOfSentenceChars = ".\n?!", verbose = FALSE, normalize = TRUE) { .Call('_word2vec_w2v_train', PACKAGE = 'word2vec', texts_, stopWords_, trainFile, modelFile, stopWordsFile, minWordFreq, size, window, expTableSize, expValueMax, sample, withHS, negative, threads, iterations, alpha, withSG, wordDelimiterChars, endOfSentenceChars, verbose, normalize) } w2v_load_model <- function(file, normalize = TRUE) { .Call('_word2vec_w2v_load_model', PACKAGE = 'word2vec', file, normalize) } w2v_save_model <- function(ptr, file) { .Call('_word2vec_w2v_save_model', PACKAGE = 'word2vec', ptr, file) } w2v_dictionary <- function(ptr) { .Call('_word2vec_w2v_dictionary', PACKAGE = 'word2vec', ptr) } w2v_embedding <- function(ptr, x) { .Call('_word2vec_w2v_embedding', PACKAGE = 'word2vec', ptr, x) } w2v_nearest <- function(ptr, x, top_n = 10L, min_distance = 0.0) { .Call('_word2vec_w2v_nearest', PACKAGE = 'word2vec', ptr, x, top_n, min_distance) } w2v_nearest_vector <- function(ptr, x, top_n = 10L, min_distance = 0.0) { .Call('_word2vec_w2v_nearest_vector', PACKAGE = 'word2vec', ptr, x, top_n, min_distance) } w2v_read_binary <- function(modelFile, normalize, n) { .Call('_word2vec_w2v_read_binary', PACKAGE = 'word2vec', modelFile, normalize, n) } d2vec <- function(ptr, x, wordDelimiterChars = " \n,.-!?:;/\"#$%&'()*+<=>@[]\\^_`{|}~\t\v\f\r") { .Call('_word2vec_d2vec', PACKAGE = 'word2vec', ptr, x, wordDelimiterChars) } d2vec_nearest <- function(ptr_w2v, ptr_d2v, x, wordDelimiterChars = " \n,.-!?:;/\"#$%&'()*+<=>@[]\\^_`{|}~\t\v\f\r") { .Call('_word2vec_d2vec_nearest', PACKAGE = 'word2vec', ptr_w2v, ptr_d2v, x, wordDelimiterChars) }
/scratch/gouwar.j/cran-all/cranData/word2vec/R/RcppExports.R
embed_doc <- function(model, tokens, encoding = "UTF-8"){ ## Get embedding of the tokens emb <- predict(model, tokens, "embedding", encoding = encoding) emb <- emb[which(!is.na(emb[, 1])), , drop = FALSE] if(nrow(emb) == 0){ emb <- rep(NA_real_, ncol(emb)) return(emb) } ## Sum the embeddings and standardise emb <- colSums(emb, na.rm = TRUE) emb <- emb / sqrt(sum(emb^2) / length(emb)) emb } #' @title Get document vectors based on a word2vec model #' @description Document vectors are the sum of the vectors of the words which are part of the document standardised by the scale of the vector space. #' This scale is the sqrt of the average inner product of the vector elements. #' @param object a word2vec model as returned by \code{\link{word2vec}} or \code{\link{read.word2vec}} #' @param newdata either a list of tokens where each list element is a character vector of tokens which form the document and the list name is considered the document identifier; #' or a data.frame with columns doc_id and text; or a character vector with texts where the character vector names will be considered the document identifier #' @param split in case \code{newdata} is not a list of tokens, text will be splitted into tokens by splitting based on function \code{\link{strsplit}} with the provided \code{split} argument #' @param encoding set the encoding of the text elements to the specified encoding. Defaults to 'UTF-8'. #' @param ... not used #' @return a matrix with 1 row per document containing the text document vectors, the rownames of this matrix are the document identifiers #' @seealso \code{\link{word2vec}}, \code{\link{predict.word2vec}} #' @export #' @examples #' path <- system.file(package = "word2vec", "models", "example.bin") #' model <- read.word2vec(path) #' x <- data.frame(doc_id = c("doc1", "doc2", "testmissingdata"), #' text = c("there is no toilet. on the bus", "no tokens from dictionary", NA), #' stringsAsFactors = FALSE) #' emb <- doc2vec(model, x, type = "embedding") #' emb #' #' newdoc <- doc2vec(model, "i like busses with a toilet") #' word2vec_similarity(emb, newdoc) #' #' ## similar way of extracting embeddings #' x <- setNames(object = c("there is no toilet. on the bus", "no tokens from dictionary", NA), #' nm = c("a", "b", "c")) #' emb <- doc2vec(model, x, type = "embedding") #' emb #' #' ## similar way of extracting embeddings #' x <- setNames(object = c("there is no toilet. on the bus", "no tokens from dictionary", NA), #' nm = c("a", "b", "c")) #' x <- strsplit(x, "[ .]") #' emb <- doc2vec(model, x, type = "embedding") #' emb #' #' ## show behaviour in case of NA or character data of no length #' x <- list(a = character(), b = c("bus", "toilet"), c = NA) #' emb <- doc2vec(model, x, type = "embedding") #' emb doc2vec <- function(object, newdata, split = " ", encoding = "UTF-8", ...){ if(!inherits(object, c("word2vec", "word2vec_trained"))){ warning("doc2vec requires as input an object of class word2vec") } if(is.character(newdata)){ newdata <- strsplit(newdata, split = split) }else if(is.data.frame(newdata) && all(c("doc_id", "text") %in% colnames(newdata))){ txt <- as.character(newdata$text) names(txt) <- newdata$doc_id newdata <- strsplit(txt, split) }else{ stopifnot(is.list(newdata)) } embedding <- lapply(newdata, FUN=function(x){ if(length(x) == 0){ return(rep(NA_real_, object$dim)) } embed_doc(object, x, encoding = encoding) }) embedding <- do.call(rbind, embedding) embedding }
/scratch/gouwar.j/cran-all/cranData/word2vec/R/doc2vec.R
#' @importFrom Rcpp evalCpp #' @importFrom stats predict #' @useDynLib word2vec NULL
/scratch/gouwar.j/cran-all/cranData/word2vec/R/pkg.R
#' @title Text cleaning specific for input to word2vec #' @description Standardise text by #' \itemize{ #' \item{Conversion of text from UTF-8 to ASCII} #' \item{Keeping only alphanumeric characters: letters and numbers} #' \item{Removing multiple spaces} #' \item{Removing leading/trailing spaces} #' \item{Performing lowercasing} #' } #' @param x a character vector in UTF-8 encoding #' @param ascii logical indicating to use \code{iconv} to convert the input from UTF-8 to ASCII. Defaults to TRUE. #' @param alpha logical indicating to keep only alphanumeric characters. Defaults to TRUE. #' @param tolower logical indicating to lowercase \code{x}. Defaults to TRUE. #' @param trim logical indicating to trim leading/trailing white space. Defaults to TRUE. #' @return a character vector of the same length as \code{x} #' which is standardised by converting the encoding to ascii, lowercasing and #' keeping only alphanumeric elements #' @export #' @examples #' x <- c(" Just some.texts, ok?", "123.456 and\tsome MORE! ") #' txt_clean_word2vec(x) txt_clean_word2vec <- function(x, ascii = TRUE, alpha = TRUE, tolower = TRUE, trim = TRUE){ text <- x if(ascii){ text <- iconv(text, from = "UTF-8", to = "ASCII//TRANSLIT") } if(alpha){ text <- gsub("[^[:alnum:]]", " ", text) } text <- gsub(" +", " ", text) if(tolower){ text <- tolower(text) } if(trim){ text <- trimws(text) } text }
/scratch/gouwar.j/cran-all/cranData/word2vec/R/utils.R
#' @title Train a word2vec model on text #' @description Construct a word2vec model on text. The algorithm is explained at \url{https://arxiv.org/pdf/1310.4546.pdf} #' @param x a character vector with text or the path to the file on disk containing training data or a list of tokens. See the examples. #' @param type the type of algorithm to use, either 'cbow' or 'skip-gram'. Defaults to 'cbow' #' @param dim dimension of the word vectors. Defaults to 50. #' @param iter number of training iterations. Defaults to 5. #' @param lr initial learning rate also known as alpha. Defaults to 0.05 #' @param window skip length between words. Defaults to 5. #' @param hs logical indicating to use hierarchical softmax instead of negative sampling. Defaults to FALSE indicating to do negative sampling. #' @param negative integer with the number of negative samples. Only used in case hs is set to FALSE #' @param sample threshold for occurrence of words. Defaults to 0.001 #' @param min_count integer indicating the number of time a word should occur to be considered as part of the training vocabulary. Defaults to 5. #' @param stopwords a character vector of stopwords to exclude from training #' @param threads number of CPU threads to use. Defaults to 1. #' @param ... further arguments passed on to the methods \code{\link{word2vec.character}}, \code{\link{word2vec.list}} as well as the C++ function \code{w2v_train} - for expert use only #' @return an object of class \code{w2v_trained} which is a list with elements #' \itemize{ #' \item{model: a Rcpp pointer to the model} #' \item{data: a list with elements file: the training data used, stopwords: the character vector of stopwords, n} #' \item{vocabulary: the number of words in the vocabulary} #' \item{success: logical indicating if training succeeded} #' \item{error_log: the error log in case training failed} #' \item{control: as list of the training arguments used, namely min_count, dim, window, iter, lr, skipgram, hs, negative, sample, split_words, split_sents, expTableSize and expValueMax} #' } #' @references \url{https://github.com/maxoodf/word2vec}, \url{https://arxiv.org/pdf/1310.4546.pdf} #' @details #' Some advice on the optimal set of parameters to use for training as defined by Mikolov et al. #' \itemize{ #' \item{argument type: skip-gram (slower, better for infrequent words) vs cbow (fast)} #' \item{argument hs: the training algorithm: hierarchical softmax (better for infrequent words) vs negative sampling (better for frequent words, better with low dimensional vectors)} #' \item{argument dim: dimensionality of the word vectors: usually more is better, but not always} #' \item{argument window: for skip-gram usually around 10, for cbow around 5} #' \item{argument sample: sub-sampling of frequent words: can improve both accuracy and speed for large data sets (useful values are in range 0.001 to 0.00001)} #' } #' @note #' Some notes on the tokenisation #' \itemize{ #' \item{If you provide to \code{x} a list, each list element should correspond to a sentence (or what you consider as a sentence) and should contain a character vector of tokens. The word2vec model is then executed using \code{\link{word2vec.list}}} #' \item{If you provide to \code{x} a character vector or the path to the file on disk, the tokenisation into words depends on the first element provided in \code{split} and the tokenisation into sentences depends on the second element provided in \code{split} when passed on to \code{\link{word2vec.character}}} #' } #' @seealso \code{\link{predict.word2vec}}, \code{\link{as.matrix.word2vec}}, \code{\link{word2vec}}, \code{\link{word2vec.character}}, \code{\link{word2vec.list}} #' @export #' @examples #' \dontshow{if(require(udpipe))\{} #' library(udpipe) #' ## Take data and standardise it a bit #' data(brussels_reviews, package = "udpipe") #' x <- subset(brussels_reviews, language == "nl") #' x <- tolower(x$feedback) #' #' ## Build the model get word embeddings and nearest neighbours #' model <- word2vec(x = x, dim = 15, iter = 20) #' emb <- as.matrix(model) #' head(emb) #' emb <- predict(model, c("bus", "toilet", "unknownword"), type = "embedding") #' emb #' nn <- predict(model, c("bus", "toilet"), type = "nearest", top_n = 5) #' nn #' #' ## Get vocabulary #' vocab <- summary(model, type = "vocabulary") #' #' # Do some calculations with the vectors and find similar terms to these #' emb <- as.matrix(model) #' vector <- emb["buurt", ] - emb["rustige", ] + emb["restaurants", ] #' predict(model, vector, type = "nearest", top_n = 10) #' #' vector <- emb["gastvrouw", ] - emb["gastvrij", ] #' predict(model, vector, type = "nearest", top_n = 5) #' #' vectors <- emb[c("gastheer", "gastvrouw"), ] #' vectors <- rbind(vectors, avg = colMeans(vectors)) #' predict(model, vectors, type = "nearest", top_n = 10) #' #' ## Save the model to hard disk #' path <- "mymodel.bin" #' \dontshow{ #' path <- tempfile(pattern = "w2v", fileext = ".bin") #' } #' write.word2vec(model, file = path) #' model <- read.word2vec(path) #' #' \dontshow{ #' file.remove(path) #' } #' ## #' ## Example of word2vec with a list of tokens #' ## #' toks <- strsplit(x, split = "[[:space:][:punct:]]+") #' model <- word2vec(x = toks, dim = 15, iter = 20) #' emb <- as.matrix(model) #' emb <- predict(model, c("bus", "toilet", "unknownword"), type = "embedding") #' emb #' nn <- predict(model, c("bus", "toilet"), type = "nearest", top_n = 5) #' nn #' #' ## #' ## Example getting word embeddings #' ## which are different depending on the parts of speech tag #' ## Look to the help of the udpipe R package #' ## to get parts of speech tags on text #' ## #' library(udpipe) #' data(brussels_reviews_anno, package = "udpipe") #' x <- subset(brussels_reviews_anno, language == "fr") #' x <- subset(x, grepl(xpos, pattern = paste(LETTERS, collapse = "|"))) #' x$text <- sprintf("%s/%s", x$lemma, x$xpos) #' x <- subset(x, !is.na(lemma)) #' x <- split(x$text, list(x$doc_id, x$sentence_id)) #' #' model <- word2vec(x = x, dim = 15, iter = 20) #' emb <- as.matrix(model) #' nn <- predict(model, c("cuisine/NN", "rencontrer/VB"), type = "nearest") #' nn #' nn <- predict(model, c("accueillir/VBN", "accueillir/VBG"), type = "nearest") #' nn #' #' \dontshow{\} # End of main if statement running only if the required packages are installed} word2vec <- function(x, type = c("cbow", "skip-gram"), dim = 50, window = ifelse(type == "cbow", 5L, 10L), iter = 5L, lr = 0.05, hs = FALSE, negative = 5L, sample = 0.001, min_count = 5L, stopwords = character(), threads = 1L, ...) { UseMethod("word2vec") } #' @inherit word2vec title description params details seealso return references examples #' @param split a character vector of length 2 where the first element indicates how to split words and the second element indicates how to split sentences in \code{x} #' @param encoding the encoding of \code{x} and \code{stopwords}. Defaults to 'UTF-8'. #' Calculating the model always starts from files allowing to build a model on large corpora. The encoding argument #' is passed on to \code{file} when writing \code{x} to hard disk in case you provided it as a character vector. #' @param useBytes logical passed on to \code{\link{writeLines}} when writing the text and stopwords on disk before building the model. Defaults to \code{TRUE}. #' @export word2vec.character <- function(x, type = c("cbow", "skip-gram"), dim = 50, window = ifelse(type == "cbow", 5L, 10L), iter = 5L, lr = 0.05, hs = FALSE, negative = 5L, sample = 0.001, min_count = 5L, stopwords = character(), threads = 1L, split = c(" \n,.-!?:;/\"#$%&'()*+<=>@[]\\^_`{|}~\t\v\f\r", ".\n?!"), encoding = "UTF-8", useBytes = TRUE, ...){ type <- match.arg(type) stopw <- stopwords model <- file.path(tempdir(), "w2v.bin") if(length(stopw) == 0){ stopw <- "" } file_stopwords <- tempfile() filehandle_stopwords <- file(file_stopwords, open = "wt", encoding = encoding) writeLines(stopw, con = filehandle_stopwords, useBytes = useBytes) close(filehandle_stopwords) on.exit({ if (file.exists(file_stopwords)) file.remove(file_stopwords) }) if(length(x) == 1){ file_train <- x }else{ file_train <- tempfile(pattern = "textspace_", fileext = ".txt") on.exit({ if (file.exists(file_stopwords)) file.remove(file_stopwords) if (file.exists(file_train)) file.remove(file_train) }) filehandle_train <- file(file_train, open = "wt", encoding = encoding) writeLines(text = x, con = filehandle_train, useBytes = useBytes) close(filehandle_train) } #expTableSize <- 1000L #expValueMax <- 6L #expTableSize <- as.integer(expTableSize) #expValueMax <- as.integer(expValueMax) min_count <- as.integer(min_count) dim <- as.integer(dim) window <- as.integer(window) iter <- as.integer(iter) sample <- as.numeric(sample) hs <- as.logical(hs) negative <- as.integer(negative) threads <- as.integer(threads) iter <- as.integer(iter) lr <- as.numeric(lr) skipgram <- as.logical(type %in% "skip-gram") split <- as.character(split) model <- w2v_train(list(), character(), trainFile = file_train, modelFile = model, stopWordsFile = file_stopwords, minWordFreq = min_count, size = dim, window = window, #expTableSize = expTableSize, expValueMax = expValueMax, sample = sample, withHS = hs, negative = negative, threads = threads, iterations = iter, alpha = lr, withSG = skipgram, wordDelimiterChars = split[1], endOfSentenceChars = split[2], ...) model$data$stopwords <- stopwords model } #' @inherit word2vec title description params details seealso return references #' @export #' @examples #' \dontshow{if(require(udpipe))\{} #' library(udpipe) #' data(brussels_reviews, package = "udpipe") #' x <- subset(brussels_reviews, language == "nl") #' x <- tolower(x$feedback) #' toks <- strsplit(x, split = "[[:space:][:punct:]]+") #' model <- word2vec(x = toks, dim = 15, iter = 20) #' emb <- as.matrix(model) #' head(emb) #' emb <- predict(model, c("bus", "toilet", "unknownword"), type = "embedding") #' emb #' nn <- predict(model, c("bus", "toilet"), type = "nearest", top_n = 5) #' nn #' #' ## #' ## Example of word2vec with a list of tokens #' ## which gives the same embeddings as with a similarly tokenised character vector of texts #' ## #' txt <- txt_clean_word2vec(x, ascii = TRUE, alpha = TRUE, tolower = TRUE, trim = TRUE) #' table(unlist(strsplit(txt, ""))) #' toks <- strsplit(txt, split = " ") #' set.seed(1234) #' modela <- word2vec(x = toks, dim = 15, iter = 20) #' set.seed(1234) #' modelb <- word2vec(x = txt, dim = 15, iter = 20, split = c(" \n\r", "\n\r")) #' all.equal(as.matrix(modela), as.matrix(modelb)) #' \dontshow{\} # End of main if statement running only if the required packages are installed} word2vec.list <- function(x, type = c("cbow", "skip-gram"), dim = 50, window = ifelse(type == "cbow", 5L, 10L), iter = 5L, lr = 0.05, hs = FALSE, negative = 5L, sample = 0.001, min_count = 5L, stopwords = character(), threads = 1L, ...){ x <- lapply(x, as.character) type <- match.arg(type) stopwords <- as.character(stopwords) model <- file.path(tempdir(), "w2v.bin") #expTableSize <- 1000L #expValueMax <- 6L #expTableSize <- as.integer(expTableSize) #expValueMax <- as.integer(expValueMax) min_count <- as.integer(min_count) dim <- as.integer(dim) window <- as.integer(window) iter <- as.integer(iter) sample <- as.numeric(sample) hs <- as.logical(hs) negative <- as.integer(negative) threads <- as.integer(threads) iter <- as.integer(iter) lr <- as.numeric(lr) skipgram <- as.logical(type %in% "skip-gram") encoding <- "UTF-8" model <- w2v_train(x, stopwords, trainFile = "", modelFile = model, stopWordsFile = "", minWordFreq = min_count, size = dim, window = window, #expTableSize = expTableSize, expValueMax = expValueMax, sample = sample, withHS = hs, negative = negative, threads = threads, iterations = iter, alpha = lr, withSG = skipgram, wordDelimiterChars = "", endOfSentenceChars = "", ...) model$data$stopwords <- stopwords model } #' @title Get the word vectors of a word2vec model #' @description Get the word vectors of a word2vec model as a dense matrix. #' @param x a word2vec model as returned by \code{\link{word2vec}} or \code{\link{read.word2vec}} #' @param encoding set the encoding of the row names to the specified encoding. Defaults to 'UTF-8'. #' @param ... not used #' @return a matrix with the word vectors where the rownames are the words from the model vocabulary #' @export #' @seealso \code{\link{word2vec}}, \code{\link{read.word2vec}} #' @export #' @examples #' path <- system.file(package = "word2vec", "models", "example.bin") #' model <- read.word2vec(path) #' #' embedding <- as.matrix(model) as.matrix.word2vec <- function(x, encoding='UTF-8', ...){ words <- w2v_dictionary(x$model) x <- w2v_embedding(x$model, words) Encoding(rownames(x)) <- encoding x } #' @export as.matrix.word2vec_trained <- function(x, encoding='UTF-8', ...){ as.matrix.word2vec(x) } #' @title Save a word2vec model to disk #' @description Save a word2vec model as a binary file to disk or as a text file #' @param x an object of class \code{w2v} or \code{w2v_trained} as returned by \code{\link{word2vec}} #' @param file the path to the file where to store the model #' @param type either 'bin' or 'txt' to write respectively the file as binary or as a text file. Defaults to 'bin'. #' @param encoding encoding to use when writing a file with type 'txt' to disk. Defaults to 'UTF-8' #' @return a logical indicating if the save process succeeded #' @export #' @seealso \code{\link{word2vec}} #' @examples #' path <- system.file(package = "word2vec", "models", "example.bin") #' model <- read.word2vec(path) #' #' #' ## Save the model to hard disk as a binary file #' path <- "mymodel.bin" #' \dontshow{ #' path <- tempfile(pattern = "w2v", fileext = ".bin") #' } #' write.word2vec(model, file = path) #' \dontshow{ #' file.remove(path) #' } #' #' \dontshow{if(require(udpipe))\{} #' ## Save the model to hard disk as a text file (uses package udpipe) #' library(udpipe) #' path <- "mymodel.txt" #' \dontshow{ #' path <- tempfile(pattern = "w2v", fileext = ".txt") #' } #' write.word2vec(model, file = path, type = "txt") #' \dontshow{ #' file.remove(path) #' } #' \dontshow{\} # End of main if statement running only if the required packages are installed} write.word2vec <- function(x, file, type = c("bin", "txt"), encoding = "UTF-8"){ type <- match.arg(type) if(type == "bin"){ stopifnot(inherits(x, "w2v_trained") || inherits(x, "w2v") || inherits(x, "word2vec_trained") || inherits(x, "word2vec")) w2v_save_model(x$model, file) }else if(type == "txt"){ requireNamespace(package = "udpipe") wordvectors <- as.matrix(x) wv <- udpipe::as_word2vec(wordvectors) f <- base::file(file, open = "wt", encoding = encoding) cat(wv, file = f) close(f) file.exists(file) } } #' @title Read a binary word2vec model from disk #' @description Read a binary word2vec model from disk #' @param file the path to the model file #' @param normalize logical indicating to normalize the embeddings by dividing by the factor (sqrt(sum(x . x) / length(x))). Defaults to FALSE. #' @return an object of class w2v which is a list with elements #' \itemize{ #' \item{model: a Rcpp pointer to the model} #' \item{model_path: the path to the model on disk} #' \item{dim: the dimension of the embedding matrix} #' \item{n: the number of words in the vocabulary} #' } #' @export #' @examples #' path <- system.file(package = "word2vec", "models", "example.bin") #' model <- read.word2vec(path) #' vocab <- summary(model, type = "vocabulary") #' emb <- predict(model, c("bus", "naar", "unknownword"), type = "embedding") #' emb #' nn <- predict(model, c("bus", "toilet"), type = "nearest") #' nn #' #' # Do some calculations with the vectors and find similar terms to these #' emb <- as.matrix(model) #' vector <- emb["gastvrouw", ] - emb["gastvrij", ] #' predict(model, vector, type = "nearest", top_n = 5) #' vectors <- emb[c("gastheer", "gastvrouw"), ] #' vectors <- rbind(vectors, avg = colMeans(vectors)) #' predict(model, vectors, type = "nearest", top_n = 10) read.word2vec <- function(file, normalize = FALSE){ stopifnot(file.exists(file)) w2v_load_model(file, normalize = normalize) } #' @title Read word vectors from a word2vec model from disk #' @description Read word vectors from a word2vec model from disk into a dense matrix #' @param file the path to the model file #' @param type either 'bin' or 'txt' indicating the \code{file} is a binary file or a text file #' @param n integer, indicating to limit the number of words to read in. Defaults to reading all words. #' @param normalize logical indicating to normalize the embeddings by dividing by the factor (sqrt(sum(x . x) / length(x))). Defaults to FALSE. #' @param encoding encoding to be assumed for the words. Defaults to 'UTF-8' #' @return A matrix with the embeddings of the words. The rownames of the matrix are the words which are by default set to UTF-8 encoding. #' @export #' @examples #' path <- system.file(package = "word2vec", "models", "example.bin") #' embed <- read.wordvectors(path, type = "bin", n = 10) #' embed <- read.wordvectors(path, type = "bin", n = 10, normalize = TRUE) #' embed <- read.wordvectors(path, type = "bin") #' #' path <- system.file(package = "word2vec", "models", "example.txt") #' embed <- read.wordvectors(path, type = "txt", n = 10) #' embed <- read.wordvectors(path, type = "txt", n = 10, normalize = TRUE) #' embed <- read.wordvectors(path, type = "txt") read.wordvectors <- function(file, type = c("bin", "txt"), n = .Machine$integer.max, normalize = FALSE, encoding = "UTF-8"){ type <- match.arg(type) if(type == "bin"){ x <- w2v_read_binary(file, normalize = normalize, n = as.integer(n)) Encoding(rownames(x)) <- encoding x }else if(type == "txt"){ if(n < .Machine$integer.max){ x <- readLines(file, skipNul = TRUE, encoding = encoding, n = n + 1L, warn = FALSE) }else{ x <- readLines(file, skipNul = TRUE, encoding = encoding, warn = FALSE) } size <- x[1] size <- as.numeric(unlist(strsplit(size, " "))) x <- x[-1] x <- strsplit(x, " ") size[1] <- length(x) token <- sapply(x, FUN=function(x) x[1]) emb <- lapply(x, FUN=function(x) as.numeric(x[-1])) embedding <- matrix(data = unlist(emb), nrow = size[1], ncol = size[2], dimnames = list(token), byrow = TRUE) if(normalize){ embedding <- t(apply(embedding, MARGIN=1, FUN=function(x) x / sqrt(sum(x * x) / length(x)))) } embedding } } #' @export summary.word2vec <- function(object, type = "vocabulary", encoding = "UTF-8", ...){ type <- match.arg(type) if(type == "vocabulary"){ x <- w2v_dictionary(object$model) Encoding(x) <- encoding x }else{ stop("not implemented") } } #' @export summary.word2vec_trained <- function(object, type = "vocabulary", ...){ summary.word2vec(object = object, type = type, ...) } #' @title Predict functionalities for a word2vec model #' @description Get either #' \itemize{ #' \item{the embedding of words} #' \item{the nearest words which are similar to either a word or a word vector} #' } #' @param object a word2vec model as returned by \code{\link{word2vec}} or \code{\link{read.word2vec}} #' @param newdata for type 'embedding', \code{newdata} should be a character vector of words\cr #' for type 'nearest', \code{newdata} should be a character vector of words or a matrix in the embedding space #' @param type either 'embedding' or 'nearest'. Defaults to 'nearest'. #' @param top_n show only the top n nearest neighbours. Defaults to 10. #' @param encoding set the encoding of the text elements to the specified encoding. Defaults to 'UTF-8'. #' @param ... not used #' @return depending on the type, you get a different result back: #' \itemize{ #' \item{for type nearest: a list of data.frames with columns term, similarity and rank indicating with words which are closest to the provided \code{newdata} words or word vectors. If \code{newdata} is just one vector instead of a matrix, it returns a data.frame} #' \item{for type embedding: a matrix of word vectors of the words provided in \code{newdata}} #' } #' @seealso \code{\link{word2vec}}, \code{\link{read.word2vec}} #' @export #' @examples #' path <- system.file(package = "word2vec", "models", "example.bin") #' model <- read.word2vec(path) #' emb <- predict(model, c("bus", "toilet", "unknownword"), type = "embedding") #' emb #' nn <- predict(model, c("bus", "toilet"), type = "nearest", top_n = 5) #' nn #' #' # Do some calculations with the vectors and find similar terms to these #' emb <- as.matrix(model) #' vector <- emb["buurt", ] - emb["rustige", ] + emb["restaurants", ] #' predict(model, vector, type = "nearest", top_n = 10) #' #' vector <- emb["gastvrouw", ] - emb["gastvrij", ] #' predict(model, vector, type = "nearest", top_n = 5) #' #' vectors <- emb[c("gastheer", "gastvrouw"), ] #' vectors <- rbind(vectors, avg = colMeans(vectors)) #' predict(model, vectors, type = "nearest", top_n = 10) predict.word2vec <- function(object, newdata, type = c("nearest", "embedding"), top_n = 10L, encoding = "UTF-8", ...){ type <- match.arg(type) top_n <- as.integer(top_n) if(type == "embedding"){ x <- w2v_embedding(object$model, x = newdata) Encoding(rownames(x)) <- encoding }else if(type == "nearest"){ if(is.character(newdata)){ x <- lapply(newdata, FUN=function(x, top_n, ...){ data <- w2v_nearest(object$model, x = x, top_n = top_n, ...) Encoding(data$term1) <- encoding Encoding(data$term2) <- encoding data }, top_n = top_n, ...) names(x) <- newdata }else if(is.matrix(newdata)){ x <- lapply(seq_len(nrow(newdata)), FUN=function(i, top_n, ...){ data <- w2v_nearest_vector(object$model, x = newdata[i, ], top_n = top_n, ...) Encoding(data$term) <- encoding data }, top_n = top_n, ...) if(!is.null(rownames(newdata))){ names(x) <- rownames(newdata) } }else if(is.numeric(newdata)){ x <- w2v_nearest_vector(object$model, x = newdata, top_n = top_n, ...) Encoding(x$term) <- encoding } } x } #' @export predict.word2vec_trained <- function(object, newdata, type = c("nearest", "embedding"), ...){ predict.word2vec(object = object, newdata = newdata, type = type, ...) } #' @title Similarity between word vectors as used in word2vec #' @description The similarity between word vectors is defined #' \itemize{ #' \item{for type 'dot': }{as the square root of the average inner product of the vector elements (sqrt(sum(x . y) / ncol(x))) capped to zero} #' \item{for type 'cosine': }{as the the cosine similarity, namely sum(x . y) / (sum(x^2)*sum(y^2)) } #' } #' @param x a matrix with embeddings where the rownames of the matrix provide the label of the term #' @param y a matrix with embeddings where the rownames of the matrix provide the label of the term #' @param top_n integer indicating to return only the top n most similar terms from y for each row of x. #' If \code{top_n} is supplied, a data.frame will be returned with only the highest similarities between x and y #' instead of all pairwise similarities #' @param type character string with the type of similarity. Either 'dot' or 'cosine'. Defaults to 'dot'. #' @return #' By default, the function returns a similarity matrix between the rows of \code{x} and the rows of \code{y}. #' The similarity between row i of \code{x} and row j of \code{y} is found in cell \code{[i, j]} of the returned similarity matrix.\cr #' If \code{top_n} is provided, the return value is a data.frame with columns term1, term2, similarity and rank #' indicating the similarity between the provided terms in \code{x} and \code{y} #' ordered from high to low similarity and keeping only the top_n most similar records. #' @export #' @seealso \code{\link{word2vec}} #' @examples #' x <- matrix(rnorm(6), nrow = 2, ncol = 3) #' rownames(x) <- c("word1", "word2") #' y <- matrix(rnorm(15), nrow = 5, ncol = 3) #' rownames(y) <- c("term1", "term2", "term3", "term4", "term5") #' #' word2vec_similarity(x, y) #' word2vec_similarity(x, y, top_n = 1) #' word2vec_similarity(x, y, top_n = 2) #' word2vec_similarity(x, y, top_n = +Inf) #' word2vec_similarity(x, y, type = "cosine") #' word2vec_similarity(x, y, top_n = 1, type = "cosine") #' word2vec_similarity(x, y, top_n = 2, type = "cosine") #' word2vec_similarity(x, y, top_n = +Inf, type = "cosine") #' #' ## Example with a word2vec model #' path <- system.file(package = "word2vec", "models", "example.bin") #' model <- read.word2vec(path) #' emb <- as.matrix(model) #' #' x <- emb[c("gastheer", "gastvrouw", "kamer"), ] #' y <- emb #' word2vec_similarity(x, x) #' word2vec_similarity(x, y, top_n = 3) #' predict(model, x, type = "nearest", top_n = 3) word2vec_similarity <- function(x, y, top_n = +Inf, type = c("dot", "cosine")){ type <- match.arg(type) if (!is.matrix(x)) { x <- matrix(x, nrow = 1) } if (!is.matrix(y)) { y <- matrix(y, nrow = 1) } if(type == "dot"){ vectorsize <- ncol(x) similarities <- tcrossprod(x, y) similarities <- similarities / vectorsize similarities[similarities < 0] <- 0 similarities <- sqrt(similarities) }else if (type == "cosine"){ similarities <- tcrossprod(x, y) x_scale <- sqrt(apply(x, MARGIN = 1, FUN = crossprod)) y_scale <- sqrt(apply(y, MARGIN = 1, FUN = crossprod)) similarities <- similarities / outer(x_scale, y_scale, FUN = "*") } if (!missing(top_n)) { similarities <- as.data.frame.table(similarities, stringsAsFactors = FALSE) colnames(similarities) <- c("term1", "term2", "similarity") similarities <- similarities[order(factor(similarities$term1), similarities$similarity, decreasing = TRUE), ] similarities$rank <- stats::ave(similarities$similarity, similarities$term1, FUN = seq_along) similarities <- similarities[similarities$rank <= top_n, ] rownames(similarities) <- NULL } similarities }
/scratch/gouwar.j/cran-all/cranData/word2vec/R/word2vec.R
# Main functions #' @include utils.R NULL #' Run word puzzle game #' #' \code{run_game} is the main function to run word puzzle game. #' The word puzzle game requires you to guess the word with single letters #' in a limited times of trials. The letters you have guessed in the word #' reveal themselves. If all letters are revealed before your guesses run out, #' you win this round, otherwise you fail. #' #' @param mask_char (String) letter to mask the letters not guessed in the word. #' @param verbose (Logical) whether to print welcome and score messages. #' @param ... For internal use only. #' #' @return Named list of game stats invisibly, including: #' \describe{ #' \item{score}{Named integer with names as \code{success} (success rounds) #' and \code{total} (total rounds).} #' \item{best_guess}{Integer as the minimal number of guesses.} #' \item{best_hit}{Named integer with names as \code{hit} (guesses that hit #' any letters in the word) and \code{guess} (total guesses).} #' } #' @export #' #' @examples #' # Run word puzzle game #' if (interactive() == TRUE) { #' run_game() #' } run_game <- function(mask_char = "_", verbose = TRUE, ...) { stopifnot(nchar(mask_char) == 1L) if (verbose == TRUE) { message("========= Welcome to Word Puzzle in R =========") } score <- c(success = 0L, total = 0L) best_guess <- NA_integer_ best_hit <- c(hit = 0L, guess = 0L) best_hit_rate <- NA_real_ while (TRUE) { round_res <- .run_puzzle_round(mask_char = mask_char, verbose = verbose, ...) if (round_res[["success"]] == TRUE) { score[["success"]] <- score[["success"]] + 1L if ((is.na(best_guess) == TRUE) || (best_guess > round_res[["iter"]])) { best_guess <- round_res[["iter"]] } total_unique_letters <- length(unique(split_word(round_res[["word"]]))) if ((is.na(best_hit_rate) == TRUE) || (total_unique_letters / round_res[["iter"]] > best_hit_rate)) { best_hit <- c(hit = total_unique_letters, guess = round_res[["iter"]]) best_hit_rate <- best_hit[["hit"]] / best_hit[["guess"]] } } score[["total"]] <- score[["total"]] + 1L if (verbose == TRUE) { message("Game stats:") message("* Current score: ", score[["success"]], "/", score[["total"]], " (", scales::percent(score[["success"]] / score[["total"]], accuracy = 0.01), ")") if (is.na(best_guess) == FALSE) { message("* Best guess: ", best_guess) } if (is.na(best_hit_rate) == FALSE) { message("* Best hit rate: ", best_hit[["hit"]], "/", best_hit[["guess"]], " (", scales::percent(best_hit_rate, accuracy = 0.01), ")") } } args_list <- list(...) if ((".auto" %in% names(args_list) == FALSE) || (as.logical(args_list[[".auto"]]) == FALSE)) { proceed <- readline("Proceed with a new word? [Y/N] ") if (any(c("N", "n") %in% proceed == TRUE)) break } else { if ((".round" %in% names(args_list) == FALSE) || (as.numeric(args_list[[".round"]]) <= 0)) { message("Proceed with a new word? [Y/N] N") break } else { message("Proceed with a new word? [Y/N] Y") } } } invisible(list(score = score, best_guess = best_guess, best_hit = best_hit)) } #' One round of word puzzle #' @noRd .run_puzzle_round <- function(mask_char, verbose, .auto = FALSE, .letters = NULL) { target_word <- sample(.dict, size = 1L) word_mask <- rep(FALSE, nchar(target_word)) success <- FALSE guess_pool <- c() for (cur_guess in seq_len(.wordPuzzleConfig[["guess"]])) { message("Guess: ", cur_guess) message("Word: ", mask_word(word = target_word, mask = word_mask, char = mask_char)) while (TRUE) { if (.auto == FALSE) { input_char <- tolower(readline("Input a letter: ")) } else { if (is.null(.letters) == TRUE) { .letters <- letters } input_char <- sample(setdiff(.letters, guess_pool), size = 1L) message("Input a letter: ", input_char) } if (nchar(input_char) == 1L) { if (input_char %in% guess_pool == TRUE) { message("! You have already guessed this letter") } else { guess_pool <- c(guess_pool, input_char) break } } else { message("! You should input a single letter") } } word_mask_new <- update_mask(word = target_word, cur_mask = word_mask, letter = input_char) if (sum(word_mask_new) > sum(word_mask)) { message("* Your guess hit ", sum(word_mask_new) - sum(word_mask), " letter(s) in the word!") } else { message("* Your guess hit no letters in the word! Better luck next time~") } word_mask <- word_mask_new if (all(word_mask == TRUE)) { success <- TRUE break } } if (success == TRUE) { message("You guessed the word [", target_word, "] in ", cur_guess, " guess(es)! Good job!") } else { message("You failed to guess the word! The answer is: ", target_word) } invisible(list(word = target_word, success = success, iter = cur_guess)) }
/scratch/gouwar.j/cran-all/cranData/wordPuzzleR/R/main.R
# Functions to run at start/end of package #' @include zzz.R #' @import purrr NULL #' Configure wordPuzzleR #' #' \code{config_game} configures wordPuzzleR, or show current configuration #' when used with no arguments. #' #' @importFrom utils assignInMyNamespace #' #' @param ... Arguments passed on to configurations. Valid names may be: #' \describe{ #' \item{dict}{(String) path to dictionary file, where each line is a word.} #' \item{min_len}{(Integer) minimal word length, default 3.} #' \item{max_len}{(Integer) maximal word length, default 8.} #' \item{guess}{(Integer) maximal guesses, default 10.} #' \item{pattern}{(String) Regular expression to filter qualified words, #' default "\[A-Za-z\]+".} #' } #' @param .verbose (Logical) whether config messages should be printed. #' #' @return Named list of new configurations, invisibly. #' @export #' #' @examples #' # Show current config #' config_game() config_game <- function(..., .verbose = TRUE) { args_list <- list(...) new_config <- purrr::reduce2( args_list, names(args_list), function(cur_config, value, name) { res <- cur_config if (name %in% names(.wordPuzzleConfig) == TRUE) { res[[name]] <- value } else { stop("Invalid config term: ", name) } res }, .init = .wordPuzzleConfig ) if (new_config[["guess"]] <= 0L) { stop("[guess] should be a positive integer") } utils::assignInMyNamespace(".wordPuzzleConfig", new_config) if (.verbose == TRUE) { message("wordPuzzleR config:") purrr::iwalk( new_config, ~ message("* ", .y, ": ", .x) ) } prep_dict(config = new_config, verbose = .verbose) invisible(new_config) } #' Split word into letters #' @noRd split_word <- function(word) { stopifnot(length(word) == 1L) stringr::str_extract_all(word, ".")[[1L]] } #' Update word mask #' @noRd update_mask <- function(word, cur_mask, letter) { word_split <- split_word(word) (tolower(word_split) %in% tolower(letter) == TRUE) | (cur_mask == TRUE) } #' Mask letters without correct guess #' @noRd mask_word <- function(word, mask, char = "_") { word_split <- split_word(word) word_split_mask <- word_split word_split_mask[mask == FALSE] <- char paste(word_split_mask, collapse = "") }
/scratch/gouwar.j/cran-all/cranData/wordPuzzleR/R/utils.R
#' @examples #' # Run word puzzle game #' if (interactive() == TRUE) { #' run_game() #' } #' @keywords internal "_PACKAGE" ## usethis namespace: start ## usethis namespace: end NULL
/scratch/gouwar.j/cran-all/cranData/wordPuzzleR/R/wordPuzzleR-package.R
# onLoad functions #' @import stringr NULL .PKG_NAME <- "" .wordPuzzleConfig <- list( dict = "", min_len = 3L, max_len = 8L, guess = 10L, pattern = "[A-Za-z]+" ) .dict <- c() #' Prepare dictionary #' @importFrom utils assignInMyNamespace #' @noRd prep_dict <- function(config, verbose = TRUE) { .core_fun <- function(dict_path, min_len = 3L, max_len = 8L, pattern = "[A-Za-z]+", guess = 10L) { dict_raw <- readLines(dict_path) dict <- dict_raw dict <- dict[nchar(dict_raw) >= min_len & nchar(dict_raw) <= max_len] dict <- dict[stringr::str_extract_all(dict, pattern) == dict] if (length(dict) == 0L) { stop("No valid words loaded from dictionary", "; please check config with config_game()") } if (verbose == TRUE) { message("Loaded ", length(dict), " word(s) from dict") } dict } utils::assignInMyNamespace(".dict", do.call(.core_fun, config)) } #' @importFrom utils assignInMyNamespace #' @noRd .onLoad <- function(...) { args_list <- list(...) utils::assignInMyNamespace(".PKG_NAME", args_list[[2L]]) init_config <- .wordPuzzleConfig init_config[["dict"]] <- system.file(file.path("resources", "dict.txt"), package = .PKG_NAME) utils::assignInMyNamespace(".wordPuzzleConfig", init_config) prep_dict(config = .wordPuzzleConfig, verbose = FALSE) }
/scratch/gouwar.j/cran-all/cranData/wordPuzzleR/R/zzz.R
#' Fit age of acquisition estimates for Wordbank data #' #' For each item in the input data, estimate its age of acquisition as the #' earliest age (in months) at which the proportion of children who #' understand/produce the item is greater than some threshold. The proportions #' used can be empirical or first smoothed by a model. #' #' @param instrument_data A data frame returned by \code{get_instrument_data}, #' which must have an "age" column and a "num_item_id" column. #' @param measure One of "produces" or "understands" (defaults to "produces"). #' @param method A string indicating which smoothing method to use: #' \code{empirical} to use empirical proportions, \code{glm} to fit a #' logistic linear model, \code{glmrob} a robust logistic linear model #' (defaults to \code{glm}). #' @param proportion A number between 0 and 1 indicating threshold proportion of #' children. #' @param age_min The minimum age to allow for an age of acquisition. Defaults #' to the minimum age in \code{instrument_data} #' @param age_max The maximum age to allow for an age of acquisition. Defaults #' to the maximum age in \code{instrument_data} #' #' @return A data frame where every row is an item, the item-level columns from #' the input data are preserved, and the \code{aoa} column contains the age of #' acquisition estimates. #' #' @examples #' \donttest{ #' eng_ws_data <- get_instrument_data(language = "English (American)", #' form = "WS", #' items = c("item_1", "item_42"), #' administration_info = TRUE) #' if (!is.null(eng_ws_data)) eng_ws_aoa <- fit_aoa(eng_ws_data) #' } #' @export fit_aoa <- function(instrument_data, measure = "produces", method = "glm", proportion = 0.5, age_min = min(instrument_data$age, na.rm = TRUE), age_max = max(instrument_data$age, na.rm = TRUE)) { assertthat::assert_that(is.element("age", colnames(instrument_data))) assertthat::assert_that(is.element("item_id", colnames(instrument_data))) assertthat::assert_that(age_min <= age_max) instrument_data <- instrument_data %>% dplyr::mutate(num_item_id = strip_item_id(.data$item_id)) instrument_summary <- instrument_data %>% dplyr::filter(!is.na(.data$age)) %>% # dplyr::mutate( # produces = !is.na(.data$value) & .data$value == "produces", # understands = !is.na(.data$value) & # (.data$value == "understands" | .data$value == "produces") # ) %>% dplyr::select(-"value") %>% tidyr::gather("measure_name", "value", .data$produces, .data$understands) %>% dplyr::filter(.data$measure_name == measure) %>% dplyr::group_by(.data$age, .data$num_item_id) %>% dplyr::summarise(num_true = sum(.data$value), num_false = dplyr::n() - .data$num_true) inv_logit <- function(x) 1 / (exp(-x) + 1) ages <- dplyr::tibble(age = age_min:age_max) fit_methods <- list( "empirical" = function(item_data) { item_data %>% dplyr::mutate( prop = .data$num_true / (.data$num_true + .data$num_false) ) }, "glm" = function(item_data) { model <- stats::glm(cbind(num_true, num_false) ~ age, item_data, family = "binomial") ages %>% dplyr::mutate(prop = inv_logit(stats::predict(model, ages))) }, "glmrob" = function(item_data) { model <- robustbase::glmrob(cbind(num_true, num_false) ~ age, item_data, family = "binomial") ages %>% dplyr::mutate(prop = inv_logit(stats::predict(model, ages))) } ) compute_aoa <- function(fit_data) { acq <- fit_data %>% dplyr::filter(.data$prop >= proportion) if (nrow(acq) & any(fit_data$prop < proportion)) min(acq$age) else NA } instrument_fits <- instrument_summary %>% dplyr::group_by(.data$num_item_id) %>% tidyr::nest() %>% dplyr::ungroup() %>% dplyr::mutate(fit_data = .data$data %>% purrr::map(fit_methods[[method]])) instrument_aoa <- instrument_fits %>% dplyr::mutate(aoa = .data$fit_data %>% purrr::map_dbl(compute_aoa)) %>% dplyr::select(-"data", -"fit_data") item_cols <- c("num_item_id", "item_id", "item_kind", "item_definition", "category", "lexical_category", "lexical_class", "uni_lemma", "complexity_category") %>% purrr::keep(~.x %in% colnames(instrument_data)) item_data <- instrument_data %>% dplyr::select(!!!item_cols) %>% dplyr::distinct() instrument_aoa %>% dplyr::left_join(item_data, by = "num_item_id") %>% dplyr::select(-"num_item_id") }
/scratch/gouwar.j/cran-all/cranData/wordbankr/R/aoa.R
#' Get the uni_lemmas available in Wordbank #' #' @inheritParams connect_to_wordbank #' @return A data frame with the column \code{uni_lemma}. #' #' @examples #' \donttest{ #' uni_lemmas <- get_crossling_items() #' } #' @export get_crossling_items <- function(db_args = NULL) { src <- connect_to_wordbank(db_args) if (is.null(src)) return() uni_lemmas <- get_common_table(src, "uni_lemma") %>% dplyr::collect() DBI::dbDisconnect(src) return(uni_lemmas) } #' Get item-by-age summary statistics #' #' @param item_data A dataframe as returned by \code{get_item_data()}. #' @inheritParams connect_to_wordbank #' @return A dataframe with a row for each combination of item and age, and #' columns for summary statistics for the group: number of children #' (\code{n_children}), means (\code{comprehension}, \code{production}), #' standard deviations (\code{comprehension_sd}, \code{production_sd}); also #' retains item-level variables from \code{lang_items} (\code{item_id}, #' \code{item_definition}, \code{uni_lemma}, \code{lexical_category}). #' #' @examples #' \donttest{ #' italian_items <- get_item_data(language = "Italian", form = "WG") #' if (!is.null(italian_items)) { #' italian_dog <- dplyr::filter(italian_items, uni_lemma == "dog") #' italian_dog_summary <- summarise_items(italian_dog) #' } #' } #' @export summarise_items <- function(item_data, db_args = NULL) { lang <- unique(item_data$language) frm <- unique(item_data$form) message(glue("Getting data for {lang} {frm}")) src <- connect_to_wordbank(db_args) if (is.null(src)) return() instrument_data <- get_instrument_data(language = lang, form = frm, items = item_data$item_id, administration_info = TRUE, item_info = item_data, db_args = db_args) if (is.null(instrument_data)) return() comp <- !all(is.na(instrument_data$understands)) item_summary <- instrument_data %>% dplyr::group_by(.data$language, .data$form, .data$item_id, .data$item_definition, .data$uni_lemma, .data$lexical_category, .data$age) %>% dplyr::summarise( n_children = dplyr::n(), comprehension = if (comp) sum(.data$understands, na.rm = TRUE) / .data$n_children else NA, production = sum(.data$produces, na.rm = TRUE) / .data$n_children, comprehension_sd = if (comp) stats::sd(.data$understands, na.rm = TRUE) / .data$n_children else NA, production_sd = stats::sd(.data$produces, na.rm = TRUE) / .data$n_children ) %>% dplyr::ungroup() suppressWarnings(DBI::dbDisconnect(src)) return(item_summary) } #' Get item-by-age summary statistics for items across languages #' #' @param uni_lemmas A character vector of uni_lemmas. #' @inheritParams connect_to_wordbank #' @return A dataframe with a row for each combination of language, item, and #' age, and columns for summary statistics for the group: number of children #' (\code{n_children}), means (\code{comprehension}, \code{production}), #' standard deviations (\code{comprehension_sd}, \code{production_sd}); and #' item-level variables (\code{item_id}, \code{definition}, \code{uni_lemma}, #' \code{lexical_category}, \code{lexical_class}). #' @examples #' \donttest{ #' crossling_data <- get_crossling_data(uni_lemmas = "dog") #' } #' @export get_crossling_data <- function(uni_lemmas, db_args = NULL) { src <- connect_to_wordbank(db_args) if (is.null(src)) return() item_data <- get_item_data(db_args = db_args) if (is.null(item_data)) return() item_data <- item_data %>% dplyr::filter(.data$uni_lemma %in% uni_lemmas) %>% dplyr::select("language", "form", "form_type", "item_id", "item_kind", "item_definition", "uni_lemma", "lexical_category") if (nrow(item_data) == 0) { message("No items found for uni_lemma") return() } safe_summarise_items <- purrr::safely(summarise_items, quiet = FALSE, otherwise = dplyr::tibble()) item_summary <- item_data %>% dplyr::mutate(lang = .data$language, frm = .data$form) %>% tidyr::nest(df = -c("lang", "frm")) %>% dplyr::transmute(summary = .data$df %>% purrr::map(~safe_summarise_items(., db_args)$result)) %>% tidyr::unnest(cols = "summary") suppressWarnings(DBI::dbDisconnect(src)) return(item_summary) }
/scratch/gouwar.j/cran-all/cranData/wordbankr/R/crossling.R
#' Fit quantiles to vocabulary sizes using quantile regression #' #' @param vocab_data A data frame returned by \code{get_administration_data}. #' @param measure A column of \code{vocab_data} with vocabulary values #' (\code{production} or \code{comprehension}). #' @param group (Optional) A column of \code{vocab_data} to group by. #' @param quantiles Either one of "standard" (default), "deciles", "quintiles", #' "quartiles", "median", or a numeric vector of quantile values. #' #' @importFrom quantregGrowth ps #' #' @return A data frame with the columns "language", "form", "age", \code{group} #' (if specified), "quantile", and \code{measure}, where \code{measure} is the #' fit vocabulary value for that quantile at that age. #' @export #' #' @examples #' \donttest{ #' eng_wg <- get_administration_data(language = "English (American)", #' form = "WG", #' include_demographic_info = TRUE) #' if (!is.null(eng_wg)) { #' vocab_quantiles <- fit_vocab_quantiles(eng_wg, production) #' vocab_quantiles_sex <- fit_vocab_quantiles(eng_wg, production, sex) #' vocab_quartiles <- fit_vocab_quantiles(eng_wg, production, quantiles = "quartiles") #' } #' } fit_vocab_quantiles <- function(vocab_data, measure, group, quantiles = "standard") { lifecycle::deprecate_warn( when = "1.0.0", what = "fit_vocab_quantiles()", details = "Please use the vocabulary norms shiny app at http://wordbank.stanford.edu/analyses?name=vocab_norms") quantile_opts <- list( standard = c(0.10, 0.25, 0.50, 0.75, 0.90), deciles = c(0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90), quintiles = c(0.20, 0.40, 0.60, 0.80), quartiles = c(0.25, 0.50, 0.75), median = c(0.5) ) if (is.numeric(quantiles)) { if (any(quantiles >= 1, quantiles <= 0)) stop("Numeric quantiles must be between 0 and 1") num_quantiles <- quantiles } else if (is.character(quantiles) & length(quantiles) == 1) { if (!(quantiles %in% names(quantile_opts))) stop("Character quantiles must be one of ", paste(names(quantile_opts), collapse = ", ")) num_quantiles <- quantile_opts[[quantiles]] } else { stop("Quantiles must be a numeric vector or a character vector of length 1") } vocab_data <- vocab_data %>% dplyr::group_by(.data$language, .data$form) if (!missing(group)) { vocab_data <- vocab_data %>% dplyr::filter((dplyr::if_any({{ group }}, ~!is.na(.x)))) %>% dplyr::group_by({{ group }}, .add = TRUE) } vocab_models <- vocab_data %>% dplyr::rename(vocab = {{ measure }}) %>% tidyr::nest() %>% dplyr::mutate(group_label = paste(.data$language, .data$form, {{ group }})) %>% dplyr::mutate(model = purrr::map2( .data$group_label, .data$data, function(gl, df) { tryCatch( suppressWarnings( quantregGrowth::gcrq(vocab ~ ps(age, monotone = 1, lambda = 1000), tau = num_quantiles, data = df) ), error = function(e) { message(glue("Unable to fit model for {gl}")) return(NULL) }) })) %>% dplyr::select(-"group_label") %>% dplyr::filter(purrr::map_lgl(.data$model, ~!is.null(.))) %>% dplyr::ungroup() if (nrow(vocab_models) == 0) return(NULL) ages <- data.frame(age = min(vocab_data$age):max(vocab_data$age)) get_predicted <- function(vocab_model) { vocab_fits <- stats::predict(vocab_model, newdata = ages) if (length(vocab_model$taus) == 1) vocab_fits <- rlang::set_names(list(vocab_fits), vocab_model$taus) vocab_fits %>% dplyr::as_tibble() %>% dplyr::mutate(age = ages$age) %>% tidyr::gather("quantile", "predicted", -.data$age) } vocab_fits <- vocab_models %>% dplyr::mutate(predicted = purrr::map(.data$model, get_predicted)) %>% dplyr::select(-"data", -"model") %>% tidyr::unnest(cols = "predicted") %>% dplyr::rename("{{measure}}" := .data$predicted) %>% dplyr::mutate(quantile = factor(.data$quantile)) return(vocab_fits) }
/scratch/gouwar.j/cran-all/cranData/wordbankr/R/quantiles.R
#' @keywords internal "_PACKAGE" ## usethis namespace: start #' @importFrom dplyr %>% #' @importFrom rlang .data := #' @importFrom glue glue ## usethis namespace: end NULL
/scratch/gouwar.j/cran-all/cranData/wordbankr/R/wordbankr-package.R
handle_con <- function(e) { message(strwrap( prefix = " ", initial = "", "Could not retrieve Wordbank connection information. Please check your internet connection. If this error persists please contact [email protected].")) } #' Get database connection arguments #' #' @return List of database connection arguments: host, db_name, username, #' password #' @export #' #' @examples #' \donttest{ #' get_wordbank_args() #' } get_wordbank_args <- function() { tryCatch(jsonlite::fromJSON("http://wordbank.stanford.edu/db_args"), error = handle_con) } #' Connect to the Wordbank database #' #' @param db_args List with arguments to connect to wordbank mysql database #' (host, dbname, user, and password). #' @return A \code{src} object which is connection to the Wordbank database. #' #' @examples #' \donttest{ #' src <- connect_to_wordbank() #' } #' @export connect_to_wordbank <- function(db_args = NULL) { if (is.null(db_args)) { db_args <- get_wordbank_args() if (is.null(db_args)) return() } tryCatch(error = handle_con, { src <- DBI::dbConnect(RMySQL::MySQL(), host = db_args$host, dbname = db_args$dbname, user = db_args$user, password = db_args$password) enc <- DBI::dbGetQuery(src, "SELECT @@character_set_database") DBI::dbSendQuery(src, glue::glue("SET CHARACTER SET {enc}")) return(src) }) } # safe_tbl <- function(src, ...) { # purrr::safely(dplyr::tbl) # } #' Connect to an instrument's Wordbank table #' #' @keywords internal #' #' @param src A connection to the Wordbank database. #' @param language A string of the instrument's language (insensitive to case #' and whitespace). #' @param form A string of the instrument's form (insensitive to case and #' whitespace). #' @return A \code{tbl} object containing the instrument's data. get_instrument_table <- function(src, language, form) { san_string <- function(s) { s %>% tolower() %>% stringr::str_replace_all("[[:punct:]]", "") %>% stringr::str_split(" ") %>% unlist() } table_name <- paste(c("instruments", san_string(language), san_string(form)), collapse = "_") tryCatch(dplyr::tbl(src, table_name), error = handle_con) } #' Connect to a single Wordbank common table #' #' @keywords internal #' #' @param src A connection to the Wordbank database. #' @param name A string indicating the name of a common table. #' @return A \code{tbl} object. get_common_table <- function(src, name) { suppressWarnings( tryCatch(dplyr::tbl(src, paste("common", name, sep = "_")), error = handle_con) ) } #' Get the Wordbank instruments #' #' @return A data frame where each row is a CDI instrument and each column is a #' variable about the instrument (\code{instrument_id}, \code{language}, #' \code{form}, \code{age_min}, \code{age_max}, \code{has_grammar}). #' @inheritParams connect_to_wordbank #' #' @examples #' \donttest{ #' instruments <- get_instruments() #' } #' @export get_instruments <- function(db_args = NULL) { src <- connect_to_wordbank(db_args) if (is.null(src)) return() suppressWarnings( instruments <- get_common_table(src, name = "instrument") %>% dplyr::rename(instrument_id = "id") %>% dplyr::collect() ) DBI::dbDisconnect(src) return(instruments) } #' Get the Wordbank data sources #' #' @param language An optional string specifying which language's datasets to #' retrieve. #' @param form An optional string specifying which form's datasets to retrieve. #' @param admin_data A logical indicating whether to include summary-level #' statistics on the administrations within a dataset. #' @inheritParams connect_to_wordbank #' @return A data frame where each row is a particular dataset and its #' characteristics: \code{dataset_id}, \code{dataset_name}, #' \code{dataset_origin_name} (unique identifier for groups of datasets that #' may share children), \code{language}, \code{form}, \code{form_type}, #' \code{contributor} (contributor name and affiliated institution), #' \code{citation}, \code{license}, \code{longitudinal} (whether dataset #' includes longitudinal participants). Also includes summary statistics on a #' dataset if the \code{admin_data} flag is \code{TRUE}: number of #' administrations (\code{n_admins}). #' #' @examples #' \donttest{ #' english_ws_datasets <- get_datasets(language = "English (American)", #' form = "WS", #' admin_data = TRUE) #' } #' @export get_datasets <- function(language = NULL, form = NULL, admin_data = FALSE, db_args = NULL) { src <- connect_to_wordbank(db_args) if (is.null(src)) return() instruments_tbl <- get_instruments(db_args = db_args) %>% dplyr::select("instrument_id", "language", "form", "form_type") if (is.null(instruments_tbl)) return() datasets <- get_common_table(src, "dataset") %>% dplyr::collect() if (is.null(datasets)) return() dataset_data <- datasets %>% dplyr::left_join(instruments_tbl, by = "instrument_id") input_language <- language input_form <- form if (!is.null(language) | !is.null(form)) { if (!is.null(language)) { dataset_data <- dataset_data %>% dplyr::filter(.data$language == input_language) } if (!is.null(form)) { dataset_data <- dataset_data %>% dplyr::filter(.data$form == input_form) } assertthat::assert_that(nrow(dataset_data) > 0) } dataset_data <- dataset_data %>% dplyr::rename(dataset_id = .data$id, dataset_origin_name = .data$dataset_origin_id) %>% dplyr::mutate(longitudinal = as.logical(.data$longitudinal)) %>% dplyr::select(dplyr::starts_with("dataset"), dplyr::everything()) %>% dplyr::select(-"instrument_id") if (admin_data) { admins_tbl <- get_common_table(src, "administration") if (is.null(admins_tbl)) return() suppressWarnings( admins <- admins_tbl %>% dplyr::group_by(.data$dataset_id) %>% dplyr::summarise(n_admins = dplyr::n_distinct(.data$data_id)) %>% dplyr::collect() ) if (is.null(admins)) return() dataset_data <- dataset_data %>% dplyr::left_join(admins, by = "dataset_id") } DBI::dbDisconnect(src) return(dataset_data) } filter_query <- function(filter_language = NULL, filter_form = NULL, db_args = NULL) { if (!is.null(filter_language) | !is.null(filter_form)) { instruments <- get_instruments(db_args = db_args) if (!is.null(filter_language)) { instruments <- instruments %>% dplyr::filter(.data$language == filter_language) } if (!is.null(filter_form)) { instruments <- instruments %>% dplyr::filter(.data$form == filter_form) } assertthat::assert_that(nrow(instruments) > 0) instrument_ids <- instruments$instrument_id return(sprintf("WHERE instrument_id IN (%s)", paste(instrument_ids, collapse = ", "))) } else { return("") } } #' Get the Wordbank by-administration data #' #' @param language An optional string specifying which language's #' administrations to retrieve. #' @param form An optional string specifying which form's administrations to #' retrieve. #' @param filter_age A logical indicating whether to filter the administrations #' to ones in the valid age range for their instrument. #' @param include_demographic_info A logical indicating whether to include the #' child's demographic information (\code{birth_order}, \code{ethnicity}, #' \code{race}, \code{sex}, \code{caregiver_education}). #' @param include_birth_info A logical indicating whether to include the child's #' birth information (\code{birth_weight}, \code{born_early_or_late}, #' \code{gestational_age}, \code{zygosity}). #' @param include_health_conditions A logical indicating whether to include the #' child's health condition information (a nested dataframe under #' \code{health_conditions} with the column \code{health_condition_name}). #' @param include_language_exposure A logical indicating whether to include the #' child's language exposure information at time of administration (a nested #' dataframe under \code{language_exposures} with the columns \code{language}, #' \code{exposure_proportion}, \code{age_of_first_exposure}). #' @param include_study_internal_id A logical indicating whether to include #' the child's ID in the original study data. #' @inheritParams connect_to_wordbank #' @return A data frame where each row is a CDI administration and each column #' is a variable about the administration (\code{data_id}, #' \code{date_of_test}, \code{age}, \code{comprehension}, \code{production}, #' \code{is_norming}), the dataset it's from (\code{dataset_name}, #' \code{dataset_origin_name}, \code{language}, \code{form}, #' \code{form_type}), and information about the child as described in the #' parameter specification. #' #' @examples #' \donttest{ #' english_ws_admins <- get_administration_data("English (American)", "WS") #' all_admins <- get_administration_data() #' } #' @export get_administration_data <- function(language = NULL, form = NULL, filter_age = TRUE, include_demographic_info = FALSE, include_birth_info = FALSE, include_health_conditions = FALSE, include_language_exposure = FALSE, include_study_internal_id = FALSE, db_args = NULL) { src <- connect_to_wordbank(db_args) if (is.null(src)) return() datasets_tbl <- get_datasets(db_args = db_args) %>% dplyr::select("dataset_id", "dataset_name", "dataset_origin_name", "language", "form", "form_type") if (is.null(datasets_tbl)) return() select_cols <- c("data_id", "date_of_test", "age", "comprehension", "production", "is_norming", "child_id", "dataset_id", "age_min", "age_max") if (include_study_internal_id) select_cols <- c(select_cols, "study_internal_id") demo_cols <- c("birth_order", "ethnicity", "race", "sex", "caregiver_education_id") if (include_demographic_info) select_cols <- c(select_cols, demo_cols) birth_cols <- c("birth_weight", "born_early_or_late", "gestational_age", "zygosity") if (include_birth_info) select_cols <- c(select_cols, birth_cols) select_str <- paste(select_cols, collapse = ', ') admin_query <- glue( "SELECT common_administration.id AS administration_id, {select_str} FROM common_administration LEFT JOIN common_instrument ON common_administration.instrument_id = common_instrument.id LEFT JOIN common_child ON common_administration.child_id = common_child.id\n", {filter_query(language, form, db_args)} ) suppressWarnings( admins_tbl <- dplyr::tbl(src, dbplyr::sql(admin_query)) ) if (is.null(admins_tbl)) return() suppressWarnings( admins <- admins_tbl %>% dplyr::collect() %>% dplyr::mutate(data_id = as.numeric(.data$data_id), is_norming = as.logical(.data$is_norming)) %>% dplyr::left_join(datasets_tbl, by = "dataset_id") %>% dplyr::select(-"dataset_id") %>% dplyr::select("data_id", "date_of_test", "age", "comprehension", "production", "is_norming", dplyr::starts_with("dataset"), "language", "form", "form_type", dplyr::everything()) ) if (include_demographic_info) { caregiver_education_tbl <- get_common_table(src, "caregiver_education") if (is.null(caregiver_education_tbl)) return() caregiver_education <- caregiver_education_tbl %>% dplyr::collect() %>% dplyr::rename(caregiver_education_id = .data$id) %>% dplyr::arrange(.data$education_order) %>% dplyr::mutate(caregiver_education = factor( .data$education_level, levels = .data$education_level) ) %>% dplyr::select("caregiver_education_id", "caregiver_education") admins <- admins %>% dplyr::left_join(caregiver_education, by = "caregiver_education_id") %>% dplyr::select(-"caregiver_education_id") %>% dplyr::relocate(.data$caregiver_education, .after = .data$birth_order) %>% dplyr::mutate(sex = factor(.data$sex, levels = c("F", "M", "O"), labels = c("Female", "Male", "Other")), ethnicity = factor(.data$ethnicity, levels = c("H", "N"), labels = c("Hispanic", "Non-Hispanic")), race = factor(.data$race, levels = c("A", "B", "O", "W"), labels = c("Asian", "Black", "Other", "White")), birth_order = factor(.data$birth_order, levels = c(1, 2, 3, 4, 5, 6, 7, 8), labels = c("First", "Second", "Third", "Fourth", "Fifth", "Sixth", "Seventh", "Eighth"))) } if (include_language_exposure) { language_exposure_tbl <- get_common_table(src, "language_exposure") if (is.null(language_exposure_tbl)) return() language_exposures <- language_exposure_tbl %>% dplyr::semi_join(admins_tbl, by = "administration_id") %>% dplyr::select(-"id") %>% dplyr::collect() %>% tidyr::nest(language_exposures = -"administration_id") admins <- admins %>% dplyr::left_join(language_exposures, by = "administration_id") } if (include_health_conditions) { health_condition_tbl <- get_common_table(src, "health_condition") if (is.null(health_condition_tbl)) return() child_health_conditions_tbl <- get_common_table(src, "child_health_conditions") if (is.null(child_health_conditions_tbl)) return() child_health_conditions <- child_health_conditions_tbl %>% dplyr::semi_join(admins_tbl, by = "child_id") %>% dplyr::left_join(health_condition_tbl, by = c("healthcondition_id" = "id")) %>% dplyr::select(-"id", -"healthcondition_id") %>% dplyr::collect() %>% tidyr::nest(health_conditions = -"child_id") admins <- admins %>% dplyr::left_join(child_health_conditions, by = "child_id") } DBI::dbDisconnect(src) if (filter_age) admins <- admins %>% dplyr::filter(.data$age >= .data$age_min, .data$age <= .data$age_max) admins <- admins %>% dplyr::select(-"age_min", -"age_max", -"administration_id") return(admins) } strip_item_id <- function(item_id) { as.numeric(stringr::str_sub(item_id, 6, stringr::str_length(item_id))) } #' Get the Wordbank by-item data #' #' @param language An optional string specifying which language's items to #' retrieve. #' @param form An optional string specifying which form's items to retrieve. #' @inheritParams connect_to_wordbank #' @return A data frame where each row is a CDI item and each column is a #' variable about it: \code{item_id}, \code{item_kind} (e.g. word, gestures, #' word_endings), \code{item_definition}, \code{english_gloss}, #' \code{language}, \code{form}, \code{form_type}, \code{category} #' (meaning-based group as shown on the CDI form), \code{lexical_category}, #' \code{lexical_class}, \code{complexity_category}, \code{uni_lemma}). #' #' @examples #' \donttest{ #' english_ws_items <- get_item_data("English (American)", "WS") #' all_items <- get_item_data() #' } #' @export get_item_data <- function(language = NULL, form = NULL, db_args = NULL) { src <- connect_to_wordbank(db_args) if (is.null(src)) return() item_tbl <- get_common_table(src, "item") if (is.null(item_tbl)) return() item_query <- paste( "SELECT item_id, language, form, form_type, item_kind, category, item_definition, english_gloss, uni_lemma, lexical_category, complexity_category FROM common_item LEFT JOIN common_instrument ON common_item.instrument_id = common_instrument.id LEFT JOIN common_item_category ON common_item.item_category_id = common_item_category.id LEFT JOIN common_uni_lemma ON common_item.uni_lemma_id = common_uni_lemma.id", filter_query(language, form, db_args), sep = "\n") items <- dplyr::tbl(src, dbplyr::sql(item_query)) %>% dplyr::collect() DBI::dbDisconnect(src) return(items) } #' Get the Wordbank administration-by-item data #' #' @param language A string of the instrument's language (insensitive to case #' and whitespace). #' @param form A string of the instrument's form (insensitive to case and #' whitespace). #' @param items A character vector of column names of \code{instrument_table} of #' items to extract. If not supplied, defaults to all the columns of #' \code{instrument_table}. #' @param administration_info Either a logical indicating whether to include #' administration data or a data frame of administration data (as returned by #' \code{get_administration_data}). #' @param item_info Either a logical indicating whether to include item data or #' a data frame of item data (as returned by \code{get_item_data}). #' @param ... <[`dynamic-dots`][rlang::dyn-dots]> Arguments passed to #' \code{get_administration_data()}. #' @inheritParams connect_to_wordbank #' @return A data frame where each row contains the values (\code{value}, #' \code{produces}, \code{understands}) of a given item (\code{item_id}) for a #' given administration (\code{data_id}), with additional columns of variables #' about the administration and item, as specified. #' #' @examples #' \donttest{ #' eng_ws_data <- get_instrument_data(language = "English (American)", #' form = "WS", #' items = c("item_1", "item_42"), #' item_info = TRUE) #' } #' @export get_instrument_data <- function(language, form, items = NULL, administration_info = FALSE, item_info = FALSE, db_args = NULL, ...) { items_quo <- rlang::enquo(items) input_language <- language input_form <- form src <- connect_to_wordbank(db_args) if (is.null(src)) return() instrument_tbl <- get_instrument_table(src, language, form) if (is.null(instrument_tbl)) return() columns <- colnames(instrument_tbl) if (is.null(items)) { items <- columns[2:length(columns)] items_quo <- rlang::enquo(items) } else { assertthat::assert_that(all(items %in% columns)) names(items) <- NULL } if ("logical" %in% class(administration_info)) { if (administration_info) { administration_info <- get_administration_data(language, form, db_args = db_args, ...) } else { administration_info <- NULL } } if (!is.null(administration_info)) { administration_info <- administration_info %>% dplyr::filter(.data$language == input_language, .data$form == input_form) %>% dplyr::select(-"language", -"form", -"form_type") } if ("logical" %in% class(item_info)) { item_data <- get_item_data(language, form, db_args = db_args) } else { item_data <- item_info } item_data <- item_data %>% dplyr::filter(.data$language == input_language, .data$form == input_form, is.element(.data$item_id, items)) %>% dplyr::mutate(num_item_id = strip_item_id(.data$item_id)) %>% dplyr::select(-"item_id") item_data_cols <- colnames(item_data) produces_vals <- c("produces", "produce") understands_vals <- c("understands", "underst") sometimes_vals <- c("sometimes", "sometim") na_vals <- c(NA, "NA") instrument_data <- instrument_tbl %>% dplyr::select("basetable_ptr_id", !!items_quo) %>% dplyr::collect() %>% dplyr::mutate(data_id = as.numeric(.data$basetable_ptr_id)) %>% dplyr::select(-"basetable_ptr_id") %>% tidyr::gather("item_id", "value", !!items_quo) %>% dplyr::mutate(num_item_id = strip_item_id(.data$item_id)) %>% dplyr::left_join(item_data, by = "num_item_id") %>% dplyr::mutate( .after = .data$value, # recode value for single-char values value = dplyr::case_when(.data$value %in% produces_vals ~ "produces", .data$value %in% understands_vals ~ "understands", .data$value %in% sometimes_vals ~ "sometimes", .data$value %in% na_vals ~ NA, .default = .data$value), # code value as produces only for words produces = .data$value == "produces", produces = dplyr::if_else(.data$item_kind == "word", .data$produces, NA), # code value as understands only for words in WG-type forms understands = .data$value == "understands" | .data$value == "produces", understands = dplyr::if_else( .data$form_type == "WG" & .data$item_kind == "word", .data$understands, NA ) ) if (!is.null(administration_info)) { instrument_data <- instrument_data %>% dplyr::right_join(administration_info, by = "data_id") } if ("logical" %in% class(item_info) && !item_info) { instrument_data <- instrument_data %>% dplyr::select(-{{ item_data_cols }}) } else { instrument_data <- instrument_data %>% dplyr::select(-"num_item_id") } DBI::dbDisconnect(src) return(instrument_data) }
/scratch/gouwar.j/cran-all/cranData/wordbankr/R/wordbankr.R
## ----include=FALSE------------------------------------------------------------ library(wordbankr) library(dplyr) library(ggplot2) knitr::opts_chunk$set(message = FALSE, warning = FALSE, cache = FALSE) theme_set(theme_minimal()) con <- connect_to_wordbank() can_connect <- !is.null(con) knitr::opts_chunk$set(eval = can_connect) ## ----------------------------------------------------------------------------- get_administration_data(language = "English (American)", form = "WS") get_administration_data() ## ----------------------------------------------------------------------------- get_item_data(language = "Italian", form = "WG") get_item_data() ## ----------------------------------------------------------------------------- get_instrument_data( language = "English (American)", form = "WS", items = c("item_26", "item_46") ) ## ----fig.width=6, fig.height=4------------------------------------------------ items <- get_item_data(language = "English (American)", form = "WS") if (!is.null(items)) { animals <- items %>% filter(category == "animals") } ## ----------------------------------------------------------------------------- if (!is.null(animals)) { animal_data <- get_instrument_data(language = "English (American)", form = "WS", items = animals$item_id, administration_info = TRUE, item_info = TRUE) } ## ----fig.width=6, fig.height=4------------------------------------------------ if (!is.null(animal_data)) { animal_summary <- animal_data %>% group_by(age, data_id) %>% summarise(num_animals = sum(produces, na.rm = TRUE)) %>% group_by(age) %>% summarise(median_num_animals = median(num_animals, na.rm = TRUE)) ggplot(animal_summary, aes(x = age, y = median_num_animals)) + geom_point() + labs(x = "Age (months)", y = "Median animal words producing") } ## ----------------------------------------------------------------------------- get_instruments() ## ----------------------------------------------------------------------------- get_datasets(form = "WG") get_datasets(language = "Spanish (Mexican)", admin_data = TRUE) ## ----------------------------------------------------------------------------- fit_aoa(animal_data) fit_aoa(animal_data, method = "glmrob", proportion = 1/3) ## ----------------------------------------------------------------------------- get_crossling_items() ## ----eval=FALSE--------------------------------------------------------------- # get_crossling_data(uni_lemmas = c("hat", "nose")) %>% # select(language, uni_lemma, item_definition, age, n_children, comprehension, # production, comprehension_sd, production_sd) %>% # arrange(uni_lemma)
/scratch/gouwar.j/cran-all/cranData/wordbankr/inst/doc/wordbankr.R
--- title: "Accessing the Wordbank database" author: "Mika Braginsky" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Accessing the Wordbank database} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ```{r, include=FALSE} library(wordbankr) library(dplyr) library(ggplot2) knitr::opts_chunk$set(message = FALSE, warning = FALSE, cache = FALSE) theme_set(theme_minimal()) con <- connect_to_wordbank() can_connect <- !is.null(con) knitr::opts_chunk$set(eval = can_connect) ``` The `wordbankr` package allows you to access data in the [Wordbank database](http://wordbank.stanford.edu/) from `R`. This vignette shows some examples of how to use the data loading functions and what the resulting data look like. There are three different data views that you can pull out of Wordbank: by-administration, by-item, and administration-by-item. Additionally, you can get metadata about the datasets and instruments underlying the data. Advanced functionality let's you get estimates of words' age of acquisition and word mappings across languages. ## Administrations The `get_administration_data()` function gives by-administration information, either for a specific language and/or form or for all instruments. ```{r} get_administration_data(language = "English (American)", form = "WS") get_administration_data() ``` ## Items The `get_item_data()` function gives by-item information, either for a specific language and/or form or for all instruments. ```{r} get_item_data(language = "Italian", form = "WG") get_item_data() ``` ## Administrations x Items If you are only looking at total vocabulary size, `admins` is all you need, since it has both productive and receptive vocabulary sizes calculated. If you are looking at specific items or subsets of items, you need to load instrument data, using the `get_instrument_data()` function. Pass it an instrument language and form, along with a list of items you want to extract (by `item_id`). ```{r} get_instrument_data( language = "English (American)", form = "WS", items = c("item_26", "item_46") ) ``` By default `get_instrument_table()` returns a data frame with columns of the administration's `data_id`, the item's `num_item_id` (numerical `item_id`), and the corresponding value. To include administration information, you can set the `administrations` argument to `TRUE`, or pass the result of `get_administration_data()` as `administrations` (that way you can prevent the administration data from being loaded multiple times). Similarly, you can set the `iteminfo` argument to `TRUE`, or pass it result of `get_item_data()`. Loading the data is fast if you need only a handful of items, but the time scales about linearly with the number of items, and can get quite slow if you need many or all of them. So, it's a good idea to filter down to only the items you need before calling `get_instrument_data()`. As an example, let's say we want to look at the production of animal words on English Words & Sentences over age. First we get the items we want: ```{r, fig.width=6, fig.height=4} items <- get_item_data(language = "English (American)", form = "WS") if (!is.null(items)) { animals <- items %>% filter(category == "animals") } ``` Then we get the instrument data for those items: ```{r} if (!is.null(animals)) { animal_data <- get_instrument_data(language = "English (American)", form = "WS", items = animals$item_id, administration_info = TRUE, item_info = TRUE) } ``` Finally, we calculate how many animals words each child produces and the median number of animals of each age bin: ```{r, fig.width=6, fig.height=4} if (!is.null(animal_data)) { animal_summary <- animal_data %>% group_by(age, data_id) %>% summarise(num_animals = sum(produces, na.rm = TRUE)) %>% group_by(age) %>% summarise(median_num_animals = median(num_animals, na.rm = TRUE)) ggplot(animal_summary, aes(x = age, y = median_num_animals)) + geom_point() + labs(x = "Age (months)", y = "Median animal words producing") } ``` ## Metadata ### Instruments The `get_instruments()` function gives information on all the CDI instruments in Wordbank. ```{r} get_instruments() ``` ### Datasets The `get_datasets()` function gives information on all the datasets in Wordbank, either for a specific language and/or form or for all instruments. If the `admin_data` argument is set to `TRUE`, the results will also include the number of administrations in the database from that dataset. ```{r} get_datasets(form = "WG") get_datasets(language = "Spanish (Mexican)", admin_data = TRUE) ``` ## Advanced functionality: Age of acquisition The `fit_aoa()` function computes estimates of items' age of acquisition (AoA). It needs to be provided with a data frame returned by `get_instrument_data()` -- one row per administration x item combination, and minimally the columns `age` and `num_item_id`. It returns a data frame with one row per item and an `aoa` column with the estimate, preserving and item-level columns in the input data. The AoA is estimated by computing the proportion of administrations for which the child understands/produces (`measure`) each word, smoothing the proportion using `method`, and taking the age at which the smoothed value is greater than `proportion`. ```{r} fit_aoa(animal_data) fit_aoa(animal_data, method = "glmrob", proportion = 1/3) ``` ## Advanced functionality: Cross-linguistic data One of the item-level fields is `uni_lemma` ("universal lemma"), which is intended to be an approximate semantic mapping between words across the languages in Wordbank. The function `get_crossling_items()` simply gives all the available `uni_lemma` values. ```{r} get_crossling_items() ``` The function `get_crossling_data()` takes a vector of `uni_lemmas` and returns a data frame of summary statistics for each item mapped to that uni_lemma in any language (on `WG` forms). Each row is combination of item and age, and the columns indicate the number of children (`n_children`), means (`comprehension`, `production`), standard deviations (`comprehension_sd`, `production_sd`), and item-level fields. ```{r, eval=FALSE} get_crossling_data(uni_lemmas = c("hat", "nose")) %>% select(language, uni_lemma, item_definition, age, n_children, comprehension, production, comprehension_sd, production_sd) %>% arrange(uni_lemma) ```
/scratch/gouwar.j/cran-all/cranData/wordbankr/inst/doc/wordbankr.Rmd
--- title: "Accessing the Wordbank database" author: "Mika Braginsky" date: "`r Sys.Date()`" output: rmarkdown::html_vignette: toc: true vignette: > %\VignetteIndexEntry{Accessing the Wordbank database} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ```{r, include=FALSE} library(wordbankr) library(dplyr) library(ggplot2) knitr::opts_chunk$set(message = FALSE, warning = FALSE, cache = FALSE) theme_set(theme_minimal()) con <- connect_to_wordbank() can_connect <- !is.null(con) knitr::opts_chunk$set(eval = can_connect) ``` The `wordbankr` package allows you to access data in the [Wordbank database](http://wordbank.stanford.edu/) from `R`. This vignette shows some examples of how to use the data loading functions and what the resulting data look like. There are three different data views that you can pull out of Wordbank: by-administration, by-item, and administration-by-item. Additionally, you can get metadata about the datasets and instruments underlying the data. Advanced functionality let's you get estimates of words' age of acquisition and word mappings across languages. ## Administrations The `get_administration_data()` function gives by-administration information, either for a specific language and/or form or for all instruments. ```{r} get_administration_data(language = "English (American)", form = "WS") get_administration_data() ``` ## Items The `get_item_data()` function gives by-item information, either for a specific language and/or form or for all instruments. ```{r} get_item_data(language = "Italian", form = "WG") get_item_data() ``` ## Administrations x Items If you are only looking at total vocabulary size, `admins` is all you need, since it has both productive and receptive vocabulary sizes calculated. If you are looking at specific items or subsets of items, you need to load instrument data, using the `get_instrument_data()` function. Pass it an instrument language and form, along with a list of items you want to extract (by `item_id`). ```{r} get_instrument_data( language = "English (American)", form = "WS", items = c("item_26", "item_46") ) ``` By default `get_instrument_table()` returns a data frame with columns of the administration's `data_id`, the item's `num_item_id` (numerical `item_id`), and the corresponding value. To include administration information, you can set the `administrations` argument to `TRUE`, or pass the result of `get_administration_data()` as `administrations` (that way you can prevent the administration data from being loaded multiple times). Similarly, you can set the `iteminfo` argument to `TRUE`, or pass it result of `get_item_data()`. Loading the data is fast if you need only a handful of items, but the time scales about linearly with the number of items, and can get quite slow if you need many or all of them. So, it's a good idea to filter down to only the items you need before calling `get_instrument_data()`. As an example, let's say we want to look at the production of animal words on English Words & Sentences over age. First we get the items we want: ```{r, fig.width=6, fig.height=4} items <- get_item_data(language = "English (American)", form = "WS") if (!is.null(items)) { animals <- items %>% filter(category == "animals") } ``` Then we get the instrument data for those items: ```{r} if (!is.null(animals)) { animal_data <- get_instrument_data(language = "English (American)", form = "WS", items = animals$item_id, administration_info = TRUE, item_info = TRUE) } ``` Finally, we calculate how many animals words each child produces and the median number of animals of each age bin: ```{r, fig.width=6, fig.height=4} if (!is.null(animal_data)) { animal_summary <- animal_data %>% group_by(age, data_id) %>% summarise(num_animals = sum(produces, na.rm = TRUE)) %>% group_by(age) %>% summarise(median_num_animals = median(num_animals, na.rm = TRUE)) ggplot(animal_summary, aes(x = age, y = median_num_animals)) + geom_point() + labs(x = "Age (months)", y = "Median animal words producing") } ``` ## Metadata ### Instruments The `get_instruments()` function gives information on all the CDI instruments in Wordbank. ```{r} get_instruments() ``` ### Datasets The `get_datasets()` function gives information on all the datasets in Wordbank, either for a specific language and/or form or for all instruments. If the `admin_data` argument is set to `TRUE`, the results will also include the number of administrations in the database from that dataset. ```{r} get_datasets(form = "WG") get_datasets(language = "Spanish (Mexican)", admin_data = TRUE) ``` ## Advanced functionality: Age of acquisition The `fit_aoa()` function computes estimates of items' age of acquisition (AoA). It needs to be provided with a data frame returned by `get_instrument_data()` -- one row per administration x item combination, and minimally the columns `age` and `num_item_id`. It returns a data frame with one row per item and an `aoa` column with the estimate, preserving and item-level columns in the input data. The AoA is estimated by computing the proportion of administrations for which the child understands/produces (`measure`) each word, smoothing the proportion using `method`, and taking the age at which the smoothed value is greater than `proportion`. ```{r} fit_aoa(animal_data) fit_aoa(animal_data, method = "glmrob", proportion = 1/3) ``` ## Advanced functionality: Cross-linguistic data One of the item-level fields is `uni_lemma` ("universal lemma"), which is intended to be an approximate semantic mapping between words across the languages in Wordbank. The function `get_crossling_items()` simply gives all the available `uni_lemma` values. ```{r} get_crossling_items() ``` The function `get_crossling_data()` takes a vector of `uni_lemmas` and returns a data frame of summary statistics for each item mapped to that uni_lemma in any language (on `WG` forms). Each row is combination of item and age, and the columns indicate the number of children (`n_children`), means (`comprehension`, `production`), standard deviations (`comprehension_sd`, `production_sd`), and item-level fields. ```{r, eval=FALSE} get_crossling_data(uni_lemmas = c("hat", "nose")) %>% select(language, uni_lemma, item_definition, age, n_children, comprehension, production, comprehension_sd, production_sd) %>% arrange(uni_lemma) ```
/scratch/gouwar.j/cran-all/cranData/wordbankr/vignettes/wordbankr.Rmd
# Generated by using Rcpp::compileAttributes() -> do not edit by hand # Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393 is_overlap <- function(x11, y11, sw11, sh11, boxes1) { .Call('_wordcloud_is_overlap', PACKAGE = 'wordcloud', x11, y11, sw11, sh11, boxes1) }
/scratch/gouwar.j/cran-all/cranData/wordcloud/R/RcppExports.R
# Author: ianfellows ############################################################################### wordcloud <- function(words, freq, scale = c(4, .5), min.freq = 3, max.words = Inf, random.order = TRUE, random.color = FALSE, rot.per = .1, colors = "black", ordered.colors = FALSE, use.r.layout = FALSE, fixed.asp = TRUE, ...) { if (!fixed.asp && rot.per > 0) stop("Variable aspect ratio not supported for rotated words. Set rot.per=0.") tails <- "g|j|p|q|y" last <- 1 nc <- length(colors) if (missing(freq)) { requireNamespace("tm") requireNamespace("slam") #if(!require("tm")) # stop("freq must either be non-missing, or the tm package must be available") if (is.character(words) || is.factor(words)) { corpus <- tm::Corpus(tm::VectorSource(words)) corpus <- tm::tm_map(corpus, tm::removePunctuation) corpus <- tm::tm_map(corpus, function(x) tm::removeWords(x, tm::stopwords())) } else corpus <- words tdm <- tm::TermDocumentMatrix(corpus) freq <- slam::row_sums(tdm) words <- names(freq) } if (ordered.colors) { if (length(colors) != 1 && length(colors) != length(words)) { stop(paste("Length of colors does not match length of words", "vector")) } } if (min.freq > max(freq)) min.freq <- 0 overlap <- function(x1, y1, sw1, sh1) { if (!use.r.layout) return(is_overlap(x1, y1, sw1, sh1, boxes)) s <- 0 if (length(boxes) == 0) return(FALSE) for (i in c(last, 1:length(boxes))) { bnds <- boxes[[i]] x2 <- bnds[1] y2 <- bnds[2] sw2 <- bnds[3] sh2 <- bnds[4] if (x1 < x2) overlap <- x1 + sw1 > x2 - s else overlap <- x2 + sw2 > x1 - s if (y1 < y2) overlap <- overlap && (y1 + sh1 > y2 - s) else overlap <- overlap && (y2 + sh2 > y1 - s) if (overlap) { last <<- i return(TRUE) } } FALSE } ord <- rank(-freq, ties.method = "random") words <- words[ord <= max.words] freq <- freq[ord <= max.words] if (ordered.colors) { colors <- colors[ord <= max.words] } if (random.order) ord <- sample.int(length(words)) else ord <- order(freq, decreasing = TRUE) words <- words[ord] freq <- freq[ord] words <- words[freq >= min.freq] freq <- freq[freq >= min.freq] if (ordered.colors) { colors <- colors[ord][freq >= min.freq] } thetaStep <- .1 rStep <- .05 plot.new() op <- par("mar") par(mar = c(0, 0, 0, 0)) if (fixed.asp) plot.window(c(0, 1), c(0, 1), asp = 1) else plot.window(c(0, 1), c(0, 1)) normedFreq <- freq / max(freq) size <- (scale[1] - scale[2]) * normedFreq + scale[2] boxes <- list() for (i in 1:length(words)) { rotWord <- runif(1) < rot.per r <- 0 theta <- runif(1, 0, 2 * pi) x1 <- .5 y1 <- .5 wid <- strwidth(words[i], cex = size[i], ...) ht <- strheight(words[i], cex = size[i], ...) #mind your ps and qs if (grepl(tails, words[i])) ht <- ht + ht * .2 if (rotWord) { tmp <- ht ht <- wid wid <- tmp } isOverlaped <- TRUE while (isOverlaped) { if (!overlap(x1 - .5 * wid, y1 - .5 * ht, wid, ht) && x1 - .5 * wid > 0 && y1 - .5 * ht > 0 && x1 + .5 * wid < 1 && y1 + .5 * ht < 1) { if (!random.color) { if (ordered.colors) { cc <- colors[i] } else { cc <- ceiling(nc * normedFreq[i]) cc <- colors[cc] } } else { cc <- colors[sample(1:nc, 1)] } text( x1, y1, words[i], cex = size[i], offset = 0, srt = rotWord * 90, col = cc, ... ) #rect(x1-.5*wid,y1-.5*ht,x1+.5*wid,y1+.5*ht) boxes[[length(boxes) + 1]] <- c(x1 - .5 * wid, y1 - .5 * ht, wid, ht) isOverlaped <- FALSE } else{ if (r > sqrt(.5)) { warning(paste(words[i], "could not be fit on page. It will not be plotted.")) isOverlaped <- FALSE } theta <- theta + thetaStep r <- r + rStep * thetaStep / (2 * pi) x1 <- .5 + r * cos(theta) y1 <- .5 + r * sin(theta) } } } par(mar = op) invisible() } #Call down to c++ to find out if any overplotting would occur #.overlap <- function(x11,y11,sw11,sh11,boxes1){ # .Call("is_overlap",x11,y11,sw11,sh11,boxes1) #} #a word cloud showing the common words among documents commonality.cloud <- function(term.matrix, comonality.measure = min, max.words = 300, ...) { ndoc <- ncol(term.matrix) for (i in 1:ndoc) { term.matrix[, i] <- term.matrix[, i] / sum(term.matrix[, i]) } freq <- apply(term.matrix, 1, function(x) comonality.measure(x)) freq <- freq + min(freq) wordcloud(rownames(term.matrix)[freq > 0], freq[freq > 0], min.freq = 0, max.words = max.words, ...) } #a cloud comparing the frequencies of words across documents comparison.cloud <- function(term.matrix, scale = c(4, .5), max.words = 300, random.order = FALSE, rot.per = .1, colors = brewer.pal(max(3, ncol(term.matrix)), "Dark2"), use.r.layout = FALSE, title.size = 3, title.colors = NULL, match.colors = FALSE, title.bg.colors = "grey90", ...) { ndoc <- ncol(term.matrix) thetaBins <- seq(from = 0, to = 2 * pi, length = ndoc + 1) for (i in 1:ndoc) { term.matrix[, i] <- term.matrix[, i] / sum(term.matrix[, i]) } mean.rates <- rowMeans(term.matrix) for (i in 1:ndoc) { term.matrix[, i] <- term.matrix[, i] - mean.rates } group <- apply(term.matrix, 1, function(x) which.max(x)) words <- rownames(term.matrix) freq <- apply(term.matrix, 1, function(x) max(x)) tails <- "g|j|p|q|y" last <- 1 nc <- length(colors) overlap <- function(x1, y1, sw1, sh1) { if (!use.r.layout) return(is_overlap(x1, y1, sw1, sh1, boxes)) s <- 0 if (length(boxes) == 0) return(FALSE) for (i in c(last, 1:length(boxes))) { bnds <- boxes[[i]] x2 <- bnds[1] y2 <- bnds[2] sw2 <- bnds[3] sh2 <- bnds[4] if (x1 < x2) overlap <- x1 + sw1 > x2 - s else overlap <- x2 + sw2 > x1 - s if (y1 < y2) overlap <- overlap && (y1 + sh1 > y2 - s) else overlap <- overlap && (y2 + sh2 > y1 - s) if (overlap) { last <<- i return(TRUE) } } FALSE } ord <- rank(-freq, ties.method = "random") words <- words[ord <= max.words] freq <- freq[ord <= max.words] group <- group[ord <= max.words] if (random.order) { ord <- sample.int(length(words)) } else{ ord <- order(freq, decreasing = TRUE) } words <- words[ord] freq <- freq[ord] group <- group[ord] thetaStep <- .05 rStep <- .05 plot.new() op <- par("mar") par(mar = c(0, 0, 0, 0)) plot.window(c(0, 1), c(0, 1), asp = 1) normedFreq <- freq / max(freq) size <- (scale[1] - scale[2]) * normedFreq + scale[2] boxes <- list() #add titles docnames <- colnames(term.matrix) if (!is.null(title.colors)) { title.colors <- rep(title.colors, length.out = ndoc) } title.bg.colors <- rep(title.bg.colors, length.out = ndoc) for (i in 1:ndoc) { th <- mean(thetaBins[i:(i + 1)]) word <- docnames[i] wid <- strwidth(word, cex = title.size) * 1.2 ht <- strheight(word, cex = title.size) * 1.2 x1 <- .5 + .45 * cos(th) y1 <- .5 + .45 * sin(th) rect(x1 - .5 * wid, y1 - .5 * ht, x1 + .5 * wid, y1 + .5 * ht, col = title.bg.colors[i], border = "transparent") if (is.null(title.colors)) { if (match.colors) { text(x1, y1, word, cex = title.size, col = colors[i]) } else{ text(x1, y1, word, cex = title.size) } } else{ text(x1, y1, word, cex = title.size, col = title.colors[i]) } boxes[[length(boxes) + 1]] <- c(x1 - .5 * wid, y1 - .5 * ht, wid, ht) } for (i in 1:length(words)) { rotWord <- runif(1) < rot.per r <- 0 theta <- runif(1, 0, 2 * pi) x1 <- .5 y1 <- .5 wid <- strwidth(words[i], cex = size[i], ...) ht <- strheight(words[i], cex = size[i], ...) #mind your ps and qs if (grepl(tails, words[i])) ht <- ht + ht * .2 if (rotWord) { tmp <- ht ht <- wid wid <- tmp } isOverlaped <- TRUE while (isOverlaped) { inCorrectRegion <- theta > thetaBins[group[i]] && theta < thetaBins[group[i] + 1] if (inCorrectRegion && !overlap(x1 - .5 * wid, y1 - .5 * ht, wid, ht) && x1 - .5 * wid > 0 && y1 - .5 * ht > 0 && x1 + .5 * wid < 1 && y1 + .5 * ht < 1) { text( x1, y1, words[i], cex = size[i], offset = 0, srt = rotWord * 90, col = colors[group[i]], ... ) #rect(x1-.5*wid,y1-.5*ht,x1+.5*wid,y1+.5*ht) boxes[[length(boxes) + 1]] <- c(x1 - .5 * wid, y1 - .5 * ht, wid, ht) isOverlaped <- FALSE } else{ if (r > sqrt(.5)) { warning(paste(words[i], "could not be fit on page. It will not be plotted.")) isOverlaped <- FALSE } theta <- theta + thetaStep if (theta > 2 * pi) theta <- theta - 2 * pi r <- r + rStep * thetaStep / (2 * pi) x1 <- .5 + r * cos(theta) y1 <- .5 + r * sin(theta) } } } par(mar = op) invisible() } wordlayout <- function(x, y, words, cex = 1, rotate90 = FALSE, xlim = c(-Inf, Inf), ylim = c(-Inf, Inf), tstep = .1, rstep = .1, ...) { tails <- "g|j|p|q|y" n <- length(words) sdx <- sd(x, na.rm = TRUE) sdy <- sd(y, na.rm = TRUE) if (sdx == 0) sdx <- 1 if (sdy == 0) sdy <- 1 if (length(cex) == 1) cex <- rep(cex, n) if (length(rotate90) == 1) rotate90 <- rep(rotate90, n) boxes <- list() for (i in 1:length(words)) { rotWord <- rotate90[i] r <- 0 theta <- runif(1, 0, 2 * pi) x1 <- xo <- x[i] y1 <- yo <- y[i] wid <- strwidth(words[i], cex = cex[i], ...) ht <- strheight(words[i], cex = cex[i], ...) #mind your ps and qs if (grepl(tails, words[i])) ht <- ht + ht * .2 if (rotWord) { tmp <- ht ht <- wid wid <- tmp } isOverlaped <- TRUE while (isOverlaped) { if (!is_overlap(x1 - .5 * wid, y1 - .5 * ht, wid, ht, boxes) && x1 - .5 * wid > xlim[1] && y1 - .5 * ht > ylim[1] && x1 + .5 * wid < xlim[2] && y1 + .5 * ht < ylim[2]) { boxes[[length(boxes) + 1]] <- c(x1 - .5 * wid, y1 - .5 * ht, wid, ht) isOverlaped <- FALSE } else{ theta <- theta + tstep r <- r + rstep * tstep / (2 * pi) x1 <- xo + sdx * r * cos(theta) y1 <- yo + sdy * r * sin(theta) } } } result <- do.call(rbind, boxes) colnames(result) <- c("x", "y", "width", "ht") rownames(result) <- words result } textplot <- function(x, y, words, cex = 1, new = TRUE, show.lines = TRUE, ...) { if (new) plot(x, y, type = "n", ...) lay <- wordlayout(x, y, words, cex, ...) if (show.lines) { for (i in 1:length(x)) { xl <- lay[i, 1] yl <- lay[i, 2] w <- lay[i, 3] h <- lay[i, 4] if (x[i] < xl || x[i] > xl + w || y[i] < yl || y[i] > yl + h) { points(x[i], y[i], pch = 16, col = "red", cex = .5) nx <- xl + .5 * w ny <- yl + .5 * h lines(c(x[i], nx), c(y[i], ny), col = "grey") } } } text(lay[, 1] + .5 * lay[, 3], lay[, 2] + .5 * lay[, 4], words, cex = cex, ...) }
/scratch/gouwar.j/cran-all/cranData/wordcloud/R/cloud.R
#' Demo dataset with Words and Frequency #' #' A data file of words and frequency from tm package #' #' @format A data set with 1011 observations of 2 variables, words and frequancy #' "demoFreq" #' Demo dataset with Chinese character Words and Frequency #' #' A data file of words and frequency from tm package #' #' @format A data set with 885 observations of 2 variables, words and frequancy #' "demoFreqC"
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/data-definitions.R
##' Create wordcloud with the shape of a word ##' ##' @description ##' Function for Creating wordcloud with the shape of a word ##' ##' @usage ##' letterCloud(data, word, wordSize = 0, letterFont = NULL, ...) ##' ##' @param data A data frame including word and freq in each column ##' @param word A word to create shape for wordcloud. ##' @param wordSize Parameter of the size of the word. ##' @param letterFont Letter font ##' @param ... Other parameters for wordcloud. ##' ##' @examples ##' library(wordcloud2) ##' ##' letterCloud(demoFreq,"R") #' @export letterCloud = function(data, word, wordSize = 0, letterFont = NULL,...){ fileid = paste('ID', format(Sys.time(), "%Y%m%d%H%M%S"), round(proc.time()[3]*100), sep="_") figDir = paste0(tempdir(),"/",fileid,".png") # word = "COS" if(nchar(word)==1){ ofCex = -25 }else if(nchar(word)==2){ ofCex = -10 }else{ ofCex = -1 } png(filename = figDir,width = 800,height = 600) offset = par(mar = par()$mar) op = par(mar = c(0,0,0,0)) plot.new() text(0.5, 0.5, word, font = 2, family = letterFont, cex = 1/strwidth(word) + ofCex + wordSize) dev.off() par(offset) wordcloud2(data,figPath = figDir,...) }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/letterCloud.R
##' Change the themes of wordcloud2 ##' ##' @description ##' Function for Creating wordcloud theme ##' ##' ##' @param e1 wordcloud2 object ##' @param e2 wordcloud2 themes ##' @export ##' @method + wordcloud2 ##' @examples ##' wc = wordcloud2(demoFreq) ##' ##' wc + WCtheme(1) ##' wc + WCtheme(2) ##' wc + WCtheme(3) ##' wc + WCtheme(2) + WCtheme(3) `+.wordcloud2` = function(e1, e2){ if(e2$class == 1){ e1$x$minRotation = -pi/2 e1$x$maxRotation = -pi/2 }else if (e2$class ==2){ e1$x$minRotation = -pi/6 e1$x$maxRotation = -pi/6 e1$x$rotateRatio = 1 }else if (e2$class == 3){ e1$x$color = "random-light" e1$x$backgroundColor = "grey" } return(e1) }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/plot-construction.R
##' Plot wordcloud2 in shiny ##' ##' @description ##' Function for plotting wordcloud2 in shiny ##' ##' @usage ##' wordcloud2Output(outputId, width = "100\%", height = "400px") ##' renderWordcloud2(expr, env = parent.frame(), quoted = FALSE) ##' ##' @param outputId output variable to read from ##' @param width,height Must be a valid CSS unit (like \code{"100\%"}, ##' \code{"400px"}, \code{"auto"}) or a number, which will be coerced to a ##' string and have \code{"px"} appended. ##' @param expr An expression that generates a networkD3 graph ##' @param env The environment in which to evaluate \code{expr}. ##' @param quoted Is \code{expr} a quoted expression (with \code{quote()})? This ##' is useful if you want to save an expression in a variable. ##' ##' ##' @details ##' Use renderWordcloud2 to render an wordcloud2 object and use wordcloud2Output ##' output an wordcloud2 object. See more details in shiny package. ##' ##' @name wordcloud2-shiny NULL #' @rdname wordcloud2-shiny #' @export wordcloud2Output <- function(outputId, width = "100%", height = "400px") { htmlwidgets::shinyWidgetOutput(outputId, "wordcloud2", width, height, package = "wordcloud2") } #' @rdname wordcloud2-shiny #' @export renderWordcloud2 <- function(expr, env = parent.frame(), quoted = FALSE) { if (!quoted) { expr <- substitute(expr) } # force quoted htmlwidgets::shinyRenderWidget(expr, wordcloud2Output, env, quoted = TRUE) }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/renderWordcloud2.R
##' Change the themes of wordcloud2 ##' ##' @description ##' Function for Creating wordcloud theme ##' ##' @usage ##' WCtheme(class = 1) ##' ##' @param class class for theme in wordcloud2 ##' ##' @export ##' @examples ##' wc = wordcloud2(demoFreq) ##' ##' wc + WCtheme(1) ##' wc + WCtheme(2) ##' wc + WCtheme(3) ##' wc + WCtheme(2) + WCtheme(3) WCtheme = function(class = 1){ if(class == 1){ return(list(class =1, minRotation = -pi/2, maxRotation = -pi/2)) }else if (class ==2){ return(list(class =2, minRotation = -pi/6, maxRotation = -pi/6, rotateRatio = 1)) }else if (class == 3| class=='Dark'){ return(list(class =3, color = "random-light", backgroundColor = "grey")) }else{ stop("Out of themes~") } }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/theme.R
##' Create wordcloud by wordcloud2.js ##' ##' @description ##' Function for Creating wordcloud by wordcloud2.js ##' ##' @usage ##' wordcloud2(data, size = 1, minSize = 0, gridSize = 0, ##' fontFamily = 'Segoe UI', fontWeight = 'bold', ##' color = 'random-dark', backgroundColor = "white", ##' minRotation = -pi/4, maxRotation = pi/4, shuffle = TRUE, ##' rotateRatio = 0.4, shape = 'circle', ellipticity = 0.65, ##' widgetsize = NULL, figPath = NULL, hoverFunction = NULL) ##' ##' @param data A data frame including word and freq in each column ##' @param size Font size, default is 1. The larger size means the bigger word. ##' @param minSize minimum font size to draw on the canvas. ##' @param gridSize Size of the grid in pixels for marking the availability of the canvas ##' the larger the grid size, the bigger the gap between words. ##' @param fontFamily Font to use. ##' @param fontWeight Font weight to use, e.g. normal, bold or 600 ##' @param color color of the text, keyword 'random-dark' and 'random-light' can be used. ##' color vector is also supported in this param ##' @param backgroundColor Color of the background. ##' @param minRotation If the word should rotate, the minimum rotation ##' (in rad) the text should rotate. ##' @param maxRotation If the word should rotate, the maximum rotation (in rad) the text should rotate. ##' Set the two value equal to keep all text in one angle. ##' @param shuffle Shuffle the points to draw so the result will be different each time for the same list and settings. ##' @param rotateRatio Probability for the word to rotate. Set the number to 1 to always rotate. ##' @param shape The shape of the "cloud" to draw. Can be a keyword present. Available presents are 'circle' ##' (default), 'cardioid' (apple or heart shape curve, the most known polar equation), ##' 'diamond' (alias of square), 'triangle-forward', 'triangle', 'pentagon', and 'star'. ##' @param ellipticity degree of "flatness" of the shape wordcloud2.js should draw. ##' @param figPath The path to a figure used as a mask. ##' @param widgetsize size of the widgets ##' @param hoverFunction Callback to call when the cursor enters or leaves a region occupied ##' by a word. A string of java script function. ##' ##' @examples ##'library(wordcloud2) ##'# Global variables can go here ##' ##' ##' ##' wordcloud2(demoFreq) ##' wordcloud2(demoFreq, size = 2) ##' ##' wordcloud2(demoFreq, size = 1,shape = 'pentagon') ##' wordcloud2(demoFreq, size = 1,shape = 'star') ##' ##' wordcloud2(demoFreq, size = 2, ##' color = "random-light", backgroundColor = "grey") ##' ##' wordcloud2(demoFreq, size = 2, minRotation = -pi/2, maxRotation = -pi/2) ##' wordcloud2(demoFreq, size = 2, minRotation = -pi/6, maxRotation = -pi/6, ##' rotateRatio = 1) ##' wordcloud2(demoFreq, size = 2, minRotation = -pi/6, maxRotation = pi/6, ##' rotateRatio = 0.9) ##' ##' wordcloud2(demoFreqC, size = 2, ##' color = "random-light", backgroundColor = "grey") ##' wordcloud2(demoFreqC, size = 2, minRotation = -pi/6, maxRotation = -pi/6, ##' rotateRatio = 1) ##' ##' # Color Vector ##' ##' colorVec = rep(c('red', 'skyblue'), length.out=nrow(demoFreq)) ##' wordcloud2(demoFreq, color = colorVec, fontWeight = "bold") ##' ##' wordcloud2(demoFreq, ##' color = ifelse(demoFreq[, 2] > 20, 'red', 'skyblue')) #' @import htmlwidgets #' @export # data = data.frame(name=c("New","Old"), # freq=c(100,30)) wordcloud2 <- function(data, size = 1, minSize = 0, gridSize = 0, fontFamily = 'Segoe UI', fontWeight = 'bold', color = 'random-dark', backgroundColor = "white", minRotation = -pi/4, maxRotation = pi/4, shuffle = TRUE, rotateRatio = 0.4, shape = 'circle', ellipticity = 0.65, widgetsize = NULL, figPath = NULL, hoverFunction = NULL ) { if("table" %in% class(data)){ dataOut = data.frame(name = names(data), freq = as.vector(data)) }else{ data = as.data.frame(data) dataOut = data[,1:2] names(dataOut) = c("name", "freq") } if(!is.null(figPath)){ if(!file.exists(figPath)){ stop("cannot find fig in the figPath") } spPath = strsplit(figPath, "\\.")[[1]] len = length(spPath) figClass = spPath[len] if(!figClass %in% c("jpeg","jpg","png","bmp","gif")){ stop("file should be a jpeg, jpg, png, bmp or gif file!") } base64 = base64enc::base64encode(figPath) base64 = paste0("data:image/",figClass ,";base64,",base64) }else{ base64 = NULL } # create a list that contains the settings weightFactor = size * 180 / max(dataOut$freq) settings <- list( word = dataOut$name, freq = dataOut$freq, fontFamily = fontFamily, fontWeight = fontWeight, color = color, minSize = minSize, weightFactor = weightFactor, backgroundColor = backgroundColor, gridSize = gridSize, minRotation = minRotation, maxRotation = maxRotation, shuffle = shuffle, rotateRatio = rotateRatio, shape = shape, ellipticity = ellipticity, figBase64 = base64, hover = htmlwidgets::JS(hoverFunction) ) chart = htmlwidgets::createWidget("wordcloud2", settings, width = widgetsize[1], height = widgetsize[2], sizingPolicy = htmlwidgets::sizingPolicy( viewer.padding = 0, # viewer.suppress = T, browser.padding = 0, browser.fill = TRUE )) htmlwidgets::onRender(chart,"function(el,x){ console.log(123); if(!iii){ window.location.reload(); iii = False; } }") }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/wordcloud2.R
.onAttach <- function(libname, pkgname ){ # Sys.setlocale("LC_CTYPE","eng") }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/R/zzz.R
## ----eval = F------------------------------------------------------------ # require(devtools) # install_github("lchiffon/wordcloud2") ## ------------------------------------------------------------------------ library(wordcloud2) wordcloud2(data = demoFreq) ## ------------------------------------------------------------------------ head(demoFreq) ## ----eval = F------------------------------------------------------------ # wordcloud2(demoFreq, color = "random-light", backgroundColor = "grey") ## ------------------------------------------------------------------------ wordcloud2(demoFreq, minRotation = -pi/6, maxRotation = -pi/6, minSize = 10, rotateRatio = 1) ## ----eval = F------------------------------------------------------------ # figPath = system.file("examples/t.png",package = "wordcloud2") # wordcloud2(demoFreq, figPath = figPath, size = 1.5,color = "skyblue") ## ------------------------------------------------------------------------ letterCloud(demoFreq, word = "R", size = 2) ## ------------------------------------------------------------------------ letterCloud(demoFreq, word = "WORDCLOUD2", wordSize = 1) ## ----eval= F------------------------------------------------------------- # if(require(shiny)){ # library(wordcloud2) # # Global variables can go here # n <- 1 # # # Define the UI # ui <- bootstrapPage( # numericInput('size', 'Size of wordcloud', n), # wordcloud2Output('wordcloud2') # ) # # # # Define the server code # server <- function(input, output) { # output$wordcloud2 <- renderWordcloud2({ # # wordcloud2(demoFreqC, size=input$size) # wordcloud2(demoFreq, size=input$size) # }) # } # # Return a Shiny app object # # Sys.setlocale("LC_CTYPE","chs") #if you use Chinese character # ## Do not Run! # shinyApp(ui = ui, server = server) # }
/scratch/gouwar.j/cran-all/cranData/wordcloud2/inst/doc/wordcloud.R
--- title: "Wordcloud2 introduction" date: "`r Sys.Date()`" output: html_document: highlight: kate toc: true toc_depth: 4 mathjax: null vignette: > %\VignetteIndexEntry{Introduction} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} --- This is an introduction to `wordcloud2` package. This package provides an HTML5 interface to wordcloud for data visualization. [Timdream's wordcloud2.js](https://github.com/timdream/wordcloud2.js) is used in this package. ![png](img/wordcloud2.png) This document show two main function in `Wordcloud2`: 1. `wordcloud2`: provide traditional wordcloud with HTML5 2. `letterCloud`: provide wordcloud with selected word(letters). ### install wordcloud2 You may have installed this package. Well, I still want to leave these codes here for installing. ```{r eval = F} require(devtools) install_github("lchiffon/wordcloud2") ``` ### `wordlcoud2` function You can use wordcloud directly: ```{r} library(wordcloud2) wordcloud2(data = demoFreq) ``` `demoFreq` is a data.frame including word and freq in each column. ```{r} head(demoFreq) ``` ### Parameters - `data` - A data frame including word and freq in each column - `size` - Font size, default is 1. The larger size means the bigger word. - `fontFamily` - Font to use. - `fontWeight` - Font weight to use, e.g. normal, bold or 600 - `color` - color of the text, keyword 'random-dark' and 'random-light' can be used. color vector is also supported in this param - `minSize` - A character string of the subtitle - `backgroundColor` - Color of the background. - `gridSize` - Size of the grid in pixels for marking the availability of the canvas the larger the grid size, the bigger the gap between words. - `minRotation` - If the word should rotate, the minimum rotation (in rad) the text should rotate. - `maxRotation` - If the word should rotate, the maximum rotation (in rad) the text should rotate. Set the two value equal to keep all text in one angle. - `rotateRatio` - Probability for the word to rotate. Set the number to 1 to always rotate. - `shape` - The shape of the "cloud" to draw. Can be a keyword present. Available presents are 'circle' (default), 'cardioid' (apple or heart shape curve, the most known polar equation), 'diamond' (alias of square), 'triangle-forward', 'triangle', 'pentagon', and 'star'. - `ellipticity` - degree of "flatness" of the shape wordcloud2.js should draw. - `figPath` - A fig used for the wordcloud. - `widgetsize` - size of the widgets #### Example1: use color and backgroundcolor ```{r eval = F} wordcloud2(demoFreq, color = "random-light", backgroundColor = "grey") ``` ![png](img/ex1.png) #### Example2: use rotations ```{r} wordcloud2(demoFreq, minRotation = -pi/6, maxRotation = -pi/6, minSize = 10, rotateRatio = 1) ``` #### Example3: use figure file as a mask. For example, `t.png` is A BIRD with black and white: ![png](img/t.png) ```{r eval = F} figPath = system.file("examples/t.png",package = "wordcloud2") wordcloud2(demoFreq, figPath = figPath, size = 1.5,color = "skyblue") ``` ![png](img/tcloud.png) ### `letterCloud` function `letterCloud` provide the function to create a wordcloud with a word, like this: ```{r} letterCloud(demoFreq, word = "R", size = 2) ``` ![png](img/R.png) Or: ```{r} letterCloud(demoFreq, word = "WORDCLOUD2", wordSize = 1) ``` ![png](img/wordcloud2.png) **wordcloud with fig and letterCloud may disappeared in Rstudio Viewer, open into brower when you meet this bug** #### Parameters - `data` - A data frame including word and freq in each column - `word` - A word to create shape for wordcloud. - `wordSize` - Parameter of the size of the word, default is 2. - `letterFont` - Letter font - `...` - Other parameters for wordcloud2 Go to [wordcloud2](http://github.com/lchiffon/wordcloud2) in the github to leave a comment or give this package a star. ### shiny See Example: ```{r eval= F} if(require(shiny)){ library(wordcloud2) # Global variables can go here n <- 1 # Define the UI ui <- bootstrapPage( numericInput('size', 'Size of wordcloud', n), wordcloud2Output('wordcloud2') ) # Define the server code server <- function(input, output) { output$wordcloud2 <- renderWordcloud2({ # wordcloud2(demoFreqC, size=input$size) wordcloud2(demoFreq, size=input$size) }) } # Return a Shiny app object # Sys.setlocale("LC_CTYPE","chs") #if you use Chinese character ## Do not Run! shinyApp(ui = ui, server = server) } ```
/scratch/gouwar.j/cran-all/cranData/wordcloud2/inst/doc/wordcloud.Rmd
--- title: "Wordcloud2 introduction" date: "`r Sys.Date()`" output: html_document: highlight: kate toc: true toc_depth: 4 mathjax: null vignette: > %\VignetteIndexEntry{Introduction} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} --- This is an introduction to `wordcloud2` package. This package provides an HTML5 interface to wordcloud for data visualization. [Timdream's wordcloud2.js](https://github.com/timdream/wordcloud2.js) is used in this package. ![png](img/wordcloud2.png) This document show two main function in `Wordcloud2`: 1. `wordcloud2`: provide traditional wordcloud with HTML5 2. `letterCloud`: provide wordcloud with selected word(letters). ### install wordcloud2 You may have installed this package. Well, I still want to leave these codes here for installing. ```{r eval = F} require(devtools) install_github("lchiffon/wordcloud2") ``` ### `wordlcoud2` function You can use wordcloud directly: ```{r} library(wordcloud2) wordcloud2(data = demoFreq) ``` `demoFreq` is a data.frame including word and freq in each column. ```{r} head(demoFreq) ``` ### Parameters - `data` - A data frame including word and freq in each column - `size` - Font size, default is 1. The larger size means the bigger word. - `fontFamily` - Font to use. - `fontWeight` - Font weight to use, e.g. normal, bold or 600 - `color` - color of the text, keyword 'random-dark' and 'random-light' can be used. color vector is also supported in this param - `minSize` - A character string of the subtitle - `backgroundColor` - Color of the background. - `gridSize` - Size of the grid in pixels for marking the availability of the canvas the larger the grid size, the bigger the gap between words. - `minRotation` - If the word should rotate, the minimum rotation (in rad) the text should rotate. - `maxRotation` - If the word should rotate, the maximum rotation (in rad) the text should rotate. Set the two value equal to keep all text in one angle. - `rotateRatio` - Probability for the word to rotate. Set the number to 1 to always rotate. - `shape` - The shape of the "cloud" to draw. Can be a keyword present. Available presents are 'circle' (default), 'cardioid' (apple or heart shape curve, the most known polar equation), 'diamond' (alias of square), 'triangle-forward', 'triangle', 'pentagon', and 'star'. - `ellipticity` - degree of "flatness" of the shape wordcloud2.js should draw. - `figPath` - A fig used for the wordcloud. - `widgetsize` - size of the widgets #### Example1: use color and backgroundcolor ```{r eval = F} wordcloud2(demoFreq, color = "random-light", backgroundColor = "grey") ``` ![png](img/ex1.png) #### Example2: use rotations ```{r} wordcloud2(demoFreq, minRotation = -pi/6, maxRotation = -pi/6, minSize = 10, rotateRatio = 1) ``` #### Example3: use figure file as a mask. For example, `t.png` is A BIRD with black and white: ![png](img/t.png) ```{r eval = F} figPath = system.file("examples/t.png",package = "wordcloud2") wordcloud2(demoFreq, figPath = figPath, size = 1.5,color = "skyblue") ``` ![png](img/tcloud.png) ### `letterCloud` function `letterCloud` provide the function to create a wordcloud with a word, like this: ```{r} letterCloud(demoFreq, word = "R", size = 2) ``` ![png](img/R.png) Or: ```{r} letterCloud(demoFreq, word = "WORDCLOUD2", wordSize = 1) ``` ![png](img/wordcloud2.png) **wordcloud with fig and letterCloud may disappeared in Rstudio Viewer, open into brower when you meet this bug** #### Parameters - `data` - A data frame including word and freq in each column - `word` - A word to create shape for wordcloud. - `wordSize` - Parameter of the size of the word, default is 2. - `letterFont` - Letter font - `...` - Other parameters for wordcloud2 Go to [wordcloud2](http://github.com/lchiffon/wordcloud2) in the github to leave a comment or give this package a star. ### shiny See Example: ```{r eval= F} if(require(shiny)){ library(wordcloud2) # Global variables can go here n <- 1 # Define the UI ui <- bootstrapPage( numericInput('size', 'Size of wordcloud', n), wordcloud2Output('wordcloud2') ) # Define the server code server <- function(input, output) { output$wordcloud2 <- renderWordcloud2({ # wordcloud2(demoFreqC, size=input$size) wordcloud2(demoFreq, size=input$size) }) } # Return a Shiny app object # Sys.setlocale("LC_CTYPE","chs") #if you use Chinese character ## Do not Run! shinyApp(ui = ui, server = server) } ```
/scratch/gouwar.j/cran-all/cranData/wordcloud2/vignettes/wordcloud.Rmd
#' All five-letter words from the Nettalk Corpus Syllable Data Set. #' #' A dataset containing all five-letter words from the Nettalk Corpus Syllable #' Data Set as returned by qdapDictionaries::dictionaries(). #' #' @format A character vector of length 2488. #' @source \url{https://CRAN.R-project.org/package=qdapDictionaries/} "qdap_dict" #' All five-letter words from the Ubuntu dictionary. #' #' A dataset containing all five-letter words from Ubuntu dictionary #' `/usr/share/dict/words`. #' #' @format A character vector of length 4594. #' @source \url{https://ubuntu.com/} "ubuntu_dict" #' All words used as potential answers by the original WORDLE game. #' #' A dataset containing all words which can be used as answers to the original #' WORDLE game. #' #' @format A character vector of length 2315. #' @source \url{https://gist.github.com/cfreshman/a03ef2cba789d8cf00c08f767e0fad7b/} "wordle_answers" #' All words used to validate guesses by the original WORDLE game. #' #' A dataset containing all words which are used to validate guesses by the #' original WORDLE game. Note that this does not include the words which can be #' answers. Theses are held in ?wordle_answers. #' #' @format A character vector of length 10657. #' @source \url{https://gist.github.com/cfreshman/cdcdf777450c5b5301e439061d29694c} "wordle_allowed" #' Keyboard layouts for printing a wordler game at the console. #' #' A list of keyboard layouts used to show letters known to be not in target #' word, in the target word, and in the right position in the target word. #' Each element must be a list having 3 items, each representing a row of a #' keyboard layout. #' #' @format A list of length 1. #' @source \url{https://gist.github.com/cfreshman/cdcdf777450c5b5301e439061d29694c} "keyboards"
/scratch/gouwar.j/cran-all/cranData/wordler/R/data.R
#' Assess a guess against the target word #' #' Assesses the guess in list \code{game$guess} (index from #' \code{game$guess_count}) against the target word in \code{game$target}. #' #' Adds the assessment to the corresponding list item in \code{game$assess}. #' This assessment should be considered as how the guesses should be displayed #' to the user and replicates the behaviour of the WORDLE game #' (\url{https://www.powerlanguage.co.uk/wordle/}). #' #' For each letter in each guess, one of the following assessments are made: #' \itemize{ #' \item 'not_in_word' - the letter is not present in the target word (or has #' already been flagged as 'in_word' earlier in the word). #' \item 'in_word' - the letter is in the target word. More specifically, #' the first instance of the letter in the guess present in the word. #' Subsequent instances are flagged as 'not_in_word'. #' \item 'in_position' - the letter is in the same position in the target #' word. #' } #' #' @param game 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' #' @return 'wordler' game object. assess_guess <- function(game){ # Confirm wordler object if(!is.wordler(game)){ stop("`game` argument must be of class 'wordler'.") } # Get required items from game object guess <- game$guess[[game$guess_count]] target <- unlist(strsplit(game$target, "")) # Do letters match position in target word? in_position <- guess == target # Letters that aren't in position can still be in word to_check <- guess[!in_position] # Lookup counts of remaining guess letters in target word lookup <- count_freqs(to_check, target[!in_position]) # We only count as many occurrences of a guess letter as # are in the lookup as being in the word in_word <- mapply( function(idx, l) { if (in_position[idx]) return(TRUE) if (sum(lookup[[l]]) > 0) { lookup[[l]] <<- lookup[[l]] - 1L return(TRUE) } else FALSE }, idx = 1:length(guess), l = unlist(strsplit(guess, "")) ) # Build assessment vector assessment <- ifelse(in_word, "in_word", "not_in_word") assessment <- ifelse(in_position, "in_position", assessment) # Add assessment to game object and return game$assess[[game$guess_count]] <- assessment game } #' Get counts of each letter in the target #' #' @param xs,target we count the occurrences of each element in #' \code{xs} in \code{target} #' @return Named list of elements of \code{xs} with counts. count_freqs <- function(xs, target) { xs <- unique(xs) names(xs) <- xs lapply(xs, function(x) sum(target == x)) } #' Play a game of WORDLE in the console #' #' Starts an interactive game of WORDLE in the console. Based on WORDLE #' (\url{https://www.powerlanguage.co.uk/wordle/}). #' #' @param target_words character vector of potential target words for the #' game. A word will be randomly selected from this vector as the target word #' to be guessed. Defaults to words used by the WORDLE game online #' (?wordler::wordle_answers) if not provided. #' @param allowed_words character vector of valid words for the guess. Guess #' must be in this vector to be allowed. Defaults to words used by the WORDLE #' game online (?wordler::wordle_allowed) if not provided. #' #' @return No return value. Starts interactive game in console. #' #' @export play_wordler <- function(target_words = NULL, allowed_words = NULL){ print_instructions() # Establish default target words if none provided if(is.null(target_words)){ target_words <- wordler::wordle_answers } # Establish default allowed words if none provided if(is.null(allowed_words)){ allowed_words <- c(wordler::wordle_allowed, wordler::wordle_answers ) } # Create a new game game <- new_wordler(target = sample(target_words, 1)) while(!game$game_over){ print(game) # Ask player to guess a word new_guess <- readline("Enter a word: ") new_guess <- toupper(new_guess) # Make guess game <- have_a_guess(new_guess, game, allowed_words) # Has the player guessed correctly? if(game$game_won){ print(game) cat("Congratulations, you won!") next() } # Are all the guesses used up if(game$guess_count == 6){ print(game) cat("You have used all your guesses.\n") cat("The word you were looking for is", game$target) } } } #' Constructs a new object of class "wordler" #' #' Returns a "wordler" object which holds the state of a wordler game as #' guesses are made. The returned object will have a target word which is #' selected from the default list unless provided in the \code{target} #' argument. #' #' The wordler object is a list which has the following elements: #' #' \itemize{ #' \item \code{target} - The target word. #' \item \code{game_over} - A logical indicating if the game is over. Set to #' \code{TRUE} if either the word is correctly guessed, or all guesses are #' used. #' \item \code{game_won} - A logical indicating if the game has been won #' (target word correctly guessed). #' \item \code{guess_count} - The number of guesses made so far. #' \item \code{guess} - A list of guesses of the target word. #' \item \code{assess} - A list of assessments of the target word. Note that #' this represents how the letters in each guess should be displayed when #' printing the game. #' \item \code{keyboard} - A list representing the keyboard layout to be used #' when printing the game state. #' \item \code{letters_known_not_in_word} - A vector of letters known not to #' be in the target word based on guesses made so far. #' \item \code{letters_known_in_word} - A vector of letters known to #' be in the target word based on guesses made so far. #' \item \code{letters_known_not_in_word} - A vector of letters known to #' be in the right position in the target word based on guesses made so far. #' } #' #' @param target the target word for the game. Defaults to a random selection #' from words used by the WORDLE game online (?wordler::wordle_answers) if not #' provided. #' @param game_over a logical indicating if the game is over. Defaults to FALSE. #' @param game_won a logical indicating if the game has been won. In other #' words, has the target word been correctly guessed. #' @param guess_count an integer representing the number of guesses made so #' far. Defaults to 0. #' @param guess a list (of length 6) of character vectors (each of length 5) #' representing the guesses of the target word. Each element of the list #' represents one of six guesses allowed. Each guess defaults to #' \code{c("_", "_", "_", "_", "_")} to represent a guess not yet made. #' @param assess a list (of length 6) of character vectors (each of length 5) #' representing an assessment of each letter in each guess. #' @param keyboard a list (of length 3) of character vectors each representing #' a row of a keyboard layout used to visualise the game by \code{print()}. #' Defaults to QWERTY layout. #' @param letters_known_not_in_word a character vector of letters known not to #' be in the target word. #' @param letters_known_in_word a character vector of letters know to be in the #' target word. #' @param letters_known_in_position a character vector of letters known to be #' in the correct position in the target word. #' #' @return An object of class "wordler". #' @export #' #' @examples new_wordler <- function(target = sample(wordler::wordle_answers, 1), game_over = FALSE, game_won = FALSE, guess_count = 0, guess = lapply(1:6, function(x) unlist( strsplit("_____", ""))), assess = lapply(1:6, function(x) rep("not_in_word", 5)), keyboard = wordler::keyboards$qwerty, letters_known_not_in_word = character(0), letters_known_in_word = character(0), letters_known_in_position = character(0)){ # Validate target argument if(class(target) != "character"){ stop("`target` must be of class 'character'") } if(nchar(target) != 5){ stop("`target` must have exactly 5 characters") } if(length(target) != 1){ stop("`target` must be a character vector of length 1") } # Validate logical arguments if(class(game_over) != "logical" | class(game_won) != "logical"){ stop("`game_over` and `game_won` must both be of class 'logical'") } # Validate guess if(class(guess) != "list" | length(guess) != 6 | !all(unlist(lapply(guess, function(x) length(x) == 5)))){ stop("`guess` must be a list with six items, ", "each of which is a character vector of length 5") } # Validate assess if(class(assess) != "list" | length(assess) != 6 | !all(unlist(lapply(assess, function(x) length(x) == 5)))){ stop("`assess` must be a list with six items, ", "each of which is a character vector of length 5") } # Validate keyboard if(class(keyboard) != "list" | length(keyboard) != 3){ stop("`keyboard` must be a list with three items") } # Validate letters in word vectors if(class(letters_known_not_in_word) != "character" | class(letters_known_in_word) != "character" | class(letters_known_in_position) != "character"){ stop("`letters_known_not_in_word`, `letters_known_in_word`, and ", "`letters_known_in_position` must all be character vectors") } # Build list to represent game state wordler <- list(target = target, game_over = game_over, game_won = game_won, guess_count = guess_count, guess = guess, assess = assess, keyboard = keyboard, letters_known_not_in_word = letters_known_not_in_word, letters_known_in_word = letters_known_in_word, letters_known_in_position = letters_known_in_position) # Set class and return class(wordler) <- "wordler" wordler } #' Establish if guess is correct and set game state accordingly #' #' Compares the guess in \code{game$guess} (index from \code{game$guess_count}) #' with the corresponding target word in \code{game$target}. If the guess is #' equal to the target, \code{game$game_won} and \code{game$game_over} are #' both set to \code{TRUE}. #' #' @param game 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' #' @return A 'wordler' game object. #' #' @examples is_guess_correct <- function(game){ # Confirm wordler object if(!is.wordler(game)){ stop("`game` argument must be of class 'wordler'.") } # Get required items from game object guess <- game$guess[[game$guess_count]] target <- unlist(strsplit(game$target, "")) # Set game state if guess is correct if(all(guess == target)){ game$game_over <- TRUE game$game_won <- TRUE } game } #' Submit a guess word to a wordler game object #' #' If \code{x} is a valid guess, it is added to \code{game$guess} and assessed #' against the target word. Increments game$guess_count if a valid guess is made. #' #' @param x the guess. #' @param game 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' @param allowed_words a character vector of valid words for the guess. x #' must be in this vector to be allowed. Defaults to words used by the WORDLE #' game online (?wordler::wordle_allowed) if not provided. #' #' @return A 'wordler' game object. #' @export #' #' @examples have_a_guess <- function(x, game, allowed_words = NULL){ # Confirm wordler object if(!is.wordler(game)){ stop("`game` argument must be of class 'wordler'.") } # Game must not be already over if(game$game_over){ message("The game is already over. ", "Start a new one if you want to play again.") return(game) } # Default allowed_words if(is.null(allowed_words)){ allowed_words <- c(wordler::wordle_answers, wordler::wordle_allowed) } # Guess must be in word list if(!(x %in% allowed_words)){ message("Your word isn't in the list of valid words. Try again.") } else { # Player has used a guess game$guess_count <- game$guess_count + 1 # Add guess to game game$guess[[game$guess_count]] <- unlist(strsplit(x, "")) # Assess guess game <- assess_guess(game) # Update known letters game <- update_letters_known_not_in_word(game) game <- update_letters_known_in_word(game) game <- update_letters_known_in_position(game) # Is guess correct? game <- is_guess_correct(game) # Are guesses all used? if(game$guess_count == 6){ game$game_over <- TRUE } } game } #' Prints instructions to play a wordler game in the console #' #' @return No return value. #' #' @examples print_instructions <- function(){ # Introductory instructions cat("Guess the WORDLE in 6 tries.\n\n") cat("After each guess, the color of the letters will change to show how", "close your guess was to the word. e.g.\n\n") cat(crayon::green("W"), "E A R Y\n") cat("The letter W is in the word and in the correct spot\n\n") cat("P I", crayon::yellow("L"), "O T\n") cat("The letter L is in the word but in the wrong spot\n\n") cat("V A G U E\n") cat("None of the letters are in the word\n\n") } #' Prints a wordler game to the console. #' #' @param x 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' @param ... additional arguments #' #' @return No return value. #' #' @export #' #' @examples print.wordler <- function(x, ...){ game <- x keyboard <- game$keyboard # Determine which letters are known to be in word, not in word, or in position keyboard_letter_not_in_word <- lapply(game$keyboard, function(x) x %in% game$letters_known_not_in_word) keyboard_letter_in_word <- lapply(game$keyboard, function(x) x %in% game$letters_known_in_word) keyboard_letter_in_position <- lapply(game$keyboard, function(x) x %in% game$letters_known_in_position) # Print game state to console cat("\n") # Loop through all guesses for (i in 1:6) { cat(" ") # Loop through letters in each guess for(j in 1:5){ if(game$assess[[i]][j] == "in_position"){ cat(crayon::green$bold(game$guess[[i]][j])) } else if (game$assess[[i]][j] == "in_word") { cat(crayon::yellow$bold(game$guess[[i]][j])) } else { cat(crayon::bold(game$guess[[i]][j])) } cat(" ") } if(i == 2){ # Display top row of keyboard cat(" ") for(j in seq_along(keyboard[[1]])){ if(keyboard_letter_in_position[[1]][j]){ cat(crayon::green(keyboard[[1]][j], " ")) } else if (keyboard_letter_in_word[[1]][j]){ cat(crayon::yellow(keyboard[[1]][j], " ")) } else if (keyboard_letter_not_in_word[[1]][j]){ cat(crayon::yellow(" ", " ")) } else { cat(keyboard[[1]][j], " ") } } } if(i == 3){ # Display middle row of keyboard cat(" ") for(j in seq_along(keyboard[[2]])){ if(keyboard_letter_in_position[[2]][j]){ cat(crayon::green(keyboard[[2]][j], " ")) } else if (keyboard_letter_in_word[[2]][j]){ cat(crayon::yellow(keyboard[[2]][j], " ")) } else if (keyboard_letter_not_in_word[[2]][j]){ cat(crayon::yellow(" ", " ")) } else { cat(keyboard[[2]][j], " ") } } } if(i == 4){ # Display bottom row of keyboard cat(" ") for(j in seq_along(keyboard[[3]])){ if(keyboard_letter_in_position[[3]][j]){ cat(crayon::green(keyboard[[3]][j], " ")) } else if (keyboard_letter_in_word[[3]][j]){ cat(crayon::yellow(keyboard[[3]][j], " ")) } else if (keyboard_letter_not_in_word[[3]][j]){ cat(crayon::yellow(" ", " ")) } else { cat(keyboard[[3]][j], " ") } } } cat("\n") } cat("\n") } #' Establish which letters are known to be in the target word #' #' For all items in \code{game$guess}, establishes the letters which are now #' known to be in the target word. These are present as a character vector in #' \code{game$letters_known_in_word} in the returned object. #' #' @param game 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' #' @return A 'wordler' game object. #' #' @examples update_letters_known_in_word <- function(game){ # Confirm wordler object if(!is.wordler(game)){ stop("`game` argument must be of class 'wordler'.") } # Target word represented as character vector target <- unlist(strsplit(game$target, "")) # Establish letters from all guesses which are in target word letters_known_in_word <- lapply(game$guess, function(x) x[x %in% target]) letters_known_in_word <- unique(unlist(letters_known_in_word)) # Add to game object and return game$letters_known_in_word <- letters_known_in_word game } #' Establish which letters are known to _not_ be in the target word #' #' For all items in \code{game$guess}, establishes the letters which are now #' known to not be in the target word. These are present as a character vector #' in \code{game$letters_known_not_in_word} in the returned object. #' #' @param game 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' #' @return A 'wordler' game object. #' #' @examples update_letters_known_not_in_word <- function(game){ # Confirm wordler object if(!is.wordler(game)){ stop("`game` argument must be of class 'wordler'.") } # Target word represented as character vector target <- unlist(strsplit(game$target, "")) # Establish letters from all guesses which are not in target word letters_known_not_in_word <- lapply(game$guess, function(x) x[!x %in% target]) letters_known_not_in_word <- unique(unlist(letters_known_not_in_word)) # Remove underscore (used as blanks for guesses not yet made) letters_known_not_in_word <- letters_known_not_in_word[!letters_known_not_in_word == "_"] # Add to game object and return game$letters_known_not_in_word <- letters_known_not_in_word game } #' Establish which letters are known to be in the correct position in the target #' word #' #' For all items in \code{game$guess}, establishes the letters which are now #' known to be in the correct position in the target word. These are present as #' a character vector in \code{game$letters_known_in_position} in the returned #' object. #' #' @param game 'wordler' game object (as generated by #' \code{\link{new_wordler}}). #' #' @return A 'wordler' game object. #' #' @examples update_letters_known_in_position <- function(game){ # Confirm wordler object if(!is.wordler(game)){ stop("`game` argument must be of class 'wordler'.") } letters_known_in_position <- mapply(function(guess, assess) guess[assess == "in_position"], guess = game$guess, assess = game$assess) letters_known_in_position <- unlist(letters_known_in_position) letters_known_in_position <- unique(letters_known_in_position) game$letters_known_in_position <- letters_known_in_position game } #' Detects wordler objects #' #' @param x an R object #' @param ... additional arguments #' #' @return Returns \code{TRUE} if x is a 'wordler' object, otherwise #' \code{FALSE}. #' #' @export #' #' @examples is.wordler <- function(x, ...) { class(x) == "wordler" }
/scratch/gouwar.j/cran-all/cranData/wordler/R/wordler.R
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ## ----setup-------------------------------------------------------------------- library(wordler) ## ----------------------------------------------------------------------------- # Commented out so vignette can be generated #play_wordler() ## ----------------------------------------------------------------------------- game <- new_wordler() ## ----------------------------------------------------------------------------- game <- have_a_guess("SQUID", game, allowed_words = c(wordle_answers, wordle_allowed)) game <- have_a_guess("VIDEO", game, allowed_words = c(wordle_answers, wordle_allowed)) ## ----------------------------------------------------------------------------- game <- have_a_guess("DONKY", game, allowed_words = c(wordle_answers, wordle_allowed)) ## ----------------------------------------------------------------------------- game$game_over ## ----------------------------------------------------------------------------- game$game_won ## ----------------------------------------------------------------------------- game$guess_count ## ----------------------------------------------------------------------------- game$target ## ----------------------------------------------------------------------------- game$guess ## ----------------------------------------------------------------------------- game$assess ## ----------------------------------------------------------------------------- print(game) ## ----------------------------------------------------------------------------- game$letters_known_not_in_word ## ----------------------------------------------------------------------------- game$letters_known_in_word ## ----------------------------------------------------------------------------- game$letters_known_in_position
/scratch/gouwar.j/cran-all/cranData/wordler/inst/doc/introduction_to_wordler.R
--- title: "Introduction to wordler" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction to wordler} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This package lets you play a version of the [WORDLE game](https://www.powerlanguage.co.uk/wordle/) in R. You can either play interactively in the console, or programmatically to explore different solvers (for example). Players must attempt to correctly guess a five-letter word in (at most) six attempts. After each guess, the letters are coloured to indicate how well the guess matches the target word. Green letters are in the word and in the right position (but may be repeated elsewhere). Yellow letters are present (at least once) somewhere in the target word. Each guess must be a valid word. # Load the package First, load the package. ```{r setup} library(wordler) ``` # Playing a game in the console To play a game in the console, call the `play_wordler()` function. ```{r} # Commented out so vignette can be generated #play_wordler() ``` # Playing a game programmatically ## Initialise a new game First, initialise a new game. ```{r} game <- new_wordler() ``` This returns a list which represents the game state. We'll have a look at the items in this list after we've made a few guesses. ## Make a guess Use `have_a_guess()` to submit a guess of the target word. We'll make a few guesses below. ```{r} game <- have_a_guess("SQUID", game, allowed_words = c(wordle_answers, wordle_allowed)) game <- have_a_guess("VIDEO", game, allowed_words = c(wordle_answers, wordle_allowed)) ``` The `allowed_words` argument is a character vector used to validate the guesses. The guess must be present in this vector to be permitted. If the guess is not in `allowed_words`, a message is displayed and you can have another go. ```{r} game <- have_a_guess("DONKY", game, allowed_words = c(wordle_answers, wordle_allowed)) ``` ## Game state Now let's look what's happening in the game object. We have an item which represents whether the game is over, or still in play. ```{r} game$game_over ``` This is set to `TRUE` if either the word is correctly guessed, or all guesses are used. The `game_won` item indicates if the target word has been guessed correctly. ```{r} game$game_won ``` The number of guesses made so far is held in the `guess_count` item. ```{r} game$guess_count ``` The word we're trying to guess is held in the `target` item. ```{r} game$target ``` The list of guesses so far is available in the `guess` item. ```{r} game$guess ``` The `assess` item is a list holding the assessments of each guess. ```{r} game$assess ``` At any time, we can print the status of the game as follows. ```{r} print(game) ``` A vector of letters known to _not_ be in the target word is available. ```{r} game$letters_known_not_in_word ``` A vector of letters known to be in the target word is available. ```{r} game$letters_known_in_word ``` A vector of letters known to be in the right position in the target word is available. ```{r} game$letters_known_in_position ```
/scratch/gouwar.j/cran-all/cranData/wordler/inst/doc/introduction_to_wordler.Rmd
--- title: "Introduction to wordler" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Introduction to wordler} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This package lets you play a version of the [WORDLE game](https://www.powerlanguage.co.uk/wordle/) in R. You can either play interactively in the console, or programmatically to explore different solvers (for example). Players must attempt to correctly guess a five-letter word in (at most) six attempts. After each guess, the letters are coloured to indicate how well the guess matches the target word. Green letters are in the word and in the right position (but may be repeated elsewhere). Yellow letters are present (at least once) somewhere in the target word. Each guess must be a valid word. # Load the package First, load the package. ```{r setup} library(wordler) ``` # Playing a game in the console To play a game in the console, call the `play_wordler()` function. ```{r} # Commented out so vignette can be generated #play_wordler() ``` # Playing a game programmatically ## Initialise a new game First, initialise a new game. ```{r} game <- new_wordler() ``` This returns a list which represents the game state. We'll have a look at the items in this list after we've made a few guesses. ## Make a guess Use `have_a_guess()` to submit a guess of the target word. We'll make a few guesses below. ```{r} game <- have_a_guess("SQUID", game, allowed_words = c(wordle_answers, wordle_allowed)) game <- have_a_guess("VIDEO", game, allowed_words = c(wordle_answers, wordle_allowed)) ``` The `allowed_words` argument is a character vector used to validate the guesses. The guess must be present in this vector to be permitted. If the guess is not in `allowed_words`, a message is displayed and you can have another go. ```{r} game <- have_a_guess("DONKY", game, allowed_words = c(wordle_answers, wordle_allowed)) ``` ## Game state Now let's look what's happening in the game object. We have an item which represents whether the game is over, or still in play. ```{r} game$game_over ``` This is set to `TRUE` if either the word is correctly guessed, or all guesses are used. The `game_won` item indicates if the target word has been guessed correctly. ```{r} game$game_won ``` The number of guesses made so far is held in the `guess_count` item. ```{r} game$guess_count ``` The word we're trying to guess is held in the `target` item. ```{r} game$target ``` The list of guesses so far is available in the `guess` item. ```{r} game$guess ``` The `assess` item is a list holding the assessments of each guess. ```{r} game$assess ``` At any time, we can print the status of the game as follows. ```{r} print(game) ``` A vector of letters known to _not_ be in the target word is available. ```{r} game$letters_known_not_in_word ``` A vector of letters known to be in the target word is available. ```{r} game$letters_known_in_word ``` A vector of letters known to be in the right position in the target word is available. ```{r} game$letters_known_in_position ```
/scratch/gouwar.j/cran-all/cranData/wordler/vignettes/introduction_to_wordler.Rmd
.jevalIterator <- function(x) { r <- NULL while(.jcall(x, "Z", "hasNext")) r <- c(r, list(.jcall(x, "Ljava/lang/Object;", "next"))) r }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/AAA.R
initDict <- function(pathData = "") { validPath <- FALSE for(path in c(## Try user supplied path pathData, ## Try WNHOME (UNIX) environment variable file.path(Sys.getenv("WNHOME"), "dict"), ## Windows editions provide a registry key ## Try UNIX Wordnet 3.0 default path "/usr/local/WordNet-3.0/dict", ## Try UNIX Wordnet 2.1 default path "/usr/local/WordNet-2.1/dict", ## Try Debian WordNet default path "/usr/share/wordnet" )) { .jcall("com.nexagis.jawbone.Dictionary", "V", "initialize", path) validPath <- .jcall("com.nexagis.jawbone.Dictionary", "Z", "pathIsValid") if(validPath) break } if(!validPath) warning("cannot find WordNet 'dict' directory: please set the environment variable WNHOME to its parent") validPath } getDictInstance <- function() { .jnew("com.nexagis.jawbone.Dictionary") } setDict <- function(pathData) { if(initDict(pathData)) dict(getDictInstance()) else stop("could not find WordNet installation") } getDict <- function() { if(!is.null(d <- dict())) d else stop("could not find Wordnet dictionary") } getIndexTerms <- function(pos, maxLimit, filter) { pos <- .expand_synset_type(pos[1L]) iterator <- .jcall(getDict(), "Ljava/util/Iterator;", "getIndexTermIterator", .jfield("com.nexagis.jawbone.PartOfSpeech", "Lcom/nexagis/jawbone/PartOfSpeech;", pos), as.integer(maxLimit), .jcast(filter, "com.nexagis.jawbone.filter.TermFilter")) .jevalIterator(iterator) } WN_synset_types <- c("NOUN", "VERB", "ADJECTIVE", "ADJECTIVE_SATELLITE", "ADVERB") .expand_synset_type <- function(x) { y <- charmatch(x, WN_synset_types) if(is.na(y)) stop(sprintf("Unknown synset type '%s'", x)) if(y == 0) { if(nchar(x) < 3L) stop(sprintf("Ambiguous synset type abbrev '%s'", x)) if(substring(x, 3L, 3L) == "J") "ADJECTIVE" else "ADVERB" } else { WN_synset_types[y] } }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/dictionary.R
getLemma <- function(indexterm) { .jcall(indexterm, "S", "getLemma") } getSynsets <- function(indexterm) { .jcall(indexterm, "[Lcom/nexagis/jawbone/Synset;", "getSynsets") } getSynonyms <- function(indexterm) { synsets <- .jcall(indexterm, "[Lcom/nexagis/jawbone/Synset;", "getSynsets") sort(unique(unlist(lapply(synsets, getWord)))) }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/indexterm.R
dict <- local({ d <- NULL function(new, ...) { if (!missing(new)) d <<- new else d } }) .onLoad <- function(libname, pkgname) { .jpackage(pkgname, lib.loc = libname) if (initDict()) dict(getDictInstance()) }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/init.R
synonyms <- function(word, pos) { filter <- getTermFilter("ExactMatchFilter", word, TRUE) terms <- getIndexTerms(pos, 1L, filter) if (is.null(terms)) character() else getSynonyms(terms[[1L]]) }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/synonyms.R
getRelatedSynsets <- function(synset, pointerSymbol) { l <- .jcall(synset, "Ljava/util/List;", "getRelatedSynsets", pointerSymbol) if(is.null(l)) return(list()) iterator <- .jcall(l, "Ljava/util/Iterator;", "iterator") i <- .jevalIterator(iterator) lapply(i, .jcast, "Lcom/nexagis/jawbone/Synset;") } getWord <- function(synset) { l <- .jcall(synset, "Ljava/util/List;", "getWord") iterator <- .jcall(l, "Ljava/util/Iterator;", "iterator") i <- .jevalIterator(iterator) i <- lapply(i, .jcast, "Lcom/nexagis/jawbone/WordData;") sapply(i, .jcall, "S", "getWord") }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/synsets.R
WN_filter_types <- c("ContainsFilter", "EndsWithFilter", "ExactMatchFilter", "RegexFilter", "SoundFilter", "StartsWithFilter", "WildcardFilter") getFilterTypes <- function() WN_filter_types getTermFilter <- function(type, word, ignoreCase) { type <- .expand_filter_type(type[1L]) .jnew(paste0("com.nexagis.jawbone.filter.", type), word, ignoreCase) } .expand_filter_type <- function(x) { y <- pmatch(tolower(x), tolower(WN_filter_types)) if(is.na(y)) stop(sprintf("Unknown filter type '%s'", x)) WN_filter_types[y] }
/scratch/gouwar.j/cran-all/cranData/wordnet/R/termfilter.R
### R code from vignette source 'wordnet.Rnw' ################################################### ### code chunk number 1: wordnet.Rnw:15-16 ################################################### options(width = 75) ################################################### ### code chunk number 2: wordnet.Rnw:46-47 ################################################### library("wordnet") ################################################### ### code chunk number 3: wordnet.Rnw:66-67 ################################################### getFilterTypes() ################################################### ### code chunk number 4: wordnet.Rnw:77-80 (eval = FALSE) ################################################### ## filter <- getTermFilter("StartsWithFilter", "car", TRUE) ## terms <- getIndexTerms("NOUN", 5, filter) ## sapply(terms, getLemma) ################################################### ### code chunk number 5: wordnet.Rnw:91-94 (eval = FALSE) ################################################### ## filter <- getTermFilter("ExactMatchFilter", "company", TRUE) ## terms <- getIndexTerms("NOUN", 1, filter) ## getSynonyms(terms[[1]]) ################################################### ### code chunk number 6: wordnet.Rnw:102-103 (eval = FALSE) ################################################### ## synonyms("company", "NOUN") ################################################### ### code chunk number 7: wordnet.Rnw:114-119 (eval = FALSE) ################################################### ## filter <- getTermFilter("ExactMatchFilter", "hot", TRUE) ## terms <- getIndexTerms("ADJECTIVE", 1, filter) ## synsets <- getSynsets(terms[[1]]) ## related <- getRelatedSynsets(synsets[[1]], "!") ## sapply(related, getWord)
/scratch/gouwar.j/cran-all/cranData/wordnet/inst/doc/wordnet.R
# Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #' @importFrom rlang %||% rlang::`%||%` #' @importFrom fastmatch %fin% fastmatch::`%fin%` #' @importFrom wordpiece.data wordpiece_vocab #' @export wordpiece.data::wordpiece_vocab
/scratch/gouwar.j/cran-all/cranData/wordpiece/R/imports.R
# Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # wordpiece_tokenize ---------------------------------------------------- #' Tokenize Sequence with Word Pieces #' #' Given a sequence of text and a wordpiece vocabulary, tokenizes the text. #' #' @inheritParams .wp_tokenize_single_string #' @param text Character; text to tokenize. #' #' @return A list of named integer vectors, giving the tokenization of the input #' sequences. The integer values are the token ids, and the names are the #' tokens. #' @export #' #' @examples #' tokens <- wordpiece_tokenize( #' text = c( #' "I love tacos!", #' "I also kinda like apples." #' ) #' ) wordpiece_tokenize <- function(text, vocab = wordpiece_vocab(), unk_token = "[UNK]", max_chars = 100) { is_cased <- .get_casedness(vocab) vocab <- .process_vocab(vocab) if (!is_cased) { text <- tolower(text) } text <- piecemaker::prepare_and_tokenize( text = text, prepare = TRUE, remove_terminal_hyphens = FALSE ) tokens <- lapply( X = text, FUN = .wp_tokenize_single_string, vocab = vocab, unk_token = unk_token, max_chars = max_chars ) return(tokens) } # .wp_tokenize_single_string ------------------------------------------------- #' Tokenize an Input Word-by-word #' #' @param words Character; a vector of words (generated by space-tokenizing a #' single input). #' @inheritParams .wp_tokenize_word #' #' @return A named integer vector of tokenized words. #' @keywords internal .wp_tokenize_single_string <- function(words, vocab, unk_token, max_chars) { token_vector <- unlist( lapply( X = words, FUN = .wp_tokenize_word, vocab = vocab, unk_token = unk_token, max_chars = max_chars ) ) # Get IDs by position. ids <- fastmatch::fmatch(token_vector, vocab) names(ids) <- token_vector return(ids - 1L) # default to 0-based index, for historical consistency } # .wp_tokenize_word ----------------------------------------------------------- #' Tokenize a Word #' #' Tokenize a single "word" (no whitespace). The word can technically contain #' punctuation, but in BERT's tokenization, punctuation has been split out by #' this point. #' #' @param word Word to tokenize. #' @param vocab Character vector of vocabulary tokens. The tokens are assumed to #' be in order of index, with the first index taken as zero to be compatible #' with Python implementations. #' @param unk_token Token to represent unknown words. #' @param max_chars Maximum length of word recognized. #' #' @return Input word as a list of tokens. #' @keywords internal .wp_tokenize_word <- function(word, vocab, unk_token = "[UNK]", max_chars = 100) { word_len <- stringi::stri_length(word) if (word_len > max_chars) { return(unk_token) } if (word %fin% vocab) { return(word) } is_bad <- FALSE start <- 1 sub_tokens <- character(0) while (start <= word_len) { end <- word_len cur_substr <- NA_character_ while (start <= end) { sub_str <- substr(word, start, end) # inclusive on both ends if (start > 1) { # means this substring is a suffix, so add '##' sub_str <- paste0("##", sub_str) } if (sub_str %fin% vocab) { cur_substr <- sub_str break } end <- end - 1 } if (is.na(cur_substr)) { is_bad <- TRUE # nocov break # nocov } sub_tokens <- append(sub_tokens, cur_substr) start <- end + 1 # pick up where we left off } if (is_bad) { return(unk_token) # nocov } return(sub_tokens) } # .process_vocab ----------------------------------------------------------- #' Process a Vocabulary for Tokenization #' #' @param v An object of class `wordpiece_vocabulary` or a character vector. #' #' @return A character vector of tokens for tokenization. #' @keywords internal .process_vocab <- function(v) { UseMethod(".process_vocab", v) } #' @rdname dot-process_vocab #' @keywords internal #' @export .process_vocab.default <- function(v) { stop("Unsupported vocabulary type. ", "The vocabulary should be a character vector ", "or an object of type `wordpiece_vocabulary.` ", "To use the default wordpiece vocabulary, see `wordpiece_vocab()`.") } #' @rdname dot-process_vocab #' @keywords internal #' @export .process_vocab.wordpiece_vocabulary <- function(v) { return(.process_wp_vocab(v)) } #' @rdname dot-process_vocab #' @keywords internal #' @export .process_vocab.character <- function(v) { return(v) } #' Process a Wordpiece Vocabulary for Tokenization #' #' @param v An object of class `wordpiece_vocabulary`. #' #' @return A character vector of tokens for tokenization. #' @keywords internal .process_wp_vocab <- function(v) { UseMethod(".process_wp_vocab", v) } #' @rdname dot-process_wp_vocab #' @keywords internal #' @export .process_wp_vocab.default <- function(v) { stop("Unsupported vocabulary type. ", "The vocabulary should be an object of type `wordpiece_vocabulary.` ", "To use the default wordpiece vocabulary, see `wordpiece_vocab()`.") } #' @rdname dot-process_wp_vocab #' @keywords internal #' @export .process_wp_vocab.wordpiece_vocabulary <- function(v) { NextMethod() } #' @rdname dot-process_wp_vocab #' @keywords internal #' @export .process_wp_vocab.integer <- function(v) { return(names(v)[order(v)]) } #' @rdname dot-process_wp_vocab #' @keywords internal #' @export .process_wp_vocab.character <- function(v) { return(v) } # .get_casedness ---------------------------------------------------------- #' Determine Casedness of Vocabulary #' #' @param v An object of class `wordpiece_vocabulary`, or a character vector. #' #' @return TRUE if the vocabulary is case-sensitive, FALSE otherwise. #' @keywords internal .get_casedness <- function(v) { UseMethod(".get_casedness", v) } #' @rdname dot-get_casedness #' @keywords internal #' @export .get_casedness.default <- function(v) { stop("Unsupported vocabulary type. ", "The vocabulary should be a character vector ", "or an object of type `wordpiece_vocabulary.` ", "To use the default wordpiece vocabulary, see `wordpiece_vocab()`.") } #' @rdname dot-get_casedness #' @keywords internal #' @export .get_casedness.wordpiece_vocabulary <- function(v) { return(attr(v, "is_cased")) } #' @rdname dot-get_casedness #' @keywords internal #' @export .get_casedness.character <- function(v) { return(.infer_case_from_vocab(v)) }
/scratch/gouwar.j/cran-all/cranData/wordpiece/R/tokenization.R
# Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # wordpiece_cache_dir -------------------------------------------------- #' Retrieve Directory for wordpiece Cache #' #' The wordpiece cache directory is a platform- and user-specific path where #' wordpiece saves caches (such as a downloaded vocabulary). You can override #' the default location in a few ways: \itemize{ \item{Option: #' \code{wordpiece.dir}}{Use \code{\link{set_wordpiece_cache_dir}} to set a #' specific cache directory for this session} \item{Environment: #' \code{WORDPIECE_CACHE_DIR}}{Set this environment variable to specify a #' wordpiece cache directory for all sessions.} \item{Environment: #' \code{R_USER_CACHE_DIR}}{Set this environment variable to specify a cache #' directory root for all packages that use the caching system.} } #' #' @return A character vector with the normalized path to the cache. #' @export wordpiece_cache_dir <- function() { return(dlr::app_cache_dir("wordpiece")) # nocov } #' Set a Cache Directory for wordpiece #' #' Use this function to override the cache path used by wordpiece for the #' current session. Set the \code{WORDPIECE_CACHE_DIR} environment variable #' for a more permanent change. #' #' @param cache_dir Character scalar; a path to a cache directory. #' #' @return A normalized path to a cache directory. The directory is created if #' the user has write access and the directory does not exist. #' @export set_wordpiece_cache_dir <- function(cache_dir = NULL) { return(dlr::set_app_cache_dir("wordpiece", cache_dir)) # nocov }
/scratch/gouwar.j/cran-all/cranData/wordpiece/R/utils.R
# Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # load_vocab -------------------------------------------------------------- #' Load a vocabulary file #' #' @param vocab_file path to vocabulary file. File is assumed to be a text file, #' with one token per line, with the line number corresponding to the index of #' that token in the vocabulary. #' #' @return The vocab as a character vector of tokens. The casedness of the #' vocabulary is inferred and attached as the "is_cased" attribute. The #' vocabulary indices are taken to be the positions of the tokens, #' *starting at zero* for historical consistency. #' #' Note that from the perspective of a neural net, the numeric indices *are* #' the tokens, and the mapping from token to index is fixed. If we changed the #' indexing (the order of the tokens), it would break any pre-trained models. #' #' @export #' @examples #' # Get path to sample vocabulary included with package. #' vocab_path <- system.file("extdata", "tiny_vocab.txt", package = "wordpiece") #' vocab <- load_vocab(vocab_file = vocab_path) load_vocab <- function(vocab_file) { token_list <- readLines(vocab_file) return(prepare_vocab(token_list)) } #' Format a Token List as a Vocabulary #' #' We use a special named integer vector with class wordpiece_vocabulary to #' provide information about tokens used in \code{\link{wordpiece_tokenize}}. #' This function takes a character vector of tokens and puts it into that #' format. #' #' @param token_list A character vector of tokens. #' #' @return The vocab as a character vector of tokens. The casedness of the #' vocabulary is inferred and attached as the "is_cased" attribute. The #' vocabulary indices are taken to be the positions of the tokens, #' *starting at zero* for historical consistency. #' #' Note that from the perspective of a neural net, the numeric indices *are* #' the tokens, and the mapping from token to index is fixed. If we changed the #' indexing (the order of the tokens), it would break any pre-trained models. #' @export #' @examples #' my_vocab <- prepare_vocab(c("some", "example", "tokens")) #' class(my_vocab) #' attr(my_vocab, "is_cased") prepare_vocab <- function(token_list) { token_list <- piecemaker::validate_utf8(trimws(token_list)) is_cased <- .infer_case_from_vocab(token_list) vocab_all <- .new_wordpiece_vocabulary( vocab = token_list, is_cased = is_cased ) return(.validate_wordpiece_vocabulary(vocab = vocab_all)) } # load_or_retrieve_vocab ------------------------------------------------------ #' Load a vocabulary file, or retrieve from cache #' #' @inheritParams load_vocab #' #' @return The vocab as a character vector of tokens. The casedness of the #' vocabulary is inferred and attached as the "is_cased" attribute. The #' vocabulary indices are taken to be the positions of the tokens, #' *starting at zero* for historical consistency. #' #' Note that from the perspective of a neural net, the numeric indices *are* #' the tokens, and the mapping from token to index is fixed. If we changed the #' indexing (the order of the tokens), it would break any pre-trained models. #' #' @export load_or_retrieve_vocab <- function(vocab_file) { return( # nocov start dlr::read_or_cache( source_path = vocab_file, appname = "wordpiece", process_f = load_vocab ) ) # nocov end } # .infer_case_from_vocab -------------------------------------------------- #' Determine Vocabulary Casedness #' #' Determine whether or not a wordpiece vocabulary is case-sensitive. #' #' If none of the tokens in the vocabulary start with a capital letter, it will #' be assumed to be uncased. Note that tokens like "\\[CLS\\]" contain uppercase #' letters, but don't start with uppercase letters. #' #' @param vocab The vocabulary as a character vector. #' @return TRUE if the vocabulary is cased, FALSE if uncased. #' #' @keywords internal .infer_case_from_vocab <- function(vocab) { is_cased <- any(grepl(pattern = "^[A-Z]", vocab)) return(is_cased) } # .new_wordpiece_vocabulary -------------------------------------------------- #' Constructor for Class wordpiece_vocabulary #' #' @param vocab Character vector of tokens. #' @param is_cased Logical; whether the vocabulary is cased. #' @return The vocabulary with `is_cased` attached as an attribute, and the #' class `wordpiece_vocabulary` applied. #' #' @keywords internal .new_wordpiece_vocabulary <- function(vocab, is_cased) { return( structure( vocab, "is_cased" = is_cased, class = c("wordpiece_vocabulary", "character") ) ) } # .validate_wordpiece_vocabulary ---------------------------------------------- #' Validator for Objects of Class wordpiece_vocabulary #' #' @param vocab wordpiece_vocabulary object to validate #' @return \code{vocab} if the object passes the checks. Otherwise, abort with #' message. #' #' @keywords internal .validate_wordpiece_vocabulary <- function(vocab) { if (length(vocab) == 0) { stop("Empty vocabulary.") } if (anyDuplicated(vocab) > 0) { stop("Duplicate tokens found in vocabulary.") } if (any(grepl("\\s", vocab))) { stop("Whitespace found in vocabulary tokens.") } return(vocab) }
/scratch/gouwar.j/cran-all/cranData/wordpiece/R/vocab.R
.onLoad <- function(libname, pkgname) { # nocov start .process_wp_vocab.integer <<- memoise::memoise(.process_wp_vocab.integer) .infer_case_from_vocab <<- memoise::memoise(.infer_case_from_vocab) } # nocov end
/scratch/gouwar.j/cran-all/cranData/wordpiece/R/zzz.R
## ---- include = FALSE--------------------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ## ----default-vocabs----------------------------------------------------------- library(wordpiece) # The default vocabulary is uncased. wordpiece_tokenize( "I like tacos!" ) # A cased vocabulary is also provided. wordpiece_tokenize( "I like tacos!", vocab = wordpiece_vocab(cased = TRUE) ) ## ----example0----------------------------------------------------------------- # Get path to sample vocabulary included with package. vocab_path <- system.file("extdata", "tiny_vocab.txt", package = "wordpiece") # Load the vocabulary. vocab <- load_vocab(vocab_path) # Take a peek at the vocabulary. head(vocab) ## ----example1----------------------------------------------------------------- # Now tokenize some text! wordpiece_tokenize(text = "I love tacos, apples, and tea!", vocab = vocab) ## ----example2----------------------------------------------------------------- # The above vocabulary was uncased. attr(vocab, "is_cased") # Here is the same vocabulary, but containing the capitalized token "Hi". vocab_path2 <- system.file("extdata", "tiny_vocab_cased.txt", package = "wordpiece") vocab_cased <- load_vocab(vocab_path2) head(vocab_cased) # vocab_cased is inferred to be case-sensitive... attr(vocab_cased, "is_cased") # ... so the tokenization will *not* convert strings to lowercase, and so the # words "I" and "And" are not found in the vocabulary (though "and" still is). wordpiece_tokenize(text = "And I love tacos and salsa!", vocab = vocab_cased) ## ----example3----------------------------------------------------------------- wordpiece_tokenize(text = "I love tacos!", vocab = vocab_cased, unk_token = "[missing]")
/scratch/gouwar.j/cran-all/cranData/wordpiece/inst/doc/basic_usage.R
--- title: "Using wordpiece" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Using wordpiece} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- <!-- Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This package applies [WordPiece](https://arxiv.org/pdf/1609.08144v2.pdf) tokenization to input text, given an appropriate WordPiece vocabulary. The [BERT](https://arxiv.org/pdf/1810.04805.pdf) tokenization conventions are used. The basic tokenization algorithm is: - Put spaces around punctuation. - For each resulting word, if the word is found in the WordPiece vocabulary, keep it as-is. If not, starting from the beginning, pull off the biggest piece that *is* in the vocabulary, and prefix "##" to the remaining piece. Repeat until the entire word is represented by pieces from the vocabulary, if possible. - If the word can't be represented by vocabulary pieces, or if it exceeds a certain length, replace it with a specified "unknown" token. Ideally, a WordPiece vocabulary will be complete enough to represent any word, but this is not required. ## Provided Vocabularies Two vocabularies are provided via the {wordpiece.data} package. These are the wordpiece vocabularies used in Google Research's BERT models (and most models based on BERT). ```{r default-vocabs} library(wordpiece) # The default vocabulary is uncased. wordpiece_tokenize( "I like tacos!" ) # A cased vocabulary is also provided. wordpiece_tokenize( "I like tacos!", vocab = wordpiece_vocab(cased = TRUE) ) ``` ## Loading a Vocabulary For the rest of this vignette, we use a tiny vocabulary for illustrative purposes. You should not use this vocabulary for actual tokenization. The vocabulary is represented by the package as a named integer vector, with a logical attribute `is_cased` to indicate whether the vocabulary is case sensitive. The names are the actual tokens, and the integer values are the token indices. The integer values would be the input to a BERT model, for example. A vocabulary can be read from a text file containing a single token per line. The token index is taken to be the line number, *starting from zero*. These conventions are adopted for compatibility with the vocabulary and file format used in the pretrained BERT checkpoints released by Google Research. The casedness of the vocabulary is inferred from the content of the vocabulary. ```{r example0} # Get path to sample vocabulary included with package. vocab_path <- system.file("extdata", "tiny_vocab.txt", package = "wordpiece") # Load the vocabulary. vocab <- load_vocab(vocab_path) # Take a peek at the vocabulary. head(vocab) ``` When a text vocabulary is loaded with `load_or_retrieve_vocabulary` in an interactive R session, the option is given to cache the vocabulary as an RDS file for faster future loading. ## Tokenizing Text Tokenize text by calling `wordpiece_tokenize` on the text, passing the vocabulary as the `vocab` parameter. The output of `wordpiece_tokenize` is a named integer vector of token indices. ```{r example1} # Now tokenize some text! wordpiece_tokenize(text = "I love tacos, apples, and tea!", vocab = vocab) ``` ## Vocabulary Case The above vocabulary contained no tokens starting with an uppercase letter, so it was assumed to be uncased. When tokenizing text with an uncased vocabulary, the input is converted to lowercase before any other processing is applied. If the vocabulary contains at least one capitalized token, it will be taken as case-sensitive, and the case of the input text is preserved. Note that in a cased vocabulary, capitalized and uncapitalized versions of the same word are different tokens, and must *both* be included in the vocabulary to be recognized. ```{r example2} # The above vocabulary was uncased. attr(vocab, "is_cased") # Here is the same vocabulary, but containing the capitalized token "Hi". vocab_path2 <- system.file("extdata", "tiny_vocab_cased.txt", package = "wordpiece") vocab_cased <- load_vocab(vocab_path2) head(vocab_cased) # vocab_cased is inferred to be case-sensitive... attr(vocab_cased, "is_cased") # ... so the tokenization will *not* convert strings to lowercase, and so the # words "I" and "And" are not found in the vocabulary (though "and" still is). wordpiece_tokenize(text = "And I love tacos and salsa!", vocab = vocab_cased) ``` ## Representing "Unknown" Tokens Note that the default value for the `unk_token` argument, "[UNK]", is present in the above vocabularies, so it had an integer index in the tokenization. If that token were not in the vocabulary, its index would be coded as `NA`. ```{r example3} wordpiece_tokenize(text = "I love tacos!", vocab = vocab_cased, unk_token = "[missing]") ``` The package defaults are set to be compatible with BERT tokenization. If you have a different use case, be sure to check all parameter values.
/scratch/gouwar.j/cran-all/cranData/wordpiece/inst/doc/basic_usage.Rmd
--- title: "Using wordpiece" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Using wordpiece} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- <!-- Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` This package applies [WordPiece](https://arxiv.org/pdf/1609.08144v2.pdf) tokenization to input text, given an appropriate WordPiece vocabulary. The [BERT](https://arxiv.org/pdf/1810.04805.pdf) tokenization conventions are used. The basic tokenization algorithm is: - Put spaces around punctuation. - For each resulting word, if the word is found in the WordPiece vocabulary, keep it as-is. If not, starting from the beginning, pull off the biggest piece that *is* in the vocabulary, and prefix "##" to the remaining piece. Repeat until the entire word is represented by pieces from the vocabulary, if possible. - If the word can't be represented by vocabulary pieces, or if it exceeds a certain length, replace it with a specified "unknown" token. Ideally, a WordPiece vocabulary will be complete enough to represent any word, but this is not required. ## Provided Vocabularies Two vocabularies are provided via the {wordpiece.data} package. These are the wordpiece vocabularies used in Google Research's BERT models (and most models based on BERT). ```{r default-vocabs} library(wordpiece) # The default vocabulary is uncased. wordpiece_tokenize( "I like tacos!" ) # A cased vocabulary is also provided. wordpiece_tokenize( "I like tacos!", vocab = wordpiece_vocab(cased = TRUE) ) ``` ## Loading a Vocabulary For the rest of this vignette, we use a tiny vocabulary for illustrative purposes. You should not use this vocabulary for actual tokenization. The vocabulary is represented by the package as a named integer vector, with a logical attribute `is_cased` to indicate whether the vocabulary is case sensitive. The names are the actual tokens, and the integer values are the token indices. The integer values would be the input to a BERT model, for example. A vocabulary can be read from a text file containing a single token per line. The token index is taken to be the line number, *starting from zero*. These conventions are adopted for compatibility with the vocabulary and file format used in the pretrained BERT checkpoints released by Google Research. The casedness of the vocabulary is inferred from the content of the vocabulary. ```{r example0} # Get path to sample vocabulary included with package. vocab_path <- system.file("extdata", "tiny_vocab.txt", package = "wordpiece") # Load the vocabulary. vocab <- load_vocab(vocab_path) # Take a peek at the vocabulary. head(vocab) ``` When a text vocabulary is loaded with `load_or_retrieve_vocabulary` in an interactive R session, the option is given to cache the vocabulary as an RDS file for faster future loading. ## Tokenizing Text Tokenize text by calling `wordpiece_tokenize` on the text, passing the vocabulary as the `vocab` parameter. The output of `wordpiece_tokenize` is a named integer vector of token indices. ```{r example1} # Now tokenize some text! wordpiece_tokenize(text = "I love tacos, apples, and tea!", vocab = vocab) ``` ## Vocabulary Case The above vocabulary contained no tokens starting with an uppercase letter, so it was assumed to be uncased. When tokenizing text with an uncased vocabulary, the input is converted to lowercase before any other processing is applied. If the vocabulary contains at least one capitalized token, it will be taken as case-sensitive, and the case of the input text is preserved. Note that in a cased vocabulary, capitalized and uncapitalized versions of the same word are different tokens, and must *both* be included in the vocabulary to be recognized. ```{r example2} # The above vocabulary was uncased. attr(vocab, "is_cased") # Here is the same vocabulary, but containing the capitalized token "Hi". vocab_path2 <- system.file("extdata", "tiny_vocab_cased.txt", package = "wordpiece") vocab_cased <- load_vocab(vocab_path2) head(vocab_cased) # vocab_cased is inferred to be case-sensitive... attr(vocab_cased, "is_cased") # ... so the tokenization will *not* convert strings to lowercase, and so the # words "I" and "And" are not found in the vocabulary (though "and" still is). wordpiece_tokenize(text = "And I love tacos and salsa!", vocab = vocab_cased) ``` ## Representing "Unknown" Tokens Note that the default value for the `unk_token` argument, "[UNK]", is present in the above vocabularies, so it had an integer index in the tokenization. If that token were not in the vocabulary, its index would be coded as `NA`. ```{r example3} wordpiece_tokenize(text = "I love tacos!", vocab = vocab_cased, unk_token = "[missing]") ``` The package defaults are set to be compatible with BERT tokenization. If you have a different use case, be sure to check all parameter values.
/scratch/gouwar.j/cran-all/cranData/wordpiece/vignettes/basic_usage.Rmd
# Copyright 2021 Bedford Freeman & Worth Pub Grp LLC DBA Macmillan Learning. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #' Load a wordpiece Vocabulary #' #' A wordpiece vocabulary is a named integer vector with class #' "wordpiece_vocabulary". The names of the vector are the tokens, and the #' values are the integer identifiers of those tokens. The vocabulary is #' 0-indexed for compatibility with Python implementations. #' #' @return A wordpiece_vocabulary. #' @param cased Logical; load the uncased vocabulary, or the cased vocabulary? #' @export #' #' @examples #' head(wordpiece_vocab()) #' head(wordpiece_vocab(cased = TRUE)) wordpiece_vocab <- function(cased = FALSE) { filetype <- "uncased" n_tokens <- 30522L if (cased) { filetype <- "cased" n_tokens <- 28996L } return( .load_inst_rds(filetype, n_tokens) ) } #' Load an RDS from inst Dir #' #' @inheritParams .get_path #' #' @return The R object. #' @keywords internal .load_inst_rds <- function(filetype, n_tokens) { return(readRDS(.get_path(filetype, n_tokens))) } #' Generate the inst path #' #' @param filetype Character scalar; the type of file, like "uncased". #' @param n_tokens Integer scalar; The number of tokens used for that file. #' #' @return Character scalar; the path to the file. #' @keywords internal .get_path <- function(filetype, n_tokens) { return( system.file( "rds", paste0( paste( "wordpiece", filetype, n_tokens, sep = "_" ), ".rds" ), package = "wordpiece.data" ) ) }
/scratch/gouwar.j/cran-all/cranData/wordpiece.data/R/loaders.R
#' Base class for all other classes #' #' @description #' Provides a basic structure for processing text files. Also provides methods #' for reading and writing files and objects. #' #' @details #' It provides pre-processing, processing and post-processing methods, which #' need to be overridden by derived classes. #' #' The pre-processing function is called before reading a file. The process #' function is called for processing a given number of lines. The post #' processing function is called on the processed data. #' #' Also provides methods for reading and writing text files and R objects. All #' class methods are private. #' @export Base <- R6::R6Class( "Base", public = list( #' @description #' It initializes the current object. It is used to set the file name #' and verbose options. #' @param fn The path to the file to clean. #' @param lc The number of lines to read and clean at a time. #' @param ve The level of detail in the information messages. initialize = function(fn = NULL, lc = 100, ve = 2) { # If the given file name is not NULL and is not valid if (!is.null(fn) && !file.exists(fn)) { private$dm( "The given file name is not valid", md = -1, ty = "e" ) } # The base class attributes are set # The file name is set private$fn <- fn # The verbose option is set private$ve <- ve # The line count is set private$lc <- lc # The processed output is set private$p_output <- NULL } ), private = list( # @field opts The list of file processing options. # * **save_data**. If the combined processed lines should be saved. # * **ret_data**. If the data should be returned. # * **output_file**. Name of the output file used to store the data. opts = list( "save_data" = F, "ret_data" = F, "output_file" = NULL ), # @field lc The number of lines to read and process at a time. lc = 100, # @field p_output The output of the processing step p_output = NULL, # @field fn The name of the text file to process. fn = NULL, # @field ve Indicates if progress data should be printed. ve = 0, # @field con The input file connection con = NULL, # @description # Reads the contents of the given file. Loads the file # contents to a R object, a data frame or character vector. # @param fn The file name. # @param format The file format. 'plain' or 'obj' # @param opts Options for reading the file. # @return The required data. read_data = function(fn, format, opts) { # If the format is plain if (format == "plain") { # The file is read data <- private$read_file(fn, opts) } # If the format is obj else if (format == "obj") { # The file is read data <- private$read_obj(fn) } return(data) }, # @description # Writes the given data to a file. The data may be a R object, a # character vector or a data frame. # @param fn The file name. # @param format The file format. 'plain' or 'obj' # @param opts Options for writting to the file. # @return The required data. write_data = function(data, fn, format, opts) { # If the format is plain if (format == "plain") { # The data is written to a file private$write_file(data, fn, opts) } # If the format is obj if (format == "obj") { # The R object is saved private$save_obj(data, fn) } }, # @description #' Reads the given file one line at a time. It runs the given #' pre-processing function before reading the file. It runs the given # line processing function for each line. It optionally saves the # output of line processing after reading the file or after processing # certain number of lines. # @param pre_process The pre-processing function. # @param process The function used to process each line. # @param post_process The function used to perform post processing. # @return The combined processed data process_file = function(pre_process, process, post_process) { # Pre-processing is done pre_process() # The file is opened private$con <- file(private$fn) # The connection is opened for reading open(private$con) # The lines to be read, lines <- c() # The loop counter c <- 0 # Indicates that data should not be appended is_app <- F # The output file name of <- private$opts[["output_file"]] # All lines are read while (TRUE) { # The lines are read lines <- readLines(private$con, n = private$lc, skipNul = TRUE ) # If all the lines have been read if (length(lines) == 0) break # The lines are processed p_lines <- process(lines) # If the processed lines are NULL if (is.null(p_lines)) next # If the data should be saved if (private$opts[["save_data"]]) { # The cleaned data is written to file private$write_file(p_lines, of, is_app) # Debug message private$dm( length(p_lines), "lines were written\n", md = 1 ) # Indicates that data should be appended is_app <- T } # If the processed data should be returned if (private$opts[["ret_data"]]) { # The processed output is merged private$p_output <- c(private$p_output, p_lines) } # The loop counter is increased by 1 c <- c + 1 # The information message is displayed private$dm( private$lc * c, "lines have been processed\n", md = 1 ) } # The file connection is closed if it is open close(private$con) # Post processing is performed post_process() # If the data should be returned if (private$opts[["ret_data"]]) { # The processed output is returned return(private$p_output) } }, # @description # Reads the given file and returns its contents. # @param fn The name of the file to read. # @param is_csv If the data is a csv file # @return The file data read_file = function(fn, is_csv) { # The information message msg <- paste0("Reading \033[0;", 32, "m'", fn, "'\033[0m") # Information message is shown private$dm(msg, md = 1) # If the file is not a csv file if (!is_csv) { # File is opened for reading con <- file(fn) # The file contents are read data <- readLines(con, skipNul = TRUE) # The file connection is closed close(con) } else { data <- read.csv(fn) } # The information message is shown private$dm(" \u2714\n", md = 1) # The data is returned return(data) }, # @description # Reads the given number of lines from the given file. # @param fn The name of the file to read. # @param lc The number of lines to read. # @return The file data read_lines = function(fn, lc) { # The information message msg <- paste0("Reading \033[0;", 32, "m'", fn, "'\033[0m") # Information message is shown private$dm(msg, md = 1) # File is opened for reading con <- file(fn) # The file contents are read data <- readLines(con, n = lc, skipNul = TRUE) # The file connection is closed close(con) # The information message is shown private$dm(" \u2714\n", md = 1) # The data is returned return(data) }, # @description # Writes the given data to the given file. The data may be appended to # an existing file. # @param data The data to be written. # @param fn The name of the file. # @param is_append Indicates if data should be saved. write_file = function(data, fn, is_append) { # The information message msg <- paste0("Writing \033[0;", 34, "m'", fn, "'\033[0m") # Information message is shown private$dm(msg, md = 1) # If the given data is a data frame if ("data.frame" %in% class(data)) { # The data frame is written to a file write.csv(data, fn, row.names = F) } else { # The file open mode mode <- "w" # If the data should be appended if (is_append) mode <- "a" # The output file is opened for writing con <- file(fn, open = mode) # The data is written to the output file writeLines(data, con) # The file connection is closed close(con) } # The information message is shown private$dm(" \u2714\n", md = 1) }, # @description # Saves the given object as a file. # @param obj The object to save. # @param fn The file name. save_obj = function(obj, fn) { # The information message msg <- paste0("Writing \033[0;", 34, "m'", fn, "'\033[0m") # Information message is shown private$dm(msg, md = 1) # The object is saved to a file in version 2 format saveRDS(obj, fn, version = 2) # The information message is shown private$dm(" \u2714\n", md = 1) }, # @description # Reads the contents of the given file. Loads the file # contents to a R object. # @param fn The file name. # @return The loaded R obj. read_obj = function(fn) { # The information message msg <- paste0("Reading \033[0;", 32, "m'", fn, "'\033[0m") # Information message is shown private$dm(msg, md = 1) # If the file does not exist if (!file.exists(fn)) { # The error message private$dm( "The file: ", fn, " cannot be read !", md = -1, ty = "e" ) } else { # The object is saved obj <- readRDS(fn) } # The information message is shown private$dm(" \u2714\n", md = 1) return(obj) }, # @description # Prints the given message depending on verbose settings. # @param ... The text messages to be displayed. # @param md The minimum debugging level. # @param ty The type of message. dm = function(..., md, ty = "m") { # If verbose is >= min_debug, then message is displayed if (private$ve >= md) { # If the type is message if (ty == "m") { cat(...) } # If the type is warning else if (ty == "w") { warning(...) } # If the type is error else if (ty == "e") { stop(...) } } }, # @description # Displays the given heading text in bold. # @param text The heading text to display. # @param char The padding character to use. # @param md The minimum debugging level. # @param ll The total length of the line. Default is 80 chars. dh = function(text, char, md, ll = 80) { # If verbose is >= min_debug, then message is displayed if (private$ve >= md) { # The heading prefix pre <- paste0(rep(char, 2), collapse = "") pre <- paste0(pre, " ", collapse = "") # The number of times the suffix should be repeated c <- ll - (nchar(text) - 3) # The heading text is added msg <- paste0(pre, text, collapse = "") msg <- paste0(msg, " ", collapse = "") # The heading suffix su <- paste0(rep(char, c), collapse = "") msg <- paste0(msg, su, collapse = "") msg <- paste0(msg, "\n", collapse = "") # The heading prefix is printed cat(msg) } }, # @description # Performs processing on the data. It should be # overriden by a derived class. # @param lines The lines to process process = function(lines) { }, # @description # Performs post-processing on the processed data. It should be # overriden by a derived class. post_process = function() { }, # @description # Performs pre-processing on the processed data. It should be # overriden by a derived class. pre_process = function() { return(NULL) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/base.R
#' Analyzes input text files and n-gram token files #' #' @description #' It provides a method that returns information about text files, such as #' number of lines and number of words. It also provides a method that displays #' bar plots of n-gram frequencies. Additionally it provides a method for #' searching for n-grams in a n-gram token file. This file is generated using #' the TokenGenerator class. #' #' @details #' It provides a method that returns text file information. The text #' file information includes total number of lines, max, min and mean line #' length and file size. #' #' It also provides a method that generates a bar plot showing the most common #' n-gram tokens. #' #' Another method is provided which returns a list of n-grams that match the #' given regular expression. #' @importFrom ggplot2 ggplot geom_bar ggtitle coord_flip ylab xlab aes ggsave DataAnalyzer <- R6::R6Class( "DataAnalyzer", inherit = Base, public = list( #' @description #' It initializes the current object. It is used to set the file name #' and verbose options. #' @param fn The path to the input file. #' @param ve The level of detail in the information messages. #' @export initialize = function(fn = NULL, ve = 0) { # The file name is set private$fn <- fn # The processed output is initialized private$p_output <- data.frame() # The verbose options is set private$ve <- ve }, #' @description #' It allows generating two type of n-gram plots. It first reads n-gram #' token frequencies from an input text file. The n-gram frequencies are #' displayed in a bar plot. #' #' The type of plot is specified by the type option. The type options #' can have the values 'top_features' or 'coverage'. 'top_features' #' displays the top n most occurring tokens along with their #' frequencies. 'coverage' displays the number of words along with their #' frequencies. #' #' The plot stats are returned as a data frame. #' @param opts The options for analyzing the data. #' * **type**. The type of plot to display. The options are: #' 'top_features', 'coverage'. #' * **n**. For 'top_features', it is the number of top most occurring #' tokens. For 'coverage' it is the first n frequencies. #' * **save_to**. The graphics devices to save the plot to. #' NULL implies plot is printed. #' * **dir**. The output directory where the plot will be saved. #' @return A data frame containing the stats. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL value implies tempdir will #' # be used. #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("n2.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The n-gram file name #' nfn <- paste0(ed, "/n2.RDS") #' # The DataAnalyzer object is created #' da <- DataAnalyzer$new(nfn, ve = ve) #' # The top features plot is checked #' df <- da$plot_n_gram_stats(opts = list( #' "type" = "top_features", #' "n" = 10, #' "save_to" = NULL, #' "dir" = ed #' )) #' # N-gram statistics are displayed #' print(df) #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() plot_n_gram_stats = function(opts) { # The information message is shown private$dh("Displaying Plot", "-", md = 1) # The n-gram data is read df <- private$read_obj(private$fn) # If the coverage option was specified if (opts[["type"]] == "coverage") { # The y values y <- as.character(1:opts[["n"]]) # The x values x <- numeric() # The percentage frequencies is calculated for (i in 1:opts[["n"]]) { # The percentage of tokens with frequency i x[i] <- 100 * (nrow(df[df$freq == i, ]) / nrow(df)) # The percentage is rounded to 2 decimal places x[i] <- round(x[i], 2) } # A data frame is created df <- data.frame("freq" = x, "pre" = y) # The plot labels labels <- list( y = "Percentage of total", x = "Word Frequency", title = "Coverage" ) } # If the top_features option was specified else if (opts[["type"]] == "top_features") { # The plot labels labels <- list( y = "Frequency", x = "Feature", title = paste("Top", opts[["n"]], "Features") ) } # The freq column is converted to numeric df$freq <- as.numeric(df$freq) # The pre column is converted to character df$pre <- as.character(df$pre) # The data frame is sorted in descending order df <- (df[order(df$freq, decreasing = T), ]) # The top n terms are extracted df <- df[1:opts[["n"]], ] # The chart is plotted g <- private$display_plot(df, labels) # If the save_to and dir options are not NULL if (!is.null(opts[["save_to"]]) && !is.null(opts[["dir"]])) { # The file name for the plot fn <- paste0(opts[["type"]], ".", opts[["save_to"]]) # The plot object is saved ggsave( filename = fn, plot = g, device = opts[["save_to"]], path = opts[["dir"]], width = 7, height = 7, units = "in" ) } else { # The plot is printed print(g) } # The information message is shown private$dh("DONE", "=", md = 1) return(df) }, #' @description #' It generates information about text files. It takes as input a file #' or a directory containing text files. For each file it calculates the #' total number of lines, maximum, minimum and mean line lengths and the #' total file size. The file information is returned as a data frame. #' @param res The name of a directory or a file name. #' @return A data frame containing the text file statistics. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("test.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The test file name #' cfn <- paste0(ed, "/test.txt") #' # The DataAnalyzer object is created #' da <- DataAnalyzer$new(ve = ve) #' # The file info is fetched #' fi <- da$get_file_info(cfn) #' # The file information is printed #' print(fi) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() get_file_info = function(res) { # The information message is shown private$dh("Generating file stats", "-", md = 1) # The list of files to check fl <- NULL # If a directory name was passed if (dir.exists(res)) { # All files in the directory are fetched fl <- dir(res, full.names = T, pattern = "*.txt") } # If a file name was passed else if (file.exists(res)) { # The file name is set fl <- res } # Used to store overall information about files ostats <- data.frame( "total_lc" = 0, "max_ll" = 0, "min_ll" = 0, "mean_ll" = 0, "total_s" = 0 ) # Used to store information about each file fstats <- tstats <- data.frame() # Temporary variables for calculating max, min, mean line length temp_max <- temp_min <- temp_mean <- 0 # For each file in the list for (fn in fl) { # The file is read lines <- private$read_file(fn, F) # The line count lc <- length(lines) # The file size size <- file.size(fn) # The file stats are updated ostats[["total_s"]] <- ostats[["total_s"]] + size ostats[["total_lc"]] <- ostats[["total_lc"]] + lc # The temporary variables are updated temp_max <- max(nchar(lines)) temp_min <- min(nchar(lines)) temp_mean <- round(mean(nchar(lines))) # The file stats are updated tstats <- data.frame( "fn" = fn, "total_lc" = lc, "max_ll" = temp_max, "min_ll" = temp_min, "mean_ll" = temp_mean, "size" = size ) # The size is formatted tstats["size"] <- utils:::format.object_size(tstats["size"], "auto") # The file stats are appended fstats <- rbind(fstats, tstats) if (temp_max > ostats["max_ll"]) { ostats["max_ll"] <- temp_max } if (temp_min > ostats["min_ll"]) { ostats["min_ll"] <- temp_min } if (temp_mean > ostats["mean_ll"]) { ostats["mean_ll"] <- temp_mean } } # The total size is formatted ostats["total_s"] <- utils:::format.object_size(ostats["total_s"], "auto") # The required stats stats <- list("file_stats" = fstats, "overall_stats" = ostats) # The information message is shown private$dh("DONE", "=", md = 1) # The required stats are returned return(stats) }, #' @description #' It extracts a given number of n-grams and their frequencies from a #' n-gram token file. #' #' The prefix parameter specifies the regular expression for matching #' n-grams. If this parameter is not specified then the given number of #' n-grams are randomly chosen. #' @param fn The n-gram file name. #' @param c The number of n-grams to return. #' @param pre The n-gram prefix, given as a regular expression. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("n2.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The n-gram file name #' nfn <- paste0(ed, "/n2.RDS") #' # The DataAnalyzer object is created #' da <- DataAnalyzer$new(nfn, ve = ve) #' # Bi-grams starting with "and_" are returned #' df <- da$get_ngrams(fn = nfn, c = 10, pre = "^and_*") #' # The data frame is sorted by frequency #' df <- df[order(df$freq, decreasing = TRUE),] #' # The data frame is printed #' print(df) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() get_ngrams = function(fn, c = NULL, pre = NULL) { # The data is read df <- private$read_obj(fn) # If the prefix is not given if (is.null(pre)) { seq_l # The sample indexes i <- sample(seq_len(nrow(df)), c) # The n-gram samples s <- df[i, ] } else { # The n-gram samples s <- df[grepl(pre, df$pre), ] } return(s) } ), private = list( # @field da_opts The options for data analyzer object. # * **type**. The type of plot to display. The options are: # 'top_features', 'coverage'. # * **n**. For 'top_features', it is the number of top most occurring # tokens. da_opts = list( "type" = "top_features", "n" = 10 ), # @description # Displays a plot using ggplot2. The plot is a horizontal # bar plot filled with red. It has the given labels and main title # @param df The data to plot. It is a data frame with prefix and freq # columns. # @param labels The main title, x and y axis labels. # @return The ggplot object is returned. display_plot = function(df, labels) { # The n-gram names and their frequencies are plotted g <- ggplot(data = df, aes(x = reorder(pre, freq), y = freq)) + geom_bar(stat = "identity", fill = "red") + ggtitle(labels[["title"]]) + coord_flip() + ylab(labels[["y"]]) + xlab(labels[["x"]]) return(g) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/data-analyzer.R
#' Provides data cleaning functionality #' #' @description #' It provides a memory efficient method for removing unneeded #' characters from text files. It is suitable for cleaning large text files. #' #' @details #' It provides a method for cleaning text files. It allows removing bad words, #' stop words, non dictionary words, extra space, punctuation and non-alphabet #' characters. It also allows conversion to lower case. It supports large text #' files. #' #' @importFrom stringr str_count boundary DataCleaner <- R6::R6Class( "DataCleaner", inherit = Base, public = list( #' @description #' It initializes the current object. It is used to set the file name #' and verbose options. #' @param fn The path to the file to clean. #' @param opts The options for data cleaning. #' * **min_words**. The minimum number of words per sentence. #' * **line_count**. The number of lines to read and clean at a time. #' * **save_data**. If the combined processed lines should be saved. #' * **output_file**. Name of the output file used to store the data. #' * **sw_file**. The stop words file path. #' * **dict_file**. The dictionary file path. #' * **bad_file**. The bad words file path. #' * **to_lower**. If the words should be converted to lower case. #' * **remove_stop**. If stop words should be removed. #' * **remove_punct**. If punctuation symbols should be removed. #' * **remove_non_dict**. If non dictionary words should be removed. #' * **remove_non_alpha**. -> If non alphabet symbols should be removed. #' * **remove_extra_space**. -> If leading, trailing and double spaces #' should be removed. #' * **remove_bad**. If bad words should be removed #' @param ve The level of detail in the information messages. #' @export initialize = function(fn = NULL, opts = list(), ve = 0) { # An object of class EnvManager is created em <- EnvManager$new(ve) # The stop words file is checked opts[["sw_file"]] <- em$get_data_fn( opts[["sw_file"]], "stop-words.txt" ) # The bad words file is checked opts[["bad_file"]] <- em$get_data_fn( opts[["bad_file"]], "bad-words.txt" ) # The dict words file is checked opts[["dict_file"]] <- em$get_data_fn( opts[["dict_file"]], "dict-no-bad.txt" ) # The given options are merged with the opts attribute private$dc_opts <- modifyList(private$dc_opts, opts) # The base class is initialized super$initialize(fn, private$dc_opts[["line_count"]], ve) # The stop words file is read private$sw <- private$read_file(private$dc_opts[["sw_file"]], F) # The dictionary file is read private$dw <- private$read_file(private$dc_opts[["dict_file"]], F) # The bad word file is read private$bw <- private$read_file(private$dc_opts[["bad_file"]], F) # If the output file name is not given, then the default file name # is used. The default file name is generated by appending "-test" # to the input file name. if (!is.null(fn) && is.null(private$dc_opts[["output_file"]])) { # The default file name dfn <- gsub(".txt", "-clean.txt", fn) # The default file name is set private$dc_opts[["output_file"]] <- dfn # The information message msg <- paste0("Output file name not given.") msg <- paste0(msg, " Using the default file name: ", dfn, "\n") # The information message is shown private$dm(msg, md = 1, ty = "w") } # The save_data option of base class is set private$opts[["save_data"]] <- private$dc_opts[["save_data"]] # The output_file option of base class is set private$opts[["output_file"]] <- private$dc_opts[["output_file"]] }, #' @description #' It provides an efficient method for cleaning text files. #' It removes unneeded characters from the given text file with several #' options. #' #' It allows removing punctuation, bad words, stop words, #' non-alphabetical symbols and non-dictionary words. It reads a certain #' number of lines from the given text file. It removes unneeded #' characters from the lines and then saves the lines to an output text #' file. #' #' File cleaning progress is displayed if the verbose option was #' set in the class constructor. It is suitable for cleaning large text #' files. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("test.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The cleaned test file name #' cfn <- paste0(ed, "/test-clean.txt") #' # The test file name #' fn <- paste0(ed, "/test.txt") #' # The data cleaning options #' dc_opts <- list("output_file" = cfn) #' # The data cleaner object is created #' dc <- DataCleaner$new(fn, dc_opts, ve = ve) #' # The sample file is cleaned #' dc$clean_file() #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() clean_file = function() { # The information message is shown private$dh("Cleaning file", "-", md = 1) # The base class process_file function is called private$process_file( private$pre_process, private$process, private$post_process ) # The information message is shown private$dh("DONE", "=", md = 1) # If the data should not be saved if (!private$dc_opts[["save_data"]]) { # The processed output is returned return(private$p_output) } }, #' @description #' It cleans the given lines of text using the options #' passed to the current object. #' @param lines The input sentences. #' @return The cleaned lines of text. #' @examples #' # The level of detail in the information messages #' ve <- 0 #' # Test data is read #' l <- c( #' "If you think I'm wrong, send me a link to where it's happened", #' "We're about 90percent done with this room", #' "This isn't how I wanted it between us.", #' "Almost any cute breed can become ornamental", #' "Once upon a time there was a kingdom with a castle", #' "That's not a thing any of us are granted'", #' "Why are you being so difficult? she asks." #' ) #' # The expected results #' res <- c( #' "if you think wrong send me a link to where its happened", #' "were about percent done with this room", #' "this how i wanted it between us", #' "almost any cute breed can become ornamental", #' "once upon a time there was a kingdom with a castle", #' "thats not a thing any of us are granted", #' "why are you being so difficult she asks" #' ) #' # The DataCleaner object is created #' dc <- DataCleaner$new(ve = ve) #' # The line is cleaned #' cl <- dc$clean_lines(l) #' # The cleaned lines are printed #' print(cl) clean_lines = function(lines) { # The lines to clean l <- lines # If a line does not end with a ".", then "." is appended to the # line l <- gsub("(.+[^\\.])$", "\\1.", l) # The "." character is replaced with the string "specialdotsep" l <- gsub("\\.", " specialdotsep ", l) # If the words should be converted to lower case if (private$dc_opts[["to_lower"]]) { # The information message private$dm("Converting lines to lower case\n", md = 3) # The line is converted to lower case l <- tolower(l) } # If punctuation symbols should be removed if (private$dc_opts[["remove_punct"]]) { # The information message private$dm("Removing punctuation symbols\n", md = 3) # The pattern for removing all punctuation symbols l <- gsub("[[:punct:]\u2026\u2019\u201c\u201d]", "", l) } # If non alphabet symbols should be removed if (private$dc_opts[["remove_non_alpha"]]) { # The information message private$dm("Removing non alphabet symbols\n", md = 3) # Words containing non alphabetical characters are removed l <- gsub("([^[:alpha:]\\s])", "", l, perl = T) } # If stop words should be removed if (private$dc_opts[["remove_stop"]]) { # The information message private$dm("Removing stop words\n", md = 3) # Stop words are collapsed sw <- paste(private$sw, collapse = "|") swp <- paste("\\b(", sw, ")\\b", sep = "") # The stop words are removed l <- gsub(swp, "", l) } # The words in the lines are extracted words <- strsplit(l, split = " ") # The words are converted to an atomic list words <- unlist(words) # If non dictionary words should be removed if (private$dc_opts[["remove_non_dict"]]) { # The information message private$dm("Removing non dictionary words\n", md = 3) # The "specialdotsep" string is added to list of dictionary # words dw <- c(private$dw, "specialdotsep") # The non dictionary words are removed from the data words <- words[words %in% dw] # All 1 length words except for 'a' and 'i' are removed # The indexes position of all words that are "a" or "i" i1 <- (words == "a" | words == "i") # The index position of words of length 2 or more i2 <- (nchar(words) > 1) # The list of all words of length 2 or more including "a" and # "i" words <- words[i1 | i2] } # If bad words should be removed if (private$dc_opts[["remove_bad"]]) { # The information message private$dm("Removing bad words\n", md = 3) # The "specialdotsep" string is added to list of bad words bw <- c(private$bw, "specialdotsep") # The bad words are removed from the data words <- words[!words %in% bw] } # The words are combined with space l <- paste(words, collapse = " ") # The "specialdotsep" string is replaced with "." l <- gsub("specialdotsep", ".", l) # The sentences in the lines are extracted l <- strsplit(l, split = "\\.") # The sentences are converted to an atomic list l <- unlist(l) # If each sentence should have a minimum number of words if (private$dc_opts[["min_words"]] > -1) { # The information message msg <- paste0("Removing lines that have less than ") msg <- paste0(msg, private$dc_opts[["min_words"]], " words\n") # The information message private$dm(msg, md = 3) # The number of words in each sentence wc <- str_count(l, pattern = boundary("word")) # The lines containing less than min_words number of words are # removed l <- l[wc >= private$dc_opts[["min_words"]]] } # Consecutive 'a' and 'i' are replaced with single 'a' or 'i' l <- gsub("(a\\s){2,}", "\\1 ", l) l <- gsub("(i\\s){2,}", "\\1 ", l) l <- gsub("a$", "", l) # If extra spaces should be removed if (private$dc_opts[["remove_extra_space"]]) { # The information message private$dm("Removing extra spaces\n", md = 3) # Multiple spaces are replaced by single space l <- gsub("\\s{2,}", " ", l) # Leading and trailing whitespaces are removed l <- trimws(l) } return(l) } ), private = list( # @field dc_opts The options for the data cleaner object. # * **min_words**. The minimum number of words per sentence. # * **line_count**. The number of lines to read and clean at a time. # * **save_data**. If the combined processed lines should be saved. # * **output_file**. Name of the output file used to store the data. # * **sw_file**. The stop words file path. # * **dict_file**. The dictionary file path. # * **bad_file**. The bad words file path. # * **to_lower**. If the words should be converted to lower case. # * **remove_stop**. If stop words should be removed. # * **remove_punct**. If punctuation symbols should be removed. # * **remove_non_dict**. If non dictionary words should be removed. # * **remove_non_alpha**. If non alphabet symbols should be removed. # * **remove_extra_space**. If leading, trailing and double spaces # should be removed. # * **remove_bad**. If bad words should be removed dc_opts = list( "min_words" = 2, "line_count" = 1000, "save_data" = T, "output_file" = NULL, "sw_file" = NULL, "dict_file" = NULL, "bad_file" = NULL, "to_lower" = T, "remove_stop" = F, "remove_punct" = T, "remove_non_dict" = T, "remove_non_alpha" = T, "remove_extra_space" = T, "remove_bad" = F ), # @field sw The list of stop words. sw = list(), # @field bw The list of bad words. bw = list(), # @field dw The list of dictionary words. dw = list(), # @description # Performs processing for the clean_file function. # It processes the given lines of text. It divides the given lines of # text into sentences by spliting on '.'. Each sentence is then cleaned # using clean_lines. If the number of words in the cleaned # sentence is less than min_words, then the sentence is rejected. # @param lines The lines of text to clean. # @return The processed line is returned. process = function(lines) { # The sentence is cleaned cl <- self$clean_lines(lines) return(cl) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/data-cleaner.R
#' Generates data samples from text files #' #' @description #' It provides a method for generating training, testing and validation data #' sets from a given input text file. #' #' It also provides a method for generating a sample file of given size or #' number of lines from an input text file. The contents of the sample file may #' be cleaned or randomized. DataSampler <- R6::R6Class( "DataSampler", inherit = Base, public = list( #' @description #' It initializes the current object. It is used to set the #' verbose option. #' @param dir The directory for storing the input and output files. #' @param ve The level of detail in the information messages. #' @export initialize = function(dir = ".", ve = 0) { # The directory name is set private$dir <- dir # The base class is initialized super$initialize(NULL, NULL, ve) }, #' @description #' Generates a sample file of given size from the given input file. The #' file is saved to the directory given by the dir object attribute. #' Once the file has been generated, its contents may be cleaned or #' randomized. #' @param fn The input file name. It is the short file name relative to #' the dir attribute. #' @param ss The number of lines or proportion of lines to sample. #' @param ic If the sample file should be cleaned. #' @param ir If the sample file contents should be randomized. #' @param ofn The output file name. It will be saved to the dir. #' @param is If the sampled data should be saved to a file. #' @param dc_opts The options for cleaning the data. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("input.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The sample file name #' sfn <- paste0(ed, "/sample.txt") #' # An object of class DataSampler is created #' ds <- DataSampler$new(dir = ed, ve = ve) #' # The sample file is generated #' ds$generate_sample( #' fn = "input.txt", #' ss = 0.5, #' ic = FALSE, #' ir = FALSE, #' ofn = "sample.txt", #' is = TRUE #' ) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() generate_sample = function(fn, ss, ic, ir, ofn, is, dc_opts = NULL) { # The full path to the input file fn <- paste0(private$dir, "/", fn) # If the input file does not exist if (!file.exists(fn)) { # The error message private$dm("The input file: ", fn, " does not exist\n", md = -1, ty = "e" ) } # The output file name path of <- paste0(private$dir, "/", ofn) # The sample file is generated from the given file data <- private$generate_sf_from_f(fn, ss, ic, ir, of, is, dc_opts) # If the data should not be saved if (!is) { # The data is returned return(data) } }, #' @description #' It generates training, testing and validation data sets #' from the given input file. It first reads the file given as a #' parameter to the current object. It partitions the data into #' training, testing and validation sets, according to the perc #' parameter. The files are named train.txt, test.txt and va.txt and are #' saved to the given output folder. #' @param fn The input file name. It should be relative to the dir #' attribute. #' @param percs The size of the training, testing and validation sets. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be #' # used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("input.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve) #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The files to clean #' fns <- c("train", "test", "validate") #' # An object of class DataSampler is created #' ds <- DataSampler$new(dir = ed, ve = ve) #' # The train, test and validation files are generated #' ds$generate_data( #' fn = "input.txt", #' percs = list( #' "train" = 0.8, #' "test" = 0.1, #' "validate" = 0.1 #' ) #' ) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() generate_data = function(fn, percs) { # The directory containing the input and output files dir <- private$dir # The information message is shown private$dm( "Generating training,", "testing and validation data sets\n", md = 1 ) # The input file path is generated fn <- paste0(private$dir, "/", fn) # If the input file does not exist if (!file.exists(fn)) { # The error message private$dm("The input file: ", fn, " does not exist", md = -1, ty = "e" ) } # If the train, test and validation files already exist if (file.exists(paste0(dir, "/train.txt")) && file.exists(paste0(dir, "/test.txt")) && file.exists(paste0(dir, "/validate.txt"))) { # The information message is shown private$dm( "The train, test and validate files already exist\n", md = 1, ty = "w" ) } else { # The input file is read data <- private$read_file(fn, F) # The number of lines in the data lc <- length(data) # The required data rd <- data[1:lc] # The number of lines in train set tr_lc <- round(lc * percs[["train"]]) # The number of lines in test set te_lc <- round(lc * percs[["test"]]) # The number of lines in validate set va_lc <- round(lc * percs[["validate"]]) # The training set data train_ds <- rd[1:tr_lc] # The testing set data test_ds <- rd[tr_lc:(tr_lc + te_lc)] # The validation set data validate_ds <- rd[(tr_lc + te_lc):(tr_lc + te_lc + va_lc)] # The training data is written to file private$write_file(train_ds, paste0(dir, "/train.txt"), F) # The testing data is written to file private$write_file(test_ds, paste0(dir, "/test.txt"), F) # The validation data is written to file private$write_file(validate_ds, paste0(dir, "/validate.txt"), F) } # The information message is shown private$dh("DONE", "=", md = 1) } ), private = list( # @field dir The folder containing the input and output files. dir = ".", # @description Generates a sample file of given size from the given # input file. The file is optionally cleaned and saved. # @param fn The input file name. It is the short file name relative to # the dir. If not given, then the file name is auto generated from # the type parameter. # @param ss The number of lines or proportion of lines to sample. # @param ic If the sample file should be cleaned. # @param ir If the sample file contents should be randomized. # @param of The output file path. # @param is If the sampled data should be saved to a file. # @param dc_opts The options for cleaning the data. # @return The sampled data is returned generate_sf_from_f = function(fn = NULL, ss, ic, ir, of, is, dc_opts) { # The information message is shown private$dh("Generating sample file", "-", md = 1) # The input file is read data <- private$read_file(fn, F) # The number of lines in the main file lc <- length(data) # If the data should be randomized if (ir) { # The random indexes i <- sample(1:lc, size = lc) # The randomized data data <- data[i] } # If the sample size is less than 1 if (ss < 1) { # The number of lines in the sample file lc <- round(lc * ss) } else { lc <- ss } # The sample file data data <- data[1:lc] # If the data should be saved if (is) { # The sample file data is saved private$write_file(data, of, F) } # If the sample file should be cleaned if (ic) { # If the data should be saved dc_opts[["save_data"]] <- is # The data cleaner object is created dc <- DataCleaner$new(of, dc_opts, ve = private$ve) # The sample file is cleaned data <- dc$clean_file() } # The information message is shown private$dh("DONE", "=", md = 1) return(data) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/data-sampler.R
#' Allows managing the test environment #' #' @description #' This class provides a method for creating directories in the tempdir folder #' for testing purposes. It also provides a method for reading files from the #' inst/extdata folder. EnvManager <- R6::R6Class( "EnvManager", inherit = Base, public = list( #' @description #' It initializes the current object. It simply calls the base class #' constructor. #' @param rp The prefix for accessing the package root folder. #' @param ve The level of detail in the information messages. #' @export initialize = function(rp = "../../", ve = 0) { # The base class is initialized super$initialize(NULL, NULL, ve) # The root prefix is set private$rp <- rp }, #' @description #' Checks if the given file exists. If it does not exist, #' then it tries to load the file from the inst/extdata data folder of #' the package. It throws an error if the file was not found. If the #' file exists, then the method simply returns the file name. #' @param fn The file name. #' @param dfn The name of the default file in the external data folder #' of the package. #' @return The name of the file if it exists, or the full path to the #' default file. get_data_fn = function(fn, dfn) { # The required file name rfn <- fn # If the file is not given if (is.null(fn)) { # The file path is set to the default file # included with the wordpredictor package rfn <- system.file("extdata", dfn, package = "wordpredictor") # If the file was not found if (!file.exists(rfn)) { # An error message is shown private$dm("The file: ", rfn, " does not exist !", md = -1, ty = "e" ) } } # If the file name is given but the file does not exist else if (!file.exists(fn)) { # An error message is shown private$dm("The file: ", fn, " does not exist !", md = -1, ty = "e" ) } return(rfn) }, #' @description #' Removes all files in the given directory. #' @param dn The directory name. remove_files = function(dn) { # The information message msg <- paste0("Removing all files in ", dn, "\n") # The information message is shown private$dm(msg, md = 1) # Each file in the directory is deleted for (fn in dir(dn, full.names = T)) { # The file is removed file.remove(fn) } # The information message is shown private$dm(" \u2714\n", md = 1) }, #' @description #' Removes the ed folder created by the setup_env method. Also sets #' the R option, "ed" to NULL. #' @param rf If the environment folder should be removed. td_env = function(rf = F) { # The wordpredictor options wp <- getOption("wordpredictor") # The information message msg <- paste0("Removing the folder ", wp$ed) # The information message is shown private$dm(msg, md = 1) # The environment folder is removed unlink(wp$ed, recursive = T, force = T) # If the folder should not be removed if (!rf) { # The folder is created dir.create(wp$ed) } # The "ed" option is set to NULL wp$ed <- NULL # The wordpredictor options are updated options("wordpredictor" = wp) # The information message is shown private$dm(" \u2714\n", md = 1) }, #' @description #' Copies the ed folder created by the setup_env method to #' inst/extdata. cp_env = function() { # The wordpredictor options wp <- getOption("wordpredictor") # The path to the folder fp <- paste0(private$rp, "inst/extdata/") # The information message msg <- paste0( "Copying the directory: ", wp$ed, " to the folder ", fp) # The information message is shown private$dm(msg, md = 1) # If the folder does not exist if (!dir.exists(fp)) { # The new folder path is created dir.create(fp) } # The tempdir is copied to the inst/extdata folder file.copy(wp$ed, fp, recursive = T) # The information message is shown private$dm(" \u2714\n", md = 1) }, #' @description #' Copies the given files from test folder to the #' environment folder. #' @param fns The list of test files to copy #' @param cf A custom environment folder. It is a path relative to the #' current directory. If not specified, then the tempdir function is #' used to generate the environment folder. #' @return The list of folders that can be used during testing. setup_env = function(fns = c(), cf = NULL) { # The information message msg <- paste0("Setting up the test environment") # The information message is shown private$dh(msg, "-", md = 1) # The environment folder name ed <- NULL # If the cf is given and it does not exist if (!is.null(cf) && !dir.exists(cf)) { # The information message msg <- paste0("Creating custom environment folder: ", cf) # The information message is shown private$dm(msg, md = 1) # The custom environment folder is created dir.create(cf) # The information message is shown private$dm(" \u2714\n", md = 1) # The environment folder is set ed <- cf } else { # The tempdir location ed <- tempdir() # If the tempdir does not exist, then it is created if (!dir.exists(ed)) { # The information message is shown private$dm( "The tempdir:", ed, "does not exist. Creating the dir", md = 1 ) # The tempdir is created dir.create(ed) # The information message is shown private$dm(" \u2714\n", md = 1) } } # The wordpredictor options wp <- getOption("wordpredictor") # The ed option is updated wp$ed <- ed # The wordpredictor options are updated options("wordpredictor" = wp) # print(list.files(system.file(package = "wordpredictor"))) # Each file is copied from extdata to the given folder for (fn in fns) { # The source file path sfp <- system.file("extdata", fn, package = "wordpredictor") # If the source file path does not exist if (!file.exists(sfp)) { # The inst/extdata folder is checked sfp <- system.file( "inst/extdata", fn, package = "wordpredictor", mustWork = T) # If the source file path does not exist if (!file.exists(sfp)) { # The error message msg <- paste0("The file: ", fn, " could not be found") # The error message stop(msg) } } # The information message is shown private$dm("Copying file:", fn, "to", ed, md = 1) # The source file is copied file.copy(sfp, ed) # The information message is shown private$dm(" \u2714\n", md = 1) } # The information message is shown private$dh("DONE", "=", md = 1) return(ed) } ), private = list( # @field rp The prefix for accessing the package root. rp = "../../" ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/env-manager.R
#' Evaluates performance of n-gram models #' #' @description #' It provides methods for performing extrinsic and intrinsic #' evaluation of a n-gram model. It also provides a method for comparing #' performance of multiple n-gram models. #' #' Intrinsic evaluation is based on calculation of Perplexity. Extrinsic #' evaluation involves determining the percentage of correct next word #' predictions. #' #' @details #' Before performing the intrinsic and extrinsic model evaluation, a validation #' file must be first generated. This can be done using the DataSampler class. #' #' Each line in the validation file is evaluated. For intrinsic evaluation #' Perplexity for the line is calculated. An overall summary of the Perplexity #' calculations is returned. It includes the min, max and mean Perplexity. #' #' For extrinsic evaluation, next word prediction is performed on each line. If #' the actual next word is one of the three predicted next words, then the #' prediction is considered to be accurate. The extrinsic evaluation returns the #' percentage of correct and incorrect predictions. #' @importFrom patchwork plot_annotation #' @importFrom ggplot2 ggplot aes geom_point geom_smooth coord_cartesian labs ModelEvaluator <- R6::R6Class( "ModelEvaluator", inherit = Base, public = list( #' @description #' It initializes the current object. It is used to set the #' model file name and verbose options. #' @param mf The model file name. #' @param ve The level of detail in the information messages. #' @export initialize = function(mf = NULL, ve = 0) { # The base class is initialized super$initialize(NULL, NULL, ve) # If the model file is not NULL if (!is.null(mf)) { # If the model file name is not valid, then an error is thrown if (!file.exists(mf)) { private$dm("Invalid model file: ", mf, md = -1, ty = "e") } else { # The model file is set private$mf <- mf # The ModelPredictor class object is created mp <- ModelPredictor$new(private$mf, ve = private$ve) # The ModelPredictor object is set private$mp <- mp } } }, #' @description #' It compares the performance of the models in the given folder. #' #' The performance of the model is compared for the 4 metric which are #' time taken, memory used, Perplexity and accuracy. The performance #' comparison is displayed on plots. #' #' 4 plots are displayed. One for each performance metric. A fifth plot #' shows the variation of Perplexity with accuracy. All 5 plots are #' plotted on one page. #' @param opts The options for comparing model performance. #' * **save_to**. The graphics device to save the plot to. #' NULL implies plot is printed. #' * **dir**. The directory containing the model file, plot and stats. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be #' # used. #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code # #' # ModelEvaluator class object is created #' me <- ModelEvaluator$new(ve = ve) #' # The performance evaluation is performed #' me$compare_performance(opts = list( #' "save_to" = NULL, #' "dir" = ed #' )) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() compare_performance = function(opts) { # A data frame containing the combined performance stats cps <- data.frame() # The list of model files in the given directory fl <- dir(opts[["dir"]], full.names = T) # Each model file in the directory is read for (fn in fl) { # If the file name does not contain .RDS if (!endsWith(fn, ".RDS")) next # The model file is read m <- private$read_obj(fn) # The performance stats for the model pstats <- m$pstats # The memory used by the object is formated mu <- pstats$m / (10^6) # The temporary performance stats tstats <- data.frame( "n" = m$name, "m" = mu, "t" = pstats$t, "p" = pstats$p, "a" = pstats$a ) # The combined performance stats are updated cps <- rbind(cps, tstats) } # The combined performance stats are plotted g <- self$plot_stats(cps) # If the save_to and dir options are not NULL if (!is.null(opts[["save_to"]])) { # The file name for the plot fn <- paste0("performance.", opts[["save_to"]]) # The plot object is saved ggsave( filename = fn, plot = g, device = opts[["save_to"]], path = opts[["dir"]], width = 7, height = 7, units = "in" ) } else { # The plot is printed print(g) } # If the directory path was given if (dir.exists(opts[["dir"]])) { # The performance stats file name fn <- paste0(opts[["dir"]], "/pstats.RDS") # The combined performance stats are save private$save_obj(cps, fn) } }, #' @description #' It plots the given stats on 5 plots. The plots are displayed on a #' single page. #' #' The 4 performance metrics which are time taken, memory, Perplexity #' and accuracy are plotted against the model name. Another plot #' compares Perplexity with accuracy for each model. #' @param data The data to plot #' @return The ggplot object is returned. plot_stats = function(data) { # The information message is shown private$dm("Plotting performance stats", md = 1) # The x values. Each value in the range is the model number x_vals <- seq_len(length(data$n)) # The data frames df1 <- data.frame(x = x_vals, y = data$m) df2 <- data.frame(x = x_vals, y = data$t) df3 <- data.frame(x = x_vals, y = data$p) df4 <- data.frame(x = x_vals, y = data$a) df5 <- data.frame(x = data$a, y = data$p) # The options for plot 1 popts <- list( "x_lab" = "model", "y_lab" = "memory" ) # Plot 1 p1 <- private$plot_graph(df1, popts) # The options for plot 2 popts <- list( "x_lab" = "model", "y_lab" = "time" ) # Plot 2 p2 <- private$plot_graph(df2, popts) # The options for plot 3 popts <- list( "x_lab" = "model", "y_lab" = "perplexity" ) # Plot 3 p3 <- private$plot_graph(df3, popts) # The options for plot 4 popts <- list( "x_lab" = "model", "y_lab" = "accuracy" ) # Plot 4 p4 <- private$plot_graph(df4, popts) # The options for plot 5 popts <- list( "x_lab" = "accuracy", "y_lab" = "perplexity" ) # Plot 5 p5 <- private$plot_graph(df5, popts) # The plots are displayed on a single page patchwork <- p1 + p2 + p3 + p4 + p5 # The model names mn <- paste0(data$n, collapse = ", ") # The subtitle st <- paste0( "The performance of following models is compared: ", mn ) # Main title is added p <- (patchwork + plot_annotation( "title" = "Performance comparison of n-gram models", "subtitle" = st )) # The information message is shown private$dm(" \u2714\n", md = 1) return(p) }, #' @description #' It performs intrinsic and extrinsic evaluation for the given model #' and validation text file. The given number of lines in the validation #' file are used in the evaluation #' #' It performs two types of evaluations. One is intrinsic evaluation, #' based on Perplexity, the other is extrinsic evaluation based on #' accuracy. #' #' It returns the results of evaluation. 4 evaluation metrics are #' returned. Perplexity, accuracy, memory and time taken. Memory is the #' size of the model object. Time taken is the time needed for #' performing both evaluations. #' #' The results of the model evaluation are saved within the model object #' and also returned. #' @param lc The number of lines of text in the validation file to be #' used for the evaluation. #' @param fn The name of the validation file. If it does not exist, then #' the default file validation-clean.txt is checked in the models #' folder #' @return The performance stats are returned. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS", "validate-clean.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The model file name #' mfn <- paste0(ed, "/def-model.RDS") #' # The validation file name #' vfn <- paste0(ed, "/validate-clean.txt") #' #' # ModelEvaluator class object is created #' me <- ModelEvaluator$new(mf = mfn, ve = ve) #' # The performance evaluation is performed #' stats <- me$evaluate_performance(lc = 20, fn = vfn) #' # The evaluation stats are printed #' print(stats) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() evaluate_performance = function(lc, fn) { # The information message is shown private$dh("Evaluating model performance", "-", md = 1) # The Model class object is fetched m <- private$mp$get_model() # The performance stats pstats <- list("m" = NULL, "t" = NULL, "p" = NULL, "a" = NULL) # The time taken is checked tt <- system.time({ # Intrinsic evaluation is performed istats <- self$intrinsic_evaluation(lc, fn) # Extrinsic evaluation is performed estats <- self$extrinsic_evaluation(lc, fn) }) # The y-axis values are updated pstats[["m"]] <- m$get_size() pstats[["t"]] <- tt[[3]] pstats[["p"]] <- istats$mean pstats[["a"]] <- estats$valid_perc # The performance stats are saved m$pstats <- pstats # The information message is shown private$dm("Saving stats to model file\n", md = 1) # The model is saved private$save_obj(m, private$mf) # The information message is shown private$dh("DONE", "=", md = 1) # The performance stats are returned return(pstats) }, #' @description #' Evaluates the model using intrinsic evaluation based on #' Perplexity. The given number of sentences are taken from the #' validation file. For each sentence, the Perplexity is calculated. #' @param lc The number of lines of text in the validation file to be #' used for the evaluation. #' @param fn The name of the validation file. If it does not exist, then #' the default file validation-clean.txt is checked in the models #' folder #' @return The min, max and mean Perplexity score. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS", "validate-clean.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The model file name #' mfn <- paste0(ed, "/def-model.RDS") #' # The validation file name #' vfn <- paste0(ed, "/validate-clean.txt") #' #' # ModelEvaluator class object is created #' me <- ModelEvaluator$new(mf = mfn, ve = ve) #' # The intrinsic evaluation is performed #' stats <- me$intrinsic_evaluation(lc = 20, fn = vfn) #' # The evaluation stats are printed #' print(stats) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() intrinsic_evaluation = function(lc, fn) { # The information message is shown private$dh("Performing intrinsic evaluation", "-", md = 1) # The validation data is read data <- private$read_lines(fn, lc) # The list of perplexities pl <- c() # The loop counter c <- 1 # The Perplexity of each sentence in the test data is calculated for (line in data) { # The line is split on space words <- strsplit(line, " ")[[1]] # The perplexity for the line is calculated p <- private$mp$calc_perplexity(words) # The information message msg <- paste0( "Perplexity of the sentence '", line, "' is: ", p, "\n" ) # The information message is shown private$dm(msg, md = 2) # The list of perplexities is updated pl <- c(pl, p) # If the counter is divisible by 10 if (c %% 10 == 0) { # The information message is shown private$dm(c, " lines have been processed\n", md = 1) } # The counter is increased by 1 c <- c + 1 } # The perplexity stats stats <- list( "min" = min(pl), "max" = max(pl), "mean" = mean(pl) ) # The information message is shown private$dh("DONE", "=", md = 1) return(stats) }, #' @description #' Evaluates the model using extrinsic evaluation based on #' Accuracy. The given number of sentences are taken from the validation #' file. #' #' For each sentence, the model is used to predict the next word. #' The accuracy stats are returned. A prediction is considered to be #' correct if one of the predicted words matches the actual word. #' @param lc The number of lines of text in the validation file to be #' used for the evaluation. #' @param fn The name of the validation file. #' @return The number of correct and incorrect predictions. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS", "validate-clean.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The model file name #' mfn <- paste0(ed, "/def-model.RDS") #' # The validation file name #' vfn <- paste0(ed, "/validate-clean.txt") #' #' # ModelEvaluator class object is created #' me <- ModelEvaluator$new(mf = mfn, ve = ve) #' # The intrinsic evaluation is performed #' stats <- me$extrinsic_evaluation(lc = 100, fn = vfn) #' # The evaluation stats are printed #' print(stats) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() extrinsic_evaluation = function(lc, fn) { # The information message is shown private$dh("Performing extrinsic evaluation", "-", md = 1) # The Model class object is fetched m <- private$mp$get_model() # The TokenGenerator object options tg_opts <- m$get_config("tg_opts") # The validation data is read data <- private$read_lines(fn, lc) # The statistics stats <- list("valid" = 0, "invalid" = 0) # The loop counter c <- 1 # The last word for each sentence is predicted for (line in data) { # The line is split on space words <- strsplit(line, " ")[[1]] # The word to predict w <- words[length(words)] # The previous words used to predict the word pw <- words[seq_len(length(words) - 1)] # If the words should be stemmed if (tg_opts[["stem_words"]]) { # The previous words are stemmed pw <- wordStem(pw) } # The next word is predicted res <- private$mp$predict_word(pw, F) # If the predicted word matches the actual word if (w %in% res["words"]) { stats[["valid"]] <- stats[["valid"]] + 1 # The information message private$dm( "The word: ", w, " was predicted\n", md = 3 ) } # If the predicted word does not match else { stats[["invalid"]] <- stats[["invalid"]] + 1 # The information message private$dm( "The word: ", w, " could not be predicted\n", md = 3 ) } # The counter is increased by 1 c <- c + 1 # If the counter is divisible by 10 if (c %% 10 == 0) { # The information message is shown private$dm( c, " sentences have been processed\n", md = 1 ) } } # The valid stats v <- stats[["valid"]] # The invalid stats i <- stats[["invalid"]] # The precentage of valid stats[["valid_perc"]] <- (v / (v + i)) * 100 # The precentage of invalid stats[["invalid_perc"]] <- 100 - stats[["valid_perc"]] # The information message is shown private$dh("DONE", "=", md = 1) return(stats) } ), private = list( # @field mf The model file name. mf = NULL, # @field mp The ModelPredictor class object. mp = NULL, # @description # It creates a single plot based on ggplot2. # @param data A data frame containing the data to be plotted. It should # have 2 variables, x and y. # @param opts The options for plotting the data. It contains: # * **x_lab**. The x-axis label. # * **y_lab**. The y-axis label. # @return A ggplot object representing the plot. plot_graph = function(data, opts) { # y-max y_max <- max(data$y) # The graph is plotted p <- ggplot(data, aes(x, y)) + geom_point() + geom_smooth(method = "lm", formula = y ~ x) + labs(x = opts[["x_lab"]], y = opts[["y_lab"]]) + coord_cartesian(ylim = c(0, y_max)) return(p) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/model-evaluator.R
#' Generates n-gram models from a text file #' #' @description #' It provides a method for generating n-gram models. The n-gram models may be #' customized by specifying data cleaning and tokenization options. #' #' @details #' It provides a method that generates a n-gram model. The n-gram model #' may be customized by specifying the data cleaning and tokenization options. #' #' The data cleaning options include removal of punctuation, stop words, extra #' space, non-dictionary words and bad words. The tokenization options include #' n-gram number and word stemming. ModelGenerator <- R6::R6Class( "ModelGenerator", inherit = Base, public = list( #' @description #' It initializes the current object. It is used to set the maximum #' n-gram number, sample size, input file name, data cleaner options, #' tokenization options and verbose option. #' @param name The model name. #' @param desc The model description. #' @param fn The model file name. #' @param df The path of the input text file. It should be the short #' file name and should be present in the data directory. #' @param n The n-gram size of the model. #' @param ssize The sample size as a proportion of the input file. #' @param dir The directory containing the input and output files. #' @param dc_opts The data cleaner options. #' @param tg_opts The token generator options. #' @param ve The level of detail in the information messages. #' @export initialize = function(name = NULL, desc = NULL, fn = NULL, df = NULL, n = 4, ssize = 0.3, dir = ".", dc_opts = list(), tg_opts = list(), ve = 0) { # The base class is initialized super$initialize(NULL, NULL, ve) # An object of class Model is created private$m <- Model$new( name = name, desc = desc, fn = fn, df = df, n = n, ssize = ssize, dir = dir, dc_opts = dc_opts, tg_opts = tg_opts, ve = ve ) }, #' @description #' It generates the model using the parameters passed to #' the object's constructor. It generates a n-gram model file and saves #' it to the model directory. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("input.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # ModelGenerator class object is created #' mg <- ModelGenerator$new( #' name = "default-model", #' desc = "1 MB size and default options", #' fn = "def-model.RDS", #' df = "input.txt", #' n = 4, #' ssize = 0.99, #' dir = ed, #' dc_opts = list(), #' tg_opts = list(), #' ve = ve #' ) #' # The n-gram model is generated #' mg$generate_model() #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() generate_model = function() { # The information message is displayed private$dh("Generating n-gram model", "-", md = 1) # The cleaned sample data file is generated private$generate_sample() # The data files are generated private$generate_data_files() # The n-gram tokens are generated private$generate_ngram_tokens() # The tp data is generated private$generate_tp_data() # The model is saved private$save_model() # The information message is shown private$dh("DONE", "=", md = 1) } ), private = list( # @field m The model object. m = NULL, # @description # Saves the model to a file save_model = function() { # The directory path dir <- private$m$get_config("dir") # The model file name ofn <- private$m$get_config("fn") # The output file path ofp <- paste0(dir, "/", ofn) # The information message is shown private$dh("Saving model", "-", md = 1) # The model object is loaded private$m$load_model() # The model object is saved to the models folder using the output # file name private$save_obj(private$m, ofp) }, # @description # Generates a cleaned sample file of given size from the # given input data file. The name of the output file is # sample-clean.txt. generate_sample = function() { # The input data file name df <- private$m$get_config("df") # The sample size ssize <- private$m$get_config("ssize") # The directory path dir <- private$m$get_config("dir") # The data cleaning options dc_opts <- private$m$get_config("dc_opts") # If the output file name is not set if (is.null(dc_opts[["output_file"]])) { # The output file name dc_opts[["output_file"]] <- paste0(dir, "/sample-clean.txt") } # The DataSampler object is created ds <- DataSampler$new(dir = dir, ve = private$ve) # Sample is taken and cleaned ds$generate_sample(df, ssize, T, F, "sample.txt", T, dc_opts) }, # @description # Generates test, train and validation files from the # cleaned sample file. The name of the output files are train.txt, # test.txt and validation.txt. generate_data_files = function() { # The directory path dir <- private$m$get_config("dir") # The DataSampler object is created ds <- DataSampler$new(dir = dir, ve = private$ve) # The training, testing and validation data sets are generated ds$generate_data("sample-clean.txt", list( train = .8, test = .1, validate = .1 )) }, # @description # Generates transition probabilities data from n-gram token # file. The transition probabilties data is saved as files. generate_tp_data = function() { # The n-gram number n <- private$m$get_config("n") # The directory path dir <- private$m$get_config("dir") # The options for generating combined transition probabilities tp_opts <- list( "n" = n, "save_tp" = T, "format" = "obj", "dir" = dir ) # The TPGenerator object is created tp <- TPGenerator$new(tp_opts, private$ve) # The transition probabilities are generated tp$generate_tp() }, # @description # Generates n-gram tokens from the cleaned data input file. # The n-gram tokens are saved as files. generate_ngram_tokens = function() { # The n-gram number n <- private$m$get_config("n") # The directory path dir <- private$m$get_config("dir") # The TokenGenerator object options tg_opts <- private$m$get_config("tg_opts") # The directory is set tg_opts$dir <- dir # The clean train data file name fn <- paste0(dir, "/train.txt") # For each n-gram number, the n-gram token file is generated for (i in 1:n) { # The n-gram number is set tg_opts$n <- i # The TokenGenerator object is created tg <- TokenGenerator$new(fn, tg_opts, private$ve) # The n-gram tokens are generated tg$generate_tokens() } } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/model-generator.R
#' Allows predicting text, calculating word probabilities and Perplexity #' #' @description #' It provides a method for predicting the new word given a set of #' previous words. It also provides a method for calculating the Perplexity #' score for a set of words. Furthermore it provides a method for calculating #' the probability of a given word and set of previous words. #' @importFrom digest digest2int #' @importFrom SnowballC wordStem ModelPredictor <- R6::R6Class( "ModelPredictor", inherit = Base, public = list( #' @description #' It initializes the current object. It is used to set the #' model file name and verbose options. #' @param mf The model file name. #' @param ve The level of detail in the information messages. #' @export initialize = function(mf, ve = 0) { # The base class is initialized super$initialize(NULL, NULL, ve) # If the model file name is not valid, then an error is thrown if (!file.exists(mf)) { private$dm("Invalid model file: ", mf, md = -1, ty = "e") } else { # The model object is read private$m <- private$read_obj(mf) } }, #' @description #' Returns the Model class object. #' @return The Model class object is returned. get_model = function() { # The model object is returned return(private$m) }, #' @description #' The Perplexity for the given sentence is calculated. For #' each word, the probability of the word given the previous words is #' calculated. The probabilities are multiplied and then inverted. The #' nth root of the result is the perplexity, where n is the number of #' words in the sentence. If the stem_words tokenization option was #' specified when creating the given model file, then the previous words #' are converted to their stems. #' @param words The list of words. #' @return The perplexity of the given list of words. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The model file name #' mfn <- paste0(ed, "/def-model.RDS") #' # ModelPredictor class object is created #' mp <- ModelPredictor$new(mf = mfn, ve = ve) #' # The sentence whoose Perplexity is to be calculated #' l <- "last year at this time i was preparing for a trip to rome" #' # The line is split in to words #' w <- strsplit(l, " ")[[1]] #' # The Perplexity of the sentence is calculated #' p <- mp$calc_perplexity(w) #' # The sentence Perplexity is printed #' print(p) #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() calc_perplexity = function(words) { # The model size n <- private$m$get_config("n") # The options for token generation tg_opts <- private$m$get_config("tg_opts") # The number of words in the sentence wl <- length(words) # The product of the word probabilities prob_prod <- 1 # For each word, the probability of the word is calculated for (i in 1:wl) { # The word word <- words[i] # The list of previous words pw <- NULL # If i is more than 1 if (i > 1) { # The start index start <- 1 # If i > self$model if (i > n) start <- i - (n - 1) # The list of previous words pw <- words[start:(i - 1)] # If the words should be stemmed if (tg_opts[["stem_words"]]) { # The previous words are stemmed pw <- wordStem(pw) } } # The word probability prob <- self$get_word_prob(word, pw) # The probability product is updated prob_prod <- prob_prod * prob } # The inverse of the number of words iwl <- 1 / wl # The nth root of the inverse of the probability product is taken p <- (1 / prob_prod) p <- p^iwl p <- round(p) return(p) }, #' @description #' Predicts the next word given a list of previous words. It #' checks the last n previous words in the transition probabilities #' data, where n is equal to 1 - n-gram size of model. If there is a #' match, the top 3 next words with highest probabilities are returned. #' If there is no match, then the last n-1 previous words are checked. #' This process is continued until the last word is checked. If there is #' no match, then empty result is returned. The given words may #' optionally be stemmed. #' @param words A character vector of previous words or a single vector #' containing the previous word text. #' @param count The number of results to return. #' @param dc A DataCleaner object. If it is given, then the given words # are cleaned #' @return The top 3 predicted words along with their probabilities. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, "rp" = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The model file name #' mfn <- paste0(ed, "/def-model.RDS") #' # ModelPredictor class object is created #' mp <- ModelPredictor$new(mf = mfn, ve = ve) #' # The next word is predicted #' nws <- mp$predict_word("today is", count = 10) #' # The predicted next words are printed #' print(nws) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() predict_word = function(words, count = 3, dc = NULL) { # The tp data is fetched from the model object tp <- private$m$get_config("tp") # The loop counter c <- 1 # The required results result <- list("found" = F, "words" = "", "probs" = "") # The previous words are fetched pw <- private$get_prev_words(words, dc) # If the previous words are NULL if (is.null(pw)) { return(result) } # The length of previous words pwl <- length(pw) # Each n-gram in the previous word list is checked starting from # largest n-gram for (i in pwl:1) { # The previous words to check tpw <- pw[c:pwl] # The key to use for the transition probabilities data k <- paste(tpw, collapse = "_") # The key is converted to a numeric hash h <- digest2int(k) # The transition probabilities data is checked res <- tp[tp$pre == h, ] # The results are checked result <- private$check_results(res, count, k) # If the data was found if (result[["found"]]) break # Information message is shown private$dm("Backing off to ", (i), "-gram\n", md = 3) # The counter is increased by 1 c <- c + 1 } return(result) }, #' @description #' Calculates the probability of the given word given the #' previous words. The last n words are converted to numeric hash using #' digest2int function. All other words are ignored. n is equal to 1 - #' size of the n-gram model. The hash is looked up in a data frame of #' transition probabilities. The last word is converted to a number by #' checking its position in a list of unique words. If the hash and the #' word position were found, then the probability of the previous word #' and hash is returned. If it was not found, then the hash of the n-1 #' previous words is taken and the processed is repeated. If the data #' was not found in the data frame, then the word probability is #' returned. This is known as back-off. If the word probability could #' not be found then the default probability is returned. The default #' probability is calculated as 1/(N+V), Where N = number of words in #' corpus and V is the number of dictionary words. #' @param word The word whose probability is to be calculated. #' @param pw The previous words. #' @return The probability of the word given the previous words. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("def-model.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, "rp" = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The model file name #' mfn <- paste0(ed, "/def-model.RDS") #' # ModelPredictor class object is created #' mp <- ModelPredictor$new(mf = mfn, ve = ve) #' # The probability that the next word is "you" given the prev words #' # "how" and "are" #' prob <- mp$get_word_prob(word = "you", pw = c("how", "are")) #' # The probability is printed #' print(prob) #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() get_word_prob = function(word, pw) { # The tp data is fetched from the model object tp <- private$m$get_config("tp") # The word list data is fetched from the model object wl <- private$m$get_config("wl") # The default probability is fetched from the model object dp <- private$m$get_config("dp") # If the default probability is not set, then an error is raised if (is.null(dp)) { private$dm( "The default probability is not set in the model file !", md = -1, ty = "e" ) } # The length of previous words pwl <- length(pw) # The probability of the word given the previous words. It is # initialized to the default probability, which should be 1/(N+V) prob <- dp # The loop counter c <- 1 # Indicates if the word was found found <- FALSE # The next word id nw <- match(word, wl$pre) # If the next word was not found if (is.na(nw)) { # Information message is shown private$dm( "The next word: ", word, " was not found\n", md = 3 ) # The default probability is returned return(prob) } # If the previous word count is 0 if (pwl == 0) { return(prob) } # The previous words are checked for (i in pwl:1) { # The previous words to check tpw <- pw[c:pwl] # The key to use for the transition matrix k <- paste(tpw, collapse = "_") # The key is converted to a numeric hash h <- digest2int(k) # The transition probabilities data is checked res <- tp[tp$pre == h & tp$nw == nw, ] # If the prefix was found if (nrow(res) > 0) { # The word was found found <- TRUE # The probability is set prob <- as.numeric(res$prob) # The information message private$dm( "The n-gram key: ", k, " and the next word: ", word, " were found\n", md = 3 ) # The loop ends break } else { # The information message private$dm( "The n-gram key: ", k, " and the next word: ", word, " were not found\n", md = 3 ) } # Information message is shown private$dm("Backing off to ", (i), "-gram\n", md = 3) # The counter is increased by 1 c <- c + 1 } # If the word was not found then the probability of the word is # checked in the n1-gram if (!found) { # If the word was not found if (sum(wl$pre == word) == 0) { # Information message is shown private$dm("Using default probability\n", md = 3) } else { # The word probability prob <- as.numeric(wl[wl$pre == word, "prob"]) } } return(prob) } ), private = list( # @field m The model object. m = NULL, # @description # Fetches the list of previous words from the given list of words. # @param words A character vector of previous words or a single vector # containing the previous word text. # @param dc A DataCleaner object. If it is given, then the given words # are cleaned. # @return The list of previous words. get_prev_words = function(words, dc) { # The options for token generation tg_opts <- private$m$get_config("tg_opts") # The words are assigned to temp variable w <- words # If the DataCleaner obj was specified if (!is.null(dc)) { # If the words is a set of vectors if (length(w) > 1) { # The words are converted to a single line of text w <- paste0(w, collapse = " ") } # The words are cleaned w <- dc$clean_lines(w) } # If the words should be stemmed if (tg_opts[["stem_words"]]) { # The previous words are stemmed w <- wordStem(w) } # If the words are in the form of a line if (length(w) == 1) { # The words are split on space w <- strsplit(w, " ")[[1]] } # The length of previous words pwl <- length(w) # If the previous words length is 0 if (pwl == 0) { return(NULL) } # If the previous word length is more than 3 if (pwl > 3) { # The last 3 words are extracted. pw <- w[(pwl - 2):pwl] } else { pw <- w } }, # @description # Checks the result from the tp table # @param res The rows from the combined tp table. # @param count The number of results to return. # @param k The key string used to search the tp table. # @return The results of checking tp table. check_results = function(res, count, k) { # The word list data is fetched from the model object wl <- private$m$get_config("wl") # The word was found found <- FALSE # The required results result <- list("found" = F, "words" = "", "probs" = "") # If the prefix was found if (nrow(res) > 0) { # The word was found found <- TRUE # The result is sorted by probability sres <- res[order(res$prob, decreasing = T), ] # The number of rows in the result set rcount <- nrow(sres) # If the number of results is more than the required number # of results if (rcount > count) { # The result count is set to the required number of # results rc <- count } else { # The result count is set to the number of results rc <- rcount } # The required word probabilities probs <- sres$prob[1:rc] # The next words indexes ind <- sres$nw[1:rc] # The required words nw <- as.character(wl$pre[ind]) # The result is updated result[["words"]] <- nw result[["probs"]] <- probs result[["found"]] <- T # Information message is shown private$dm("The n-gram key: ", k, " was found\n", md = 3) } else { private$dm("The n-gram key: ", k, " was not found\n", md = 3) # The result is updated result[["found"]] <- F } return(result) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/model-predictor.R
#' Represents n-gram models #' #' @description #' The Model class represents n-gram models. An instance of the class is a #' single n-gram model. The attributes of this class are used to store n-gram #' model information. The class provides methods for loading and saving the #' model. #' #' @details #' The attributes of this class are used to store n-gram model information such #' as model name, model description, model file name, n-gram size, transition #' probabilities data, default probability for words, data cleaning and #' tokenization options, word list, model path, data directory path and #' performance stats. The model is saved to a single file as a R object. #' #' A model file contains all the information required by the model. The model #' object is used as input by classes that perform operations on the model such #' as evaluation of model performance, text predictions and comparison of model #' performance. Model <- R6::R6Class( "Model", inherit = Base, public = list( #' @field pstats The performance stats for the model. pstats = list(), #' @field name The model name. name = NULL, #' @field desc The model description. desc = NULL, #' @description #' It initializes the current object. It is used to set the #' maximum n-gram number, sample size, input file name, data cleaner #' options, tokenization options, combined transition probabilities file #' name and verbose. #' @param name The model name. #' @param desc The model description. #' @param fn The model file name. #' @param df The name of the file used to generate the model. #' @param n The maximum n-gram number supported by the model. #' @param ssize The sample size as a proportion of the input file. #' @param dir The directory containing the model files. #' @param dc_opts The data cleaner options. #' @param tg_opts The token generator options. #' @param ve The level of detail in the information messages. #' @export initialize = function(name = NULL, desc = NULL, fn = NULL, df = NULL, n = 4, ssize = 0.3, dir = ".", dc_opts = list(), tg_opts = list(), ve = 0) { # The base class is initialized super$initialize(NULL, NULL, ve) # If the output file name is not given if (is.null(fn)) { # Error message is shown private$dm("Output file name was not given", md = -1, ty = "e") } # The path to the data file dfp <- paste0(dir, "/", df) # If the data file does not exist, then an error is thrown if (!file.exists(dfp)) { # Error message is shown private$dm("Invalid input file: ", dfp, md = -1, ty = "e") } # If the directory does not exist, then an error is thrown if (!dir.exists(dir)) { private$dm( "The dir: ", dir, " does not exist !", md = -1, ty = "e" ) } # An object of class EnvManager is created em <- EnvManager$new(ve) # The dict words file is checked dc_opts[["dict_file"]] <- em$get_data_fn( dc_opts[["dict_file"]], "dict-no-bad.txt" ) # The model name is set self$name <- name # The model description is set self$desc <- desc # The n-gram number is set private$n <- n # The sample size is set private$ssize <- ssize # The directory name is set private$dir <- dir # The input file name is set private$df <- df # The word list file name is set private$wlf <- paste0(dir, "/words.RDS") # The model file name is set private$fn <- fn # If the dc_opts are given if (length(dc_opts) > 0) { # The custom dc_opts are merged with the default dc_opts private$dc_opts <- modifyList(private$dc_opts, dc_opts) } # If the tg_opts are given if (length(tg_opts) > 0) { # The custom tg_opts are merged with the default tg_opts private$tg_opts <- modifyList(private$tg_opts, tg_opts) } }, #' @description #' It loads the model using the given information load_model = function() { # The tp file name fn <- paste0(private$dir, "/model-", private$n, ".RDS") # The tp file is read private$tp <- private$read_obj(fn) # The wl file is read private$wl <- private$read_obj(private$wlf) # The dictionary file name fn <- private$dc_opts[["dict_file"]] # The file contents dict <- private$read_file(fn, F) # The information message is shown private$dh("Calculating default probability", "-", md = 1) # The number of words in the dictionary file. It is used to # calculate Perplexity. vc <- length(dict) # The path to the input data file dfp <- paste0(private$dir, "/", private$df) # The data file is read data <- private$read_file(dfp, F) # The words are split on " " w <- strsplit(data, " ") # The words are converted to atomic list w <- unlist(w) # The number of words n <- length(w) # The default probability is set private$dp <- 1 / (n + vc) # The information message is shown private$dh("DONE", "=", md = 1) }, #' @description #' It returns the given configuration data #' @param cn The name of the required configuration. #' @return The configuration value. get_config = function(cn) { # The required configuration value cv <- private[[cn]] return(cv) }, #' @description #' It returns the size of the current object. The object #' size is calculated as the sum of sizes of the object attributes. #' @return The size of the object in bytes. get_size = function() { # The required object size s <- 0 # The tp size is added s <- s + as.numeric(object.size(private$tp)) # The wl size is added s <- s + as.numeric(object.size(private$wl)) # The dc_opts size is added s <- s + as.numeric(object.size(private$dc_opts)) # The tg_opts size is added s <- s + as.numeric(object.size(private$tg_opts)) # The pstats size is added s <- s + as.numeric(object.size(self$pstats)) return(s) } ), private = list( # @field fn The path to the model file. fn = NULL, # @field wlf The path to the word list file. wlf = NULL, # @field df The short name of the input file. df = NULL, # @field tp The transition probabilities data frame. tp = NULL, # @field wl The list of unique words. wl = NULL, # @field dp The default probability is equal to 1/(N+V), where N is the # number of words in the sentence, V is the number of words in the # vocabulary. dp = NULL, # @field n The maximum number of n-grams supported by the model. n = 4, # @field dc_opts The options for the data cleaner object. # * **min_words**. The minimum number of words per sentence. # * **line_count**. The number of lines to read and clean at a time. # * **sw_file**. The stop words file path. # * **dict_file**. The dictionary file path. # * **bad_file**. The bad words file path. # * **to_lower**. If the words should be converted to lower case. # * **remove_stop**. If stop words should be removed. # * **remove_punct**. If punctuation symbols should be removed. # * **remove_non_dict**. If non dictionary words should be removed. # * **remove_non_alpha**. If non alphabet symbols should be removed. # * **remove_extra_space**. If leading, trailing and double spaces # should be removed. # * **remove_bad**. If bad words should be removed dc_opts = list( "min_words" = 2, "line_count" = 1000, "sw_file" = NULL, "dict_file" = NULL, "bad_file" = NULL, "to_lower" = T, "remove_stop" = F, "remove_punc" = T, "remove_non_dict" = T, "remove_non_alpha" = T, "remove_extra_space" = T, "remove_bad" = F ), # @field tg_opts The options for the token generator obj. # * **n**. The n-gram size. # * **save_ngrams**. If the n-gram data should be saved. # * **min_freq**. All n-grams with frequency less than min_freq are # ignored. # * **line_count**. The number of lines to process at a time. # * **stem_words**. If words should be converted to their stem. # * **dir**. The dir where the output file should be saved. # * **format**. The format for the output. There are two options. # ** **plain**. The data is stored in plain text. # ** **obj**. The data is stored as a R obj. tg_opts = list( "min_freq" = -1, "n" = 1, "save_ngrams" = T, "min_freq" = -1, "line_count" = 5000, "stem_words" = F, "dir" = "./data/models", "format" = "obj" ), # @field ssize The sample size as a proportion of the input file. ssize = 0.3, # @field dir The folder containing the model related files. dir = "./data" ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/model.R
#' Generates n-grams from text files #' #' @description #' It generates n-gram tokens along with their frequencies. The data #' may be saved to a file in plain text format or as a R object. #' #' @importFrom SnowballC wordStem #' @importFrom dplyr group_by summarize_all %>% TokenGenerator <- R6::R6Class( "TokenGenerator", inherit = Base, public = list( #' @description #' It initializes the current obj. It is used to set the file name, #' tokenization options and verbose option. #' @param fn The path to the input file. #' @param opts The options for generating the n-gram tokens. #' * **n**. The n-gram size. #' * **save_ngrams**. If the n-gram data should be saved. #' * **min_freq**. All n-grams with frequency less than min_freq are #' ignored. #' * **line_count**. The number of lines to process at a time. #' * **stem_words**. If words should be transformed to their stems. #' * **dir**. The dir where the output file should be saved. #' * **format**. The format for the output. There are two options. #' * **plain**. The data is stored in plain text. #' * **obj**. The data is stored as a R obj. #' @param ve The level of detail in the information messages. #' @export initialize = function(fn = NULL, opts = list(), ve = 0) { # The given options are merged with the opts attribute private$tg_opts <- modifyList(private$tg_opts, opts) # The base class is initialized super$initialize(fn, private$tg_opts$line_count, ve) # The processed output is initialized private$p_output <- NULL }, #' @description #' It generates n-gram tokens and their frequencies from the #' given file name. The tokens may be saved to a text file as plain text #' or a R object. #' @return The data frame containing n-gram tokens along with their #' frequencies. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("test-clean.txt") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The n-gram size #' n <- 4 #' # The test file name #' tfn <- paste0(ed, "/test-clean.txt") #' # The n-gram number is set #' tg_opts <- list("n" = n, "save_ngrams" = TRUE, "dir" = ed) #' # The TokenGenerator object is created #' tg <- TokenGenerator$new(tfn, tg_opts, ve = ve) #' # The n-gram tokens are generated #' tg$generate_tokens() #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() generate_tokens = function() { # The processed output is initialized private$p_output <- NULL # The output file name fn <- private$get_file_name() # If the output file already exists if (file.exists(fn)) { # The information message is shown private$dm("The ", private$tg_opts[["n"]], "-gram file already exists\n", md = 1, ty = "w" ) # If the n-gram data should not be saved if (!private$tg_opts[["save_ngrams"]]) { # The n-grams file is read private$p_output <- private$read_data( fn, private$tg_opts[["format"]], T ) } } else { # The information message msg <- paste0("Generating ", private$tg_opts[["n"]]) msg <- paste0(msg, "-gram tokens") # The information message is shown private$dh(msg, "-", md = 1) # The base class process_file function is called private$process_file( private$pre_process, private$process, private$post_process ) # The information message is shown private$dh("DONE", "=", md = 1) } } ), private = list( # @field tg_opts The options for the token generator obj. # * **n**. The n-gram size. # * **save_ngrams**. If the n-gram data should be saved. # * **min_freq**. All n-grams with frequency less than min_freq are # ignored. # * **stem_words**. If words should be transformed to their stems. # * **line_count**. The number of lines to process at a time. # * **dir**. The dir where the output file should be saved. # * **format**. The format for the output. There are two options. # * **plain**. The data is stored in plain text. # * **obj**. The data is stored as a R obj. tg_opts = list( "n" = 1, "save_ngrams" = F, "stem_words" = F, "min_freq" = -1, "line_count" = 5000, "dir" = NULL, "format" = "obj" ), # @description # Performs processing for the generate_tokens function. It # processes the given line of text. It converts each line of text into # n-grams of the given size. The frequency of each n-gram is updated. # @param lines The lines of text. process = function(lines) { # n-grams are extracted from each line ngrams <- private$generate_ngrams(lines) # The n-gram words are appended to the processed output private$p_output <- c(private$p_output, ngrams) }, # @description # It returns the name of the output n-gram file. get_file_name = function() { # The n-gram number n <- private$tg_opts[["n"]] # The format fo <- private$tg_opts[["format"]] # The output directory dir <- private$tg_opts[["dir"]] # The file extension if (fo == "plain") { ext <- ".txt" } else { ext <- ".RDS" } # The file name file_name <- paste0(dir, "/n", n, ext) return(file_name) }, # @description # It saves the n-gram tokens and their frequencies to a text file. post_process = function() { # The information message msg <- paste0("Calculating ", private$tg_opts[["n"]]) msg <- paste0(msg, "-gram frequencies") # The information message is shown private$dm(msg, md = 1) # The output is copied to a variable df <- data.frame("pre" = private$p_output) # A frequency column is added df$freq <- 1 # Each prefix is grouped and summed df <- df %>% group_by(pre) %>% summarize_all(sum) # The information message is shown private$dm(" \u2714\n", md = 1) # If the minimum n-gram frequency is given if (private$tg_opts[["min_freq"]] > -1) { # The information message is shown private$dm("Removing low frequency n-grams", md = 1) # All n-grams with frequency less than min_freq are ignored df <- df[df$freq >= private$tg_opts[["min_freq"]], ] # The information message is shown private$dm(" \u2714\n", md = 1) } # The column names are set colnames(df) <- c("pre", "freq") # The output is set to the updated variable private$p_output <- df # If the n-gram data should be saved if (private$tg_opts[["save_ngrams"]]) { # The required file name fn <- private$get_file_name() # The format fo <- private$tg_opts[["format"]] # The n-gram data frame is written to file private$write_data(private$p_output, fn, fo, F) } # If n-gram data should not be saved else { return(private$p_output) } }, # @description # It generates n-gram frequencies for the given lines of text. # @param lines The lines of text to process generate_ngrams = function(lines) { # The n-gram number n <- private$tg_opts[["n"]] # If n > 1 if (n > 1) { # Trailing and leading white space is removed l <- trimws(lines, "both") # Start and end of sentence tags are added l <- gsub("(^)(.+)($)", "<s>\\2<e>", l) # The lines are split on space w <- strsplit(l, " ") # The words are converted to an atomic vector w <- unlist(w) # The index of empty words i <- (w == "") # The empty words are removed w <- w[!i] # The indexes for the words indexes <- seq(length(w)) # The n-grams are generated l <- sapply(indexes, function(i) { # If the words should be stemmed if (private$tg_opts[["stem_words"]]) { # The n-gram prefix words are stemmed. The next word is # not stemmed v <- c(wordStem(w[i:(i + n - 2)]), w[(i + n - 1)]) } else { # The n-gram token v <- w[i:(i + n - 1)] } # The n-gram token v <- paste0(v, collapse = "_") # The n-gram token is returned return(v) }, simplify = T ) # Invalid n-grams need to be removed # A logical vector indicating position of invalid n-grams i <- grepl(".+<e>.+", l) # The list of valid n-grams l <- l[!i] # The start of sentence tokens are removed l <- gsub("<s>", "", l) # The end of sentence tokens are removed l <- gsub("<e>", "", l) } else { # The line is split on " " words <- strsplit(lines, " ") # The list of words is converted to atomic vector l <- unlist(words) # The index of empty words i <- l == "" # The empty words are removed l <- l[!i] } return(l) } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/token-generator.R
#' Generates transition probabilities for n-grams #' #' @description #' It provides a method for generating transition probabilities for #' the given n-gram size. It also provides a method for generating the combined #' transition probabilities data for n-gram sizes from 1 to the given size. The #' combined transition probabilities data can be used to implement back-off. #' #' @details #' It provides a method for generating n-gram transition probabilities. #' It reads n-gram frequencies from an input text file that is generated by the #' TokenGenerator class. #' #' It parses each n-gram into a prefix, a next word, the next word frequency and #' the next word probability. Maximum Likelihood count is used to generate the #' next word probabilities. #' #' Each n-gram prefix is converted to a numeric hash using the digest2int #' function. The next word is replaced with the position of the next word in the #' list of all words. The transition probabilities data is stored as a dataframe #' in a file. #' #' Another method is provided that combines the transition probabilities for #' n-grams of size 1 to the given size. The combined transition probabilities #' can be saved to a file as a data frame. This file may be regarded as a #' completed self contained n-gram model. By combining the transition #' probabilities of n-grams, back-off may be used to evaluate word probabilities #' or predict the next word. #' @importFrom stringr str_match #' @importFrom digest digest2int #' @importFrom dplyr group_by mutate TPGenerator <- R6::R6Class( "TPGenerator", inherit = Base, public = list( #' @description #' It initializes the current obj. It is used to set the #' transition probabilities options and verbose option. #' @param opts The options for generating the transition probabilities. #' * **save_tp**. If the data should be saved. #' * **n**. The n-gram size. #' * **dir**. The directory containing the input and output files. #' * **format**. The format for the output. There are two options. #' * **plain**. The data is stored in plain text. #' * **obj**. The data is stored as a R obj. #' @param ve The level of detail in the information messages. #' @export initialize = function(opts = list(), ve = 0) { # The given options are merged with the opts attribute private$tp_opts <- modifyList(private$tp_opts, opts) # The base class is initialized super$initialize(NULL, NULL, ve) # The processed output is initialized private$p_output <- data.frame() }, #' @description #' It first generates the transition probabilities for each #' n-gram of size from 1 to the given size. The transition probabilities #' are then combined into a single data frame and saved to the output #' folder that is given as parameter to the current object. #' #' By combining the transition probabilities for all n-gram sizes from 1 #' to n, back-off can be used to calculate next word probabilities or #' predict the next word. #' @examples #' # Start of environment setup code #' # The level of detail in the information messages #' ve <- 0 #' # The name of the folder that will contain all the files. It will be #' # created in the current directory. NULL implies tempdir will be used #' fn <- NULL #' # The required files. They are default files that are part of the #' # package #' rf <- c("n1.RDS", "n2.RDS", "n3.RDS", "n4.RDS") #' # An object of class EnvManager is created #' em <- EnvManager$new(ve = ve, rp = "./") #' # The required files are downloaded #' ed <- em$setup_env(rf, fn) #' # End of environment setup code #' #' # The list of output files #' fns <- c("words", "model-4", "tp2", "tp3", "tp4") #' #' # The TPGenerator object is created #' tp <- TPGenerator$new(opts = list(n = 4, dir = ed), ve = ve) #' # The combined transition probabilities are generated #' tp$generate_tp() #' #' # The test environment is removed. Comment the below line, so the #' # files generated by the function can be viewed #' em$td_env() generate_tp = function() { # The information message msg <- paste0("Generating Transition Probabilities for n = ") msg <- paste0(msg, "1:", private$tp_opts[["n"]]) # Information message is shown private$dh(msg, "-", md = 1) # The processed output is cleared private$p_output <- data.frame() # The output format fo <- private$tp_opts[["format"]] # The n-gram number nmax <- private$tp_opts[["n"]] # The file extension if (fo == "plain") { ext <- ".txt" } else { ext <- ".RDS" } # The short output file name fn <- paste0("model-", nmax, ext) # The model file name path fp <- paste0(private$tp_opts[["dir"]], "/", fn) # If the combined tp file already exists if (file.exists(fp)) { # Information message is shown private$dm( "The output file: ", fp, " already exists\n", md = 1, ty = "w" ) } else { # The options for generating transition probabilities tp_opts <- list( n = 1, format = fo, save_tp = T, dir = private$tp_opts[["dir"]] ) # The combined tp data c_pre <- c_nw <- c_prob <- c() # For each n-gram number, the transition probabilities data is # generated. for (n in 1:nmax) { # The value of n is set tp_opts$n <- n # The transition probabilities or word list is generated self$generate_tp_for_n(n) # If n > 1 if (n > 1) { # c_pre is updated c_pre <- c(c_pre, private$p_output$pre) # c_nw is updated c_nw <- c(c_nw, private$p_output$nw) # c_prob is updated c_prob <- c(c_prob, private$p_output$prob) # The processed output is cleared private$p_output <- data.frame() } } # The processed output is set to the combined tp data private$p_output <- data.frame( "pre" = c_pre, "nw" = c_nw, "prob" = c_prob ) # If the data should be saved if (private$tp_opts[["save_tp"]]) { private$save_data(fn) } # Information message is shown private$dh("DONE", "=", md = 1) } }, #' @description #' It generates the transition probabilities table for the #' given n-gram size. It first reads n-gram token frequencies from an #' input text file. #' #' It then generates a data frame whose columns are the #' n-gram prefix, next word and next word frequency. The data frame may #' be saved to a file as plain text or as a R obj. If n = 1, then the #' list of words is saved. #' @param n The n-gram size for which the tp data is generated. generate_tp_for_n = function(n) { # The n value is set private$tp_opts[["n"]] <- n # The output format fo <- private$tp_opts[["format"]] # The output file name fn <- private$get_file_name(T) # If the output file already exists if (file.exists(fn)) { # The information message is shown private$dm( "The file: ", fn, " already exists", md = 1, ty = "w" ) # The file is read data <- private$read_data(fn, fo, T) # If n = 1 if (n == 1) { # The word list is set to the data private$wl <- data } else { # The processed output is set to the data private$p_output <- data } } else { # The information message msg <- paste0( "Generating transition probabilities for n = ", n) # Information message is shown private$dh(msg, "-", md = 1) # The input file name private$fn <- private$get_file_name(F) # The data is read df <- private$read_data(private$fn, fo, T) # If n = 1 if (n == 1) { # The word list is set to the data frame private$wl <- df # A probabilities column is added private$wl$prob <- (private$wl$freq / sum(private$wl$freq)) # The probabilities are rounded to 8 decimal places private$wl$prob <- round(private$wl$prob, 8) # The frequency column is removed private$wl$freq <- NULL } else { # The 1-gram words are read private$read_words() # The lines are split on "prefix_nextword:frequency" m <- str_match(df$pre, "(.+)_(.+)") # The hash of the prefix is taken np <- digest2int(m[, 2]) # The next word id based on index position nw <- match(m[, 3], private$wl$pre) # The next word frequencies nf <- df$freq # The data is added to a data frame df <- data.frame( "pre" = np, "nw" = nw, "freq" = nf ) # The processed output is set to the data frame private$p_output <- df # The next word probabilities are generated private$generate_probs() # The frequency column is removed private$p_output$freq <- NULL } # If the data should be saved if (private$tp_opts[["save_tp"]]) { private$save_data() } # Information message is shown private$dh("DONE", "=", md = 1) } } ), private = list( # @field tp_opts The options for generating the transition # probabilities. # * **save_tp**. If the data should be saved. # * **n**. The n-gram number # * **dir**. The directory containing the input and output files. # * **format**. The format for the output. There are two options. # * **plain**. The data is stored in plain text. # * **obj**. The data is stored as a R obj. tp_opts = list( "save_tp" = T, "n" = 1, "dir" = "./data/model", "format" = "obj" ), # @field The list of unique words and their frequencies wl = data.frame(), # @description # It calculates the next word probabilities and optionally # saves the transition probability data to a file. generate_probs = function() { # The n-gram number n <- private$tp_opts[["n"]] # If n > 1 if (n > 1) { # The output is copied to a variable df <- private$p_output # A new probability column is added. It is set to the sum of # frequency column for each prefix group. df <- df %>% group_by(pre) %>% mutate(prob = sum(freq)) # Each frequency is divided by the sum to give the probability. df$prob <- round(df$freq / df$prob, 8) # The output is set to the updated variable private$p_output <- df } }, # @description # It returns the name of the output or input file. # @param is_output If the output file name is required. get_file_name = function(is_output) { # The n-gram number n <- private$tp_opts[["n"]] # The directory od <- private$tp_opts[["dir"]] # The format fo <- private$tp_opts[["format"]] # The file extension if (fo == "plain") { ext <- ".txt" } else { ext <- ".RDS" } # If the output file name is required if (is_output) { # If n = 1 if (n == 1) { # The file name fn <- paste0(od, "/words", ext) } # If n > 1 else if (n > 1) { # The file name fn <- paste0(od, "/tp", n, ext) } } else { # The file name fn <- paste0(od, "/n", n, ext) } return(fn) }, # @description # It saves the transition probabilities to a file in plain format or as # a R obj. If the file name is not given, then it is generated using the # current object attributes. # @param fn The file name to use. save_data = function(fn = NULL) { # The n-gram number n <- private$tp_opts[["n"]] # The directory od <- private$tp_opts[["dir"]] # The format fo <- private$tp_opts[["format"]] # If n = 1 if (n == 1) { # The data to save data <- private$wl } # If n > 1 else if (n > 1) { # The data to save data <- private$p_output } # If the file name is given as parameter then it is used if (!is.null(fn)) { fn <- paste0(od, "/", fn) } else { fn <- private$get_file_name(T) } # The data is written private$write_data(data, fn, fo, F) }, # @description # It reads the list of 1-gram words. read_words = function() { # If the word list has not been read if (nrow(private$wl) == 0) { # The format fo <- private$tp_opts[["format"]] # The file extension if (fo == "plain") { ext <- ".txt" } else { ext <- ".RDS" } # The 1-gram words file name fn <- paste0(private$tp_opts[["dir"]], "/words", ext) # The words are read private$wl <- private$read_data( fn, private$tp_opts[["format"]], F ) } } ) )
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/tp-generator.R
#' @keywords internal "_PACKAGE" # The following block is used by usethis to automatically manage # roxygen namespace tags. Modify with care! ## usethis namespace: start ## usethis namespace: end NULL
/scratch/gouwar.j/cran-all/cranData/wordpredictor/R/wordpredictor-package.R
# This is the demo word-predictor application. You can run the application by # clicking 'Run App' above. # # The application allows users to enter a set of words. For the given words the # application attempts to predict the top ten most likely words. These words are # presented in a bar plot along with the respective probabilities. # # Find out more about building applications with Shiny here: # # http://shiny.rstudio.com/ library(shiny) library(ggplot2) library(wordpredictor) # Define UI for application that draws a histogram ui <- fluidPage( # Application title titlePanel("Word Predictor"), # Horizontal rule hr(), # Sidebar with a slider input for number of bins sidebarLayout( sidebarPanel( # The input field textInput("ngram", "Enter a n-gram:", value = "where is") ), # Show a plot of the possible predicted words mainPanel( # The predicted word uiOutput("next_word"), # The predicted word probability uiOutput("word_prob"), # Horizontal rule hr(), # The bar plot of possible next words plotOutput("next_word_plot") ) ) ) # Define server logic required to draw a histogram server <- function(input, output) { # The model file path sfp <- system.file("extdata", "def-model.RDS", package = "wordpredictor") # The ModelPredictor object is created mp <- ModelPredictor$new(sfp) # The predicted word information p <- NULL # The next word is predicted output$next_word <- renderUI({ # If the user entered some text if (trimws(input$ngram) != "") { # The text entered by the user is split on space w <- trimws(input$ngram) # The next word is predicted p <- mp$predict_word(w, 10) # If the next word was not found if (!p$found) { # The next word and next word is set to an information # message nw <- span("Not Found", style = "color:red") # The next word probability is set to an information # message nwp <- span("N.A", style = "color:red") # The plot is set to empty output$next_word_plot <- renderPlot({}) # The predicted next word nw <- tags$div("Predicted Word: ", tags$strong(nw)) # The predicted next word probability nwp <- tags$div("Word Probability: ", tags$strong(nwp)) # The next word probability is updated output$word_prob <- renderUI(nwp) } else { # The next word nw <- p$words[[1]] # The next word probability nwp <- p$probs[[1]] # The plot is updated output$next_word_plot <- renderPlot({ # A data frame containing the data to plot df <- data.frame("word" = p$words, "prob" = p$probs) # The data frame is sorted in descending order df <- (df[order(df$prob, decreasing = T),]) # The words and their probabilities are plotted g <- ggplot(data = df, aes(x = reorder(word, prob), y = prob)) + geom_bar(stat = "identity", fill = "red") + ggtitle("Predicted words and their probabilities") + ylab("Probability") + xlab("Word") print(g) }) # The predicted next word nw <- tags$div("Predicted Word: ", tags$strong(nw)) # The predicted next word probability nwp <- tags$div("Word Probability: ", tags$strong(nwp)) # The next word probability is updated output$word_prob <- renderUI(nwp) } } else { # The next word is set to "" nw <- tags$span() # The next word probability text is set to "" output$word_prob <- renderUI(tags$span()) # The plot is set to empty output$next_word_plot <- renderPlot({}) } return(nw) }) } # Run the application shinyApp(ui = ui, server = server)
/scratch/gouwar.j/cran-all/cranData/wordpredictor/demo/word-predictor.R
## ----echo=FALSE, results='hide'----------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "reference/figures/" ) ## ----setup, echo=FALSE, results='hide', message=FALSE------------------------- library(wordpredictor) # The level of verbosity in the information messages ve <- 0 #' @description #' Used to setup the test environment #' @param rf The required files. #' @param ve The verbosity level. #' @return The list of directories in the test environment setup_env <- function(rf, ve) { # An object of class EnvManager is created em <- EnvManager$new(rp = "../", ve = ve) # The required files are downloaded ed <- em$setup_env(rf) return(ed) } #' @description #' Used to clean up the test environment clean_up <- function(ve) { # An object of class EnvManager is created em <- EnvManager$new(ve = ve) # The test environment is removed em$td_env(T) } ## ----data-exploration, cache=FALSE-------------------------------------------- # The required files rf <- c( "test.txt", "validate.txt", "validate-clean.txt", "test-clean.txt" ) # The test environment is setup ed <- setup_env(rf, ve) # The DataAnalyzer object is created da <- DataAnalyzer$new(ve = ve) # Information on all text files in the ed folder is returned fi <- da$get_file_info(ed) # The file information is printed print(fi) # The test environment is cleaned up clean_up(ve) ## ----data-sampling-1, cache=FALSE--------------------------------------------- # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The sample size as a proportion of the input.txt file ssize <- 0.1 # The data file path dfp <- paste0(ed, "/input.txt") # The object size is formatted obj_size <- file.size(dfp)/10^6 # The proportion of data to sample prop <- (ssize/obj_size) # An object of class DataSampler is created ds <- DataSampler$new(dir = ed, ve = ve) # The sample file is generated. # The randomized sample is saved to the file train.txt in the ed folder ds$generate_sample( fn = "input.txt", ss = prop, ic = F, ir = T, ofn = "train.txt", is = T ) # The test environment is cleaned up clean_up(ve) ## ----data-sampling-2, cache=FALSE--------------------------------------------- # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # An object of class DataSampler is created ds <- DataSampler$new(dir = ed, ve = ve) # The train, test and validation files are generated ds$generate_data( fn = "input.txt", percs = list( "train" = 0.8, "test" = 0.1, "validate" = 0.1 ) ) # The test environment is cleaned up clean_up(ve) ## ----data-cleaning, cache=FALSE----------------------------------------------- # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The data file path fn <- paste0(ed, "/input.txt") # The clean file path cfn <- paste0(ed, "/input-clean.txt") # The data cleaning options dc_opts = list( "min_words" = 2, "to_lower" = T, "remove_stop" = F, "remove_punct" = T, "remove_non_dict" = T, "remove_non_alpha" = T, "remove_extra_space" = T, "remove_bad" = F, "output_file" = cfn ) # The data cleaner object is created dc <- DataCleaner$new(fn, dc_opts, ve = ve) # The sample file is cleaned and saved as input-clean.txt in the ed dir dc$clean_file() # The test environment is cleaned up clean_up(ve) ## ----tokenization-1, cache=FALSE---------------------------------------------- # The required files rf <- c("test-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The test file path fn <- paste0(ed, "/test-clean.txt") # The n-grams are generated for (n in 1:4) { # The ngram number is set tg_opts = list("n" = n, "save_ngrams" = T, dir = ed) # The TokenGenerator object is created tg <- TokenGenerator$new(fn, tg_opts, ve = ve) # The ngram tokens are generated tg$generate_tokens() } # The test environment is cleaned up clean_up(ve) ## ----tokenization-2, cache=FALSE, out.width="70%", out.height="70%"----------- # The required files rf <- c("n2.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The ngram file name fn <- paste0(ed, "/n2.RDS") # The DataAnalyzer object is created da <- DataAnalyzer$new(fn, ve = ve) # The top features plot is checked df <- da$plot_n_gram_stats(opts = list( "type" = "top_features", "n" = 10, "save_to" = "png", "dir" = "./reference/figures" )) # The output file path fn <- paste0("./reference/figures/top_features.png") knitr::include_graphics(fn) # The test environment is cleaned up clean_up(ve) ## ----tokenization-3, cache=FALSE, out.width="70%", out.height="70%"----------- # The required files rf <- c("n2.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The ngram file name fn <- paste0(ed, "/n2.RDS") # The DataAnalyzer object is created da <- DataAnalyzer$new(fn, ve = ve) # The top features plot is checked df <- da$plot_n_gram_stats(opts = list( "type" = "coverage", "n" = 10, "save_to" = "png", "dir" = "./reference/figures" )) # The output file path fn <- paste0("./reference/figures/coverage.png") knitr::include_graphics(fn) # The test environment is cleaned up clean_up(ve) ## ----tokenization-4, cache=FALSE---------------------------------------------- # The required files rf <- c("n2.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The ngram file name fn <- paste0(ed, "/n2.RDS") # The DataAnalyzer object is created da <- DataAnalyzer$new(ve = ve) # Bi-grams starting with "and_" are returned df <- da$get_ngrams(fn = fn, c = 10, pre = "^and_*") # The data frame is sorted by frequency df <- df[order(df$freq, decreasing = T),] # The first 10 rows of the data frame are printed knitr::kable(df[1:10,], col.names = c("Prefix", "Frequency")) # The test environment is cleaned up clean_up(ve) ## ----transition-probabilities, cache=FALSE------------------------------------ # The required files rf <- c("n1.RDS", "n2.RDS", "n3.RDS", "n4.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The TPGenerator object is created tp <- TPGenerator$new(opts = list(n = 4, dir = ed), ve = ve) # The combined transition probabilities are generated tp$generate_tp() # The test environment is cleaned up clean_up(ve) ## ----generate-model, results='hide', cache=FALSE------------------------------ # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The following code generates n-gram model using default options for data # cleaning and tokenization. See the following section on how to customize these # options. Note that input.txt is the name of the input data file. It should be # present in the data directory. dir is the directory containing the input and output files. It is set to the path of the environment directory, ed. # ModelGenerator class object is created mg <- ModelGenerator$new( name = "def-model", desc = "N-gram model generating using default options", fn = "def-model.RDS", df = "input.txt", n = 4, ssize = 0.1, dir = ed, dc_opts = list(), tg_opts = list(), ve = ve ) # Generates n-gram model. The output is the file def-model.RDS mg$generate_model() # The test environment is cleaned up clean_up(ve) ## ----model-evaluation-1, cache=FALSE------------------------------------------ # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 20 lines stats <- me$intrinsic_evaluation(lc = 20, fn = vfn) # The test environment is cleaned up clean_up(ve) ## ----model-evaluation-2, cache=FALSE------------------------------------------ # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 100 lines stats <- me$extrinsic_evaluation(lc = 100, fn = vfn) # The test environment is cleaned up clean_up(ve) ## ----predict-word, cache=FALSE------------------------------------------------ # The required files rf <- c("def-model.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # An object of class ModelPredictor is created. The mf parameter is the name of # the model file that was generated in the previous example. mp <- ModelPredictor$new(mf = mfn, ve = ve) # Given the words: "how are", the next word is predicted. The top 3 most likely # next words are returned along with their respective probabilities. res <- mp$predict_word(words = "how are", 3) # The test environment is cleaned up clean_up(ve)
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/doc/features.R
--- title: "Features" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Features} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r echo=FALSE, results='hide'} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "reference/figures/" ) ``` ```{r setup, echo=FALSE, results='hide', message=FALSE} library(wordpredictor) # The level of verbosity in the information messages ve <- 0 #' @description #' Used to setup the test environment #' @param rf The required files. #' @param ve The verbosity level. #' @return The list of directories in the test environment setup_env <- function(rf, ve) { # An object of class EnvManager is created em <- EnvManager$new(rp = "../", ve = ve) # The required files are downloaded ed <- em$setup_env(rf) return(ed) } #' @description #' Used to clean up the test environment clean_up <- function(ve) { # An object of class EnvManager is created em <- EnvManager$new(ve = ve) # The test environment is removed em$td_env(T) } ``` ## Introduction This document describes all the features provided by the **wordpredictor** package. It first describes how to generate n-gram models. Next it describes how to evaluate the performance of the n-gram models. Finally it describes how to make word predictions using the n-gram model. ## Model Generation The **wordpredictor** package provides several classes that can be used to generate n-gram models. These classes may be used to generate n-gram models step by step. An alternative is to use the **ModelGenerator** class which combines all the steps and provides a single method for generating n-gram models. The following steps are involved in generating n-gram models: ### Data Exploration The first step in generating a n-gram model is data exploration. This involves determining the type of textual content and various text related statistics. The type of text may be news content, blog posts, Twitter feeds, product reviews, customer chat history etc. Example of text related statistics are line count, word count, average line length and input file size. It is also important to determine the unwanted words and symbols in the data such as vulgar words, punctuation symbols, non-alphabetical symbols etc. The **wordpredictor** package provides the **DataAnalyzer** class which can be used to find out statistics about the input data. The following example shows how to get statistics on all text files within a folder: ```{r data-exploration, cache=FALSE} # The required files rf <- c( "test.txt", "validate.txt", "validate-clean.txt", "test-clean.txt" ) # The test environment is setup ed <- setup_env(rf, ve) # The DataAnalyzer object is created da <- DataAnalyzer$new(ve = ve) # Information on all text files in the ed folder is returned fi <- da$get_file_info(ed) # The file information is printed print(fi) # The test environment is cleaned up clean_up(ve) ``` The word count of a text file can be fetched using the command: `cat file-name | wc -w`. This command should work on all Unix based systems. ### Data Sampling The next step is to generate training, testing and validation samples from the input text file. If there are many input text files, then they can be combined to a single file using the command: `cat file-1 file-2 file3 > output-file`. The contents of the combined text file may need to be randomized. The **wordpredictor** package provides the **DataSampler** class which can be used to generate a random sample containing given number of lines. The following example shows how to generate a random sample of size 10 Mb from an input text file: ```{r data-sampling-1, cache=FALSE} # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The sample size as a proportion of the input.txt file ssize <- 0.1 # The data file path dfp <- paste0(ed, "/input.txt") # The object size is formatted obj_size <- file.size(dfp)/10^6 # The proportion of data to sample prop <- (ssize/obj_size) # An object of class DataSampler is created ds <- DataSampler$new(dir = ed, ve = ve) # The sample file is generated. # The randomized sample is saved to the file train.txt in the ed folder ds$generate_sample( fn = "input.txt", ss = prop, ic = F, ir = T, ofn = "train.txt", is = T ) # The test environment is cleaned up clean_up(ve) ``` Usually we need a train data set for generating the n-gram model. A test data set for testing the model and a validation data set for evaluating the performance of the model. The following example shows how to generate the train, test and validation files. The train file contains the first 80% of the lines, the test set contains the next 10% of the lines. The remaining lines are in the validation set. The data in the validation file must be different from the data in the train file. Otherwise it can result in over-fitting of the model. When a model is over-fitted, the model evaluation results will be exaggerated, overly optimistic and unreliable. So care should be taken to ensure that the data in the validation and train files is different. ```{r data-sampling-2, cache=FALSE} # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # An object of class DataSampler is created ds <- DataSampler$new(dir = ed, ve = ve) # The train, test and validation files are generated ds$generate_data( fn = "input.txt", percs = list( "train" = 0.8, "test" = 0.1, "validate" = 0.1 ) ) # The test environment is cleaned up clean_up(ve) ``` In the above example, **dir** parameter is the directory containing the **input.txt** file and the generated test, validation and train data files. ### Data Cleaning The next step is to remove unwanted symbols and words from the input text file. This reduces the memory requirement of the n-gram model and makes it more efficient. Example of unwanted words are vulgar words, words that are not part of the vocabulary, punctuation, numbers, non-printable characters and extra spaces. The **wordpredictor** package provides the **DataCleaner** class which can be used to remove unwanted words and symbols from text files. The following example shows how to clean a given text file: ```{r data-cleaning, cache=FALSE} # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The data file path fn <- paste0(ed, "/input.txt") # The clean file path cfn <- paste0(ed, "/input-clean.txt") # The data cleaning options dc_opts = list( "min_words" = 2, "to_lower" = T, "remove_stop" = F, "remove_punct" = T, "remove_non_dict" = T, "remove_non_alpha" = T, "remove_extra_space" = T, "remove_bad" = F, "output_file" = cfn ) # The data cleaner object is created dc <- DataCleaner$new(fn, dc_opts, ve = ve) # The sample file is cleaned and saved as input-clean.txt in the ed dir dc$clean_file() # The test environment is cleaned up clean_up(ve) ``` The **clean_file** method reads a certain number of lines at a time, cleans the lines of text and saves them to an output text file. It can be used for cleaning large text files. ### Tokenization The next step is to generate n-gram tokens from the cleaned text file. The **TokenGenerator** class allows generating n-gram tokens of given size from a given input text file. The following example shows how to generate n-grams tokens of size 1,2,3 and 4: ```{r tokenization-1, cache=FALSE} # The required files rf <- c("test-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The test file path fn <- paste0(ed, "/test-clean.txt") # The n-grams are generated for (n in 1:4) { # The ngram number is set tg_opts = list("n" = n, "save_ngrams" = T, dir = ed) # The TokenGenerator object is created tg <- TokenGenerator$new(fn, tg_opts, ve = ve) # The ngram tokens are generated tg$generate_tokens() } # The test environment is cleaned up clean_up(ve) ``` The above code generates the files **n1.RDS, n2.RDS, n3.RDS and n4.RDS** in the data directory. These files contains n-gram tokens along with their frequencies. N-grams of larger size provide more context. Usually n-grams of size 4 are generated. Two important customization options supported by the **TokenGenerator** class are **min_freq** and **stem_words**. **min_freq** sets minimum frequency for n-gram tokens. All n-gram tokens with frequency less than **min_freq** are excluded. The **stem_words** option is used to transform n-gram prefix components to their stems. The next word is not transformed. The n-gram token frequencies may be analyzed using the **DataAnalyzer** class. The following example displays the top most occurring 2-gram tokens: ```{r tokenization-2, cache=FALSE, out.width="70%", out.height="70%"} # The required files rf <- c("n2.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The ngram file name fn <- paste0(ed, "/n2.RDS") # The DataAnalyzer object is created da <- DataAnalyzer$new(fn, ve = ve) # The top features plot is checked df <- da$plot_n_gram_stats(opts = list( "type" = "top_features", "n" = 10, "save_to" = "png", "dir" = "./reference/figures" )) # The output file path fn <- paste0("./reference/figures/top_features.png") knitr::include_graphics(fn) # The test environment is cleaned up clean_up(ve) ``` The following example shows the distribution of word frequencies: ```{r tokenization-3, cache=FALSE, out.width="70%", out.height="70%"} # The required files rf <- c("n2.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The ngram file name fn <- paste0(ed, "/n2.RDS") # The DataAnalyzer object is created da <- DataAnalyzer$new(fn, ve = ve) # The top features plot is checked df <- da$plot_n_gram_stats(opts = list( "type" = "coverage", "n" = 10, "save_to" = "png", "dir" = "./reference/figures" )) # The output file path fn <- paste0("./reference/figures/coverage.png") knitr::include_graphics(fn) # The test environment is cleaned up clean_up(ve) ``` The following example returns top 10 2-gram tokens that start with **and_**: ```{r tokenization-4, cache=FALSE} # The required files rf <- c("n2.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The ngram file name fn <- paste0(ed, "/n2.RDS") # The DataAnalyzer object is created da <- DataAnalyzer$new(ve = ve) # Bi-grams starting with "and_" are returned df <- da$get_ngrams(fn = fn, c = 10, pre = "^and_*") # The data frame is sorted by frequency df <- df[order(df$freq, decreasing = T),] # The first 10 rows of the data frame are printed knitr::kable(df[1:10,], col.names = c("Prefix", "Frequency")) # The test environment is cleaned up clean_up(ve) ``` ### Transition Probabilities The next step in generating the n-gram model is to generate transition probabilities (tp) from the n-gram files. The **TPGenerator** class is used to generate the tps. For each n-gram token file a corresponding tp file is generated. The tp files are then combined into a single file containing tp data for n-grams of size 1, 2, 3, 4 etc. The following example shows how to generate combined tps for n-grams of size 1, 2, 3 and 4: ```{r transition-probabilities, cache=FALSE} # The required files rf <- c("n1.RDS", "n2.RDS", "n3.RDS", "n4.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The TPGenerator object is created tp <- TPGenerator$new(opts = list(n = 4, dir = ed), ve = ve) # The combined transition probabilities are generated tp$generate_tp() # The test environment is cleaned up clean_up(ve) ``` The above code produces the file **model-4.RDS**. ### The model file The final step is to generate a n-gram model file from the files generated in the previous steps. The **Model** class contains the method **load_model**, which reads the combined tps files and other files that are used by the model. An instance of the **Model** class represents the n-gram model. ### Generating the model in one step All the previous steps may be combined into a single step. The **ModelGenerator** class allows generating the final n-gram model using a single method call. The following example generates a n-gram model using default data cleaning and tokenization options: ```{r generate-model, results='hide', cache=FALSE} # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The following code generates n-gram model using default options for data # cleaning and tokenization. See the following section on how to customize these # options. Note that input.txt is the name of the input data file. It should be # present in the data directory. dir is the directory containing the input and output files. It is set to the path of the environment directory, ed. # ModelGenerator class object is created mg <- ModelGenerator$new( name = "def-model", desc = "N-gram model generating using default options", fn = "def-model.RDS", df = "input.txt", n = 4, ssize = 0.1, dir = ed, dc_opts = list(), tg_opts = list(), ve = ve ) # Generates n-gram model. The output is the file def-model.RDS mg$generate_model() # The test environment is cleaned up clean_up(ve) ``` ## Evaluating the model performance The **wordpredictor** package provides the **ModelEvaluator** class for evaluating the performance of the generated n-gram model. Intrinsic and Extrinsic evaluation are supported. Also the performance of several n-gram models may be compared. The following example performs Intrinsic evaluation. It measures the Perplexity score for each sentence in the **validation.txt** file, that was generated in the data sampling step. It returns the minimum, mean and maximum Perplexity score for each line. ```{r model-evaluation-1, cache=FALSE} # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 20 lines stats <- me$intrinsic_evaluation(lc = 20, fn = vfn) # The test environment is cleaned up clean_up(ve) ``` The following example performs Extrinsic evaluation. It measures the accuracy score for each sentence in **validation.txt** file. For each sentence the model is used to predict the last word in the sentence given the previous words. If the last word was correctly predicted, then the prediction is considered to be accurate. ```{r model-evaluation-2, cache=FALSE} # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 100 lines stats <- me$extrinsic_evaluation(lc = 100, fn = vfn) # The test environment is cleaned up clean_up(ve) ``` ## Making word predictions The n-gram model generated in the previous step can be used to predict the next word given a set of words. The following example shows how to predict the next word. It returns the 3 possible next words along with their probabilities. ```{r predict-word, cache=FALSE} # The required files rf <- c("def-model.RDS") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # An object of class ModelPredictor is created. The mf parameter is the name of # the model file that was generated in the previous example. mp <- ModelPredictor$new(mf = mfn, ve = ve) # Given the words: "how are", the next word is predicted. The top 3 most likely # next words are returned along with their respective probabilities. res <- mp$predict_word(words = "how are", 3) # The test environment is cleaned up clean_up(ve) ```
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/doc/features.Rmd
## ----echo=FALSE, results='hide'----------------------------------------------- knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "reference/figures/" ) ## ----setup, echo=FALSE, results='hide', message=FALSE------------------------- library(wordpredictor) # The level of verbosity in the information messages ve <- 0 #' @description #' Used to setup the test environment #' @param rf The required files. #' @param ve The verbosity level. #' @return The list of directories in the test environment setup_env <- function(rf, ve) { # An object of class EnvManager is created em <- EnvManager$new(rp = "../", ve = ve) # The required files are downloaded ed <- em$setup_env(rf) return(ed) } #' @description #' Used to clean up the test environment clean_up <- function(ve) { # An object of class EnvManager is created em <- EnvManager$new(ve = ve) # The test environment is removed em$td_env(T) } ## ----generate-model, results='hide', cache=FALSE------------------------------ # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The following code generates n-gram model using default options for data # cleaning and tokenization. See the following section on how to customize these # options. Note that input.txt is the name of the input data file. It should be # present in the ed directory. The generated model file is also placed in this # directory. # ModelGenerator class object is created mg <- ModelGenerator$new( name = "def-model", desc = "N-gram model generating using default options", fn = "def-model.RDS", df = "input.txt", n = 4, ssize = 0.1, dir = ed, dc_opts = list(), tg_opts = list(), ve = ve ) # Generates n-gram model. The output is the file # ./data/model/def-model.RDS mg$generate_model() # The test environment is cleaned up clean_up(ve) ## ----model-evaluation-1, cache=FALSE------------------------------------------ # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 20 lines stats <- me$intrinsic_evaluation(lc = 20, fn = vfn) # The test environment is cleaned up clean_up(ve) ## ----model-evaluation-2, cache=FALSE------------------------------------------ # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 100 lines stats <- me$extrinsic_evaluation(lc = 100, fn = vfn) # The test environment is cleaned up clean_up(ve) ## ----predict-word, cache=FALSE------------------------------------------------ # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # An object of class ModelPredictor is created. The mf parameter is the name of # the model file that was generated in the previous example. mp <- ModelPredictor$new(mf = mfn, ve = ve) # Given the words: "how are", the next word is predicted. The top 3 most likely # next words are returned along with their respective probabilities. res <- mp$predict_word(words = "how are", 3) # The test environment is cleaned up clean_up(ve)
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/doc/overview.R
--- title: "Overview" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Overview} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} bibliography: references.bib nocite: '@*' --- ```{r echo=FALSE, results='hide'} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.path = "reference/figures/" ) ``` ```{r setup, echo=FALSE, results='hide', message=FALSE} library(wordpredictor) # The level of verbosity in the information messages ve <- 0 #' @description #' Used to setup the test environment #' @param rf The required files. #' @param ve The verbosity level. #' @return The list of directories in the test environment setup_env <- function(rf, ve) { # An object of class EnvManager is created em <- EnvManager$new(rp = "../", ve = ve) # The required files are downloaded ed <- em$setup_env(rf) return(ed) } #' @description #' Used to clean up the test environment clean_up <- function(ve) { # An object of class EnvManager is created em <- EnvManager$new(ve = ve) # The test environment is removed em$td_env(T) } ``` ## Introduction This document describes the theory behind the n-gram models generated by the **wordpredictor** package. It also provides code examples that describe how to use the package. The goal of the **wordpredictor** package is to provide a flexible and easy to use framework for generating [n-gram models](https://en.wikipedia.org/wiki/N-gram) for word prediction. The package allows generating n-gram models. It also allows exploring n-gram frequencies using plots. Additionally it provides methods for measuring n-gram model performance using [Perplexity](https://en.wikipedia.org/wiki/Perplexity) and accuracy. The n-gram model may be customized using several options such as n-gram size, data cleaning options and options for converting text to tokens. ## How the model works The n-gram model generated by the **wordpredictor** package uses the [Markov model](https://en.wikipedia.org/wiki/Markov_chain) for approximating the language model. It means that the probability of a word depends only on the probability of the n-1 previous words. Maximum Likelihood Estimation (MLE) is used to calculate the probability of a word. The probability of a word is calculated by regarding the word as the last component of a n-gram. The total number of occurrences of the n-gram is divided by the total number of occurrences of the (n-1)-gram. This gives the probability for the word. The n-gram model is generated in steps. In the first step, the input data is cleaned. Unwanted symbols and words are removed from the input data. In the next step, the cleaned data file is read. N-grams are extracted from the file, starting from 1-grams up to the configured n-gram size. The 1-gram, 2-gram, 3-gram etc tokens are saved in separate files along with the frequency. So the 3-gram file contains all extracted 3-grams and their respective frequencies. The next step is to generate transition probability tables for each n-gram file. For the 1-gram file the transition probability table is simply the list of unique words along with the word frequencies. For the other n-gram files, the transition probability table is a data frame with 3 columns. The hash of n-gram prefixes, the next word id and the next word probability. The n-gram prefix is the set of n-1 components before the last component. The n-1 components are combined using "_" and converted to a numeric hash value using the digest2Int method of the [digest](https://cran.r-project.org/package=digest) package. The next word id is the numeric index of the next word in the list of 1-grams. The next word probability is the probability of the next word given the previous n-1 words. It is calculated using Maximum Likelihood Estimation (MLE) as described above. Instead of storing the n-gram prefix strings, a single number is saved. Also instead of storing the next word, the numeric index of the next word is saved. This saves a lot of memory and allows more data to be stored, which improves the n-gram model's efficiency. In R, a number requires a fixed amount of storage, which about 56 bytes. In contrast the memory required to store a string increases with the number of characters in the string. The data frames that represent each transition probability table are combined into a single data frame. The combined transition probability table is used to make word predictions. ## Using the model to predict words To predict a word, the word along with the n-1 previous words are used as input. The model computes the hash of the previous words and looks up the hash in the combined transition probabilities table. If the hash was found, then the model extracts the top 3 next word ids that have the highest probabilities. The model looks up the next word text that corresponds to the next word ids. The result is the top 3 most likely next words along with their probabilities. If the hash was not found, then the hash of the n-2 previous words is calculated and looked up in the combined transition probabilities table. This process is repeated until there are no previous words. When this happens, the model returns a "word not found" message. This method of checking the transition probabilities of lower level n-grams is called **back-off**. An alternate method of predicting a word is to use **interpolation**. This involves weighing and summing the probabilities for each n-gram size. ## Predicting the model performance The **wordpredictor** package provides methods for performing **intrinsic** and **extrinsic** evaluation of the n-gram model. The **wordpredictor** package performs intrinsic evaluation by calculating the mean Perplexity score for all sentences in a validation text file. The Perplexity for a sentence is calculated by taking the N-th root of the inverse of the product of probabilities of all words in a sentence. N is the number of words in the sentence. The probability of a word is calculated by considering all n-1 words before that word. If the word was not found in the transition probabilities table, then the n-2 words are looked up. This process is repeated until there are no previous words. If the word was found in the 1-gram list, then the probability of the word is calculated by simply dividing the number of times the word occurs by the total number words. If the word was not found in the 1-gram list, then the model uses a default probability as the probability of the word. The default probability is calculated using [Laplace Smoothing](https://towardsdatascience.com/n-gram-language-models-af6085435eeb#:~:text=Laplace%20Smoothing,algorithm%20is%20called%20Laplace%20smoothing.). Laplace Smoothing involves adding 1 to the frequency count of each word in the vocabulary. Essentially this means that the total number of words in the data set are increased by vc, where vc is the number of words in the vocabulary. In Laplace Smoothing 1 is added to the word count. Since an unknown word occurs zero times, after Laplace Smoothing it will have a count of 1. So the default probability is calculated as: **P(unk) = 1/(N+VC)**, where **N** is the total number of words in the data set and **VC** is the number of words in the vocabulary. This default probability is assigned to unknown words. Alternative methods to Laplace Smoothing are **Add-k smoothing**, **Kneser-Ney smoothing** and **Good-Turing Smoothing**. The **wordpredictor** package uses the file **/usr/share/dict/cracklib-small** as the dictionary file. This file is pre-installed in most Linux distributions. Extrinsic evaluation involves calculating the accuracy score. The model tries to predict the last word of a sentence. If the actual last word was one of the 3 words predicted by the model, then the prediction is considered to be accurate. The accuracy score is the number of sentences that were correctly predicted. ## Generating the model The **ModelGenerator** class allows generating the final n-gram model using a single method call. The following example generates a n-gram model using default data cleaning and tokenization options: ```{r generate-model, results='hide', cache=FALSE} # The required files rf <- c("input.txt") # The test environment is setup ed <- setup_env(rf, ve) # The following code generates n-gram model using default options for data # cleaning and tokenization. See the following section on how to customize these # options. Note that input.txt is the name of the input data file. It should be # present in the ed directory. The generated model file is also placed in this # directory. # ModelGenerator class object is created mg <- ModelGenerator$new( name = "def-model", desc = "N-gram model generating using default options", fn = "def-model.RDS", df = "input.txt", n = 4, ssize = 0.1, dir = ed, dc_opts = list(), tg_opts = list(), ve = ve ) # Generates n-gram model. The output is the file # ./data/model/def-model.RDS mg$generate_model() # The test environment is cleaned up clean_up(ve) ``` ## Evaluating the model performance The **wordpredictor** package provides the **ModelEvaluator** class for evaluating the performance of the generated n-gram model. The following example performs intrinsic evaluation. It measures the Perplexity score for each sentence in the **validation.txt** file. It returns the minimum, mean and maximum Perplexity score for each line. ```{r model-evaluation-1, cache=FALSE} # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 20 lines stats <- me$intrinsic_evaluation(lc = 20, fn = vfn) # The test environment is cleaned up clean_up(ve) ``` The following example performs extrinsic evaluation. It measures the accuracy score for each sentence in **validation.txt** file. For each sentence the model is used to predict the last word in the sentence given the previous words. If the last word was correctly predicted, then the prediction is considered to be accurate. The extrinsic evaluation returns the number of correct and incorrect predictions. ```{r model-evaluation-2, cache=FALSE} # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # The path to the cleaned validation file vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The intrinsic evaluation is performed on first 100 lines stats <- me$extrinsic_evaluation(lc = 100, fn = vfn) # The test environment is cleaned up clean_up(ve) ``` ## How to predict a word The following example shows how to predict the next word. It returns the 3 possible next words along with their probabilities. ```{r predict-word, cache=FALSE} # The required files rf <- c("def-model.RDS", "validate-clean.txt") # The test environment is setup ed <- setup_env(rf, ve) # The model file name mfn <- paste0(ed, "/def-model.RDS") # An object of class ModelPredictor is created. The mf parameter is the name of # the model file that was generated in the previous example. mp <- ModelPredictor$new(mf = mfn, ve = ve) # Given the words: "how are", the next word is predicted. The top 3 most likely # next words are returned along with their respective probabilities. res <- mp$predict_word(words = "how are", 3) # The test environment is cleaned up clean_up(ve) ``` ## Demo The wordpredictor package includes a demo called "word-predictor". The demo is a Shiny application that displays the ten most likely words for a given set of words. To access the demo, run the following command from the R shell: **`demo("word-predictor", package = "wordpredictor", ask = F)`**. ## Package dependencies The wordpredictor package uses the following packages: [digest](https://cran.r-project.org/package=digest), [dply](https://cran.r-project.org/package=dplyr), [ggplot2](https://cran.r-project.org/package=ggplot2), [R6](https://cran.r-project.org/package=R6), [testthat](https://cran.r-project.org/package=testthat) and [stingr](https://cran.r-project.org/package=stringr) The following packages were useful during package development: [quanteda](https://cran.r-project.org/package=quanteda), [tm](https://cran.r-project.org/package=tm) and [hash](https://cran.r-project.org/package=hash) [lintr](https://cran.r-project.org/package=lintr) [styler](https://cran.r-project.org/package=styler) [pkgdown](https://cran.r-project.org/package=pkgdown) [pryr](https://cran.r-project.org/package=pryr), ## Useful Links The following articles and tutorials were very useful: * [N-Gram Model](https://devopedia.org/n-gram-model) * [Probability Smoothing for Natural Language Processing](https://lazyprogrammer.me/probability-smoothing-for-natural-language-processing/) * [Natural Language Processing is Fun!](https://medium.com/@ageitgey/natural-language-processing-is-fun-9a0bff37854e) * [Quanteda Tutorials](https://tutorials.quanteda.io/) ## Bibliography
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/doc/overview.Rmd
# Start of environment setup code # The level of detail in the information messages ve <- 2 # The name of the folder that will contain all the files. It will be created in # the current directory. NULL implies tempdir will be used. fn <- NULL # The required files. They are default files that are part of the package rf <- c("def-model.RDS") # An object of class EnvManager is created em <- EnvManager$new(ve = ve, rp = "./") # The required files are downloaded ed <- em$setup_env(rf, fn) # End of environment setup code # The model file name mfn <- paste0(ed, "/def-model.RDS") # ModelPredictor class object is created mp <- ModelPredictor$new(mf = mfn, ve = ve) # The sentence whoose Perplexity is to be calculated l <- "last year at this time i was preparing for a trip to rome" # The line is split in to words w <- strsplit(l, " ")[[1]] # The Perplexity of the sentence is calculated p <- mp$calc_perplexity(w) # The sentence Perplexity is printed print(p) # The test environment is removed. Comment the below line, so the files # generated by the function can be viewed em$td_env()
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/examples/calc-perplexity.R
# Start of environment setup code # The level of detail in the information messages ve <- 2 # The name of the folder that will contain all the files. It will be created in # the current directory. NULL implies tempdir will be used. fn <- NULL # The required files. They are default files that are part of the package rf <- c("test.txt") # An object of class EnvManager is created em <- EnvManager$new(ve = ve, rp = "./") # The required files are downloaded ed <- em$setup_env(rf, fn) # End of environment setup code # The cleaned test file name cfn <- paste0(ed, "/test-clean.txt") # The test file name fn <- paste0(ed, "/test.txt") # The data cleaning options dc_opts <- list("output_file" = cfn) # The data cleaner object is created dc <- DataCleaner$new(fn, dc_opts, ve = ve) # The sample file is cleaned dc$clean_file() # The test environment is removed. Comment the below line, so the files # generated by the function can be viewed em$td_env()
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/examples/clean-file.R
# Start of environment setup code # The level of detail in the information messages ve <- 2 # Test data is read l <- c( "If you think I’m wrong, send me a link to where it’s happened", "We’re about 90percent done with this room", "“This isn’t how I wanted it between us.”", "Almost any “cute” breed can become ornamental", "Once upon a time there was a kingdom with a castle…", "That's not a thing any of us are granted'", "“Why are you being so difficult?” she asks." ) # The expected results res <- c( "if you think wrong send me a link to where its happened", "were about percent done with this room", "this how i wanted it between us", "almost any cute breed can become ornamental", "once upon a time there was a kingdom with a castle", "thats not a thing any of us are granted", "why are you being so difficult she asks" ) # The DataCleaner object is created dc <- DataCleaner$new(ve = ve) # The line is cleaned cl <- dc$clean_lines(l) # The cleaned lines are printed print(cl)
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/examples/clean-lines.R
# Start of environment setup code # The level of detail in the information messages ve <- 2 # The name of the folder that will contain all the files. It will be created in # the current directory. NULL implies tempdir will be used. fn <- NULL # The required files. They are default files that are part of the package rf <- c("def-model.RDS") # An object of class EnvManager is created em <- EnvManager$new(ve = ve, rp = "./") # The required files are downloaded ed <- em$setup_env(rf, fn) # End of environment setup code # ModelEvaluator class object is created me <- ModelEvaluator$new(ve = ve) # The performance evaluation is performed me$compare_performance(opts = list( "save_to" = NULL, "dir" = ed )) # The test environment is removed. Comment the below line, so the files # generated by the function can be viewed em$td_env()
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/examples/compare-performance.R
# Start of environment setup code # The level of detail in the information messages ve <- 2 # The name of the folder that will contain all the files. It will be created in # the current directory. NULL implies tempdir will be used fn <- NULL # The required files. They are default files that are part of the package rf <- c("def-model.RDS", "validate-clean.txt") # An object of class EnvManager is created em <- EnvManager$new(ve = ve, rp = "./") # The required files are downloaded ed <- em$setup_env(rf, fn) # End of environment setup code # The model file name mfn <- paste0(ed, "/def-model.RDS") # The validation file name vfn <- paste0(ed, "/validate-clean.txt") # ModelEvaluator class object is created me <- ModelEvaluator$new(mf = mfn, ve = ve) # The performance evaluation is performed stats <- me$evaluate_performance(lc = 20, fn = vfn) # The evaluation stats are printed print(stats) # The test environment is removed. Comment the below line, so the files # generated by the function can be viewed em$td_env()
/scratch/gouwar.j/cran-all/cranData/wordpredictor/inst/examples/evaluate-performance.R