## Model info

This is a BPE tokenizer retrained from scratch on the concatenated [Wikitext-103](https://paperswithcode.com/dataset/wikitext-103) training, validation, and test sets. The vocabulary has 28,439 entries.
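For reference, a GPT-2-style byte-level BPE tokenizer can be retrained from scratch with the `tokenizers` library. The sketch below is only an illustration of how this could look, not the exact recipe used for this tokenizer; the file names, special tokens, and output directory are placeholders:

```
import os

from tokenizers import ByteLevelBPETokenizer

# Hypothetical file names for the concatenated Wikitext-103 splits
files = ["wiki.train.tokens", "wiki.valid.tokens", "wiki.test.tokens"]

# Train a byte-level BPE tokenizer from scratch; the settings are illustrative
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=files, vocab_size=28439, special_tokens=["<|endoftext|>"])

# Writes vocab.json and merges.txt, which GPT2TokenizerFast can load
os.makedirs("wikitext-103-tokenizer", exist_ok=True)
tokenizer.save_model("wikitext-103-tokenizer")
```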

This tokenizer was used to tokenize text for [the GPT-2 model trained on Wikitext-103](https://huggingface.co/Kristijan/gpt2_wt103-40m_12-layer).

## Usage

You can download the tokenizer directly from the Hugging Face Hub as follows:

```
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer")
```
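
Once loaded, the tokenizer behaves like any other `GPT2TokenizerFast`. A quick sanity check (the sample sentence is arbitrary):

```
text = "The tokenizer was retrained on Wikitext-103 ."

# Encode to token ids and inspect the corresponding subword tokens
token_ids = tokenizer(text)["input_ids"]
print(token_ids)
print(tokenizer.convert_ids_to_tokens(token_ids))

# Decode back to text
print(tokenizer.decode(token_ids))

# Should match the 28,439 vocabulary entries mentioned above
# (plus any added special tokens)
print(len(tokenizer))
```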

After cloning or downloading the files, you can also load the tokenizer locally using the `from_pretrained()` method, as follows:

```
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained(path_to_folder_with_merges_and_vocab_files)
```
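
To pair the tokenizer with [the GPT-2 model trained on Wikitext-103](https://huggingface.co/Kristijan/gpt2_wt103-40m_12-layer) linked above, something like the following should work, assuming the model loads as a standard `GPT2LMHeadModel` (the sample sentence is arbitrary):

```
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer")
model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103-40m_12-layer")
model.eval()

inputs = tokenizer("The game was released in 2011 .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])

# Language-modeling loss of the model on the sample sentence
print(outputs.loss)
```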