Before getting into the specifics, let's start by creating a dummy tokenizer in a few lines:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Instantiate a BPE tokenizer with an unknown-token fallback
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

# The trainer reserves the usual special tokens in the vocabulary
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

# Split raw text on whitespace before learning BPE merges
tokenizer.pre_tokenizer = Whitespace()

# Placeholder path: replace with your own plain-text training files
files = ["path/to/your/corpus.txt"]
tokenizer.train(files, trainer)
```
We now have a tokenizer trained on the files we defined.
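
To sanity-check the result, you can encode a sentence and inspect the produced tokens and ids, then serialize the tokenizer to a single JSON file. A minimal sketch follows; the sample sentence and output file name are arbitrary choices:

```python
# Encode a sample sentence with the freshly trained tokenizer
output = tokenizer.encode("Hello, how are you?")
print(output.tokens)  # subword strings produced by the BPE model
print(output.ids)     # corresponding vocabulary indices

# Persist the full tokenizer (model, pre-tokenizer, special tokens) to one file
tokenizer.save("tokenizer.json")
```

The saved file can later be reloaded with `Tokenizer.from_file("tokenizer.json")`, so the trained vocabulary and pre-tokenization settings travel together.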