Why is the tokenizer's vocab_size different from the model's?
#2 by xianf · opened
I am new to T5 and mT5. I found that the tokenizer's vocab_size is 250100, but the model's embedding size is 250112. Could you tell me why they are different?
Thanks in advance!
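(Editor's note, not part of the original thread: a common explanation, assumed here rather than confirmed in this discussion, is that the model's embedding matrix is padded up to the next multiple of 128 for hardware efficiency on TPUs/GPUs. The arithmetic matches the two numbers in the question.)

```python
import math

tok_vocab = 250100   # tokenizer vocab_size reported for mT5
multiple = 128       # assumed padding granularity for the embedding matrix

# Round the tokenizer vocabulary up to the next multiple of 128.
padded_vocab = math.ceil(tok_vocab / multiple) * multiple
print(padded_vocab)  # 250112, the model's embedding size
```

The extra 12 rows of the embedding table are never produced by the tokenizer, so they are effectively unused padding.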
xianf changed discussion status to closed
Got the answer.