This is very analogous to tokenization - you generally get the
best performance for inference or fine-tuning when you precisely match the tokenization used during training.
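As a minimal sketch of what "precisely matching" means in practice (assuming the Hugging Face transformers library, which this note does not name, and a placeholder checkpoint id), the simplest way to guarantee a match is to load the tokenizer from the same checkpoint as the model rather than building one separately:

```python
# Sketch only: "your-org/your-model" is a hypothetical checkpoint name.
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "your-org/your-model"

# Loading both from the same checkpoint keeps the vocabulary, merges, and
# special tokens identical to what the model saw during training.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same rule applies to fine-tuning: reuse the checkpoint's tokenizer for your training data rather than a freshly trained or substituted one, so token boundaries stay consistent with pretraining.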