tokenizer_config.json fix

#1 opened by divinetaco

FYI - the tokenizer_config.json on this is likely wrong. It should be using the Llama 3.3 config rather than the DeepSeek-R1 one.

I've updated here:
https://huggingface.co/divinetaco/L3.3-70B-Lycosa-v0.1/blob/main/tokenizer.json

Whilst the DeepSeek chat template will work (minus the token), I see much better results with the Llama template.

I've reverted it - the tokenizer_config.json is the same as the original upload. It's probably fine to leave as is, since both chat templates work.
Swapping in the Llama 3 tokenizer_config.json was breaking the model. I'll add a note to the README.md suggesting people try forcing the Llama 3 template.
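A minimal sketch of one way to force the Llama 3 template at inference time with transformers, without touching the repo's tokenizer_config.json. It assumes the merge is loaded from divinetaco/L3.3-70B-Lycosa-v0.1 and borrows the chat template from a Llama 3.x instruct tokenizer; the meta-llama repo named below is gated, so substitute any Llama 3 instruct tokenizer you have access to:

```python
from transformers import AutoTokenizer

# Tokenizer shipped with the merge (currently carries the DeepSeek-R1-style template)
tok = AutoTokenizer.from_pretrained("divinetaco/L3.3-70B-Lycosa-v0.1")

# Borrow the chat template from a Llama 3.x instruct tokenizer
# (gated repo; swap in whichever Llama 3 instruct tokenizer you can pull)
llama_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
tok.chat_template = llama_tok.chat_template

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the prompt with the forced Llama 3 template
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```

The same effect can usually be had in frontends that expose a prompt-format setting by selecting their built-in Llama 3 preset instead of the one auto-detected from the repo.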
