# Modified Llama3.x tokenizer
This is a modified version of the Llama3.x tokenizer, adapted specifically for reasoning.
The following tokens are replaced:
| Token ID | Token |
|---|---|
| 128013 | `<\|think\|>` |
| 128014 | `<\|/think\|>` |
| 128015 | `<\|answer\|>` |
| 128016 | `<\|/answer\|>` |
The script (`replace_reserved_tokens.py`) used to replace the tokens is also included in this repo.
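For reference, below is a minimal sketch of what such a replacement script might look like. It is not a copy of `replace_reserved_tokens.py`; the file path and the exact JSON field names are assumptions based on the standard `tokenizer.json` layout used by Hugging Face tokenizers.

```python
# Hypothetical sketch: swap the surface form of selected reserved special
# tokens in tokenizer.json while keeping their token IDs.
import json

REPLACEMENTS = {
    128013: "<|think|>",
    128014: "<|/think|>",
    128015: "<|answer|>",
    128016: "<|/answer|>",
}

with open("tokenizer.json", encoding="utf-8") as f:
    data = json.load(f)

for entry in data["added_tokens"]:
    new_content = REPLACEMENTS.get(entry["id"])
    if new_content is not None:
        old_content = entry["content"]
        entry["content"] = new_content
        # If the old token string also appears in the model vocab, rename it
        # there so the string-to-ID mapping stays consistent.
        vocab = data["model"]["vocab"]
        if old_content in vocab:
            vocab[new_content] = vocab.pop(old_content)

with open("tokenizer.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```

Note that `tokenizer_config.json` (for example its `added_tokens_decoder` section) typically lists the same special tokens and would need the corresponding update.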
## Verification
You can verify that the tokenizer is working correctly by using this script:
```python
#####################################
# Check if the tokenizer is correct #
#####################################
import torch
from transformers import AutoTokenizer
from tabulate import tabulate

model_id = "path/to/this/tokenizer"  # replace with the ID of this repo

# Tokenize a text containing the new special tokens
text = "<|think|>think<|/think|><|answer|>answer<|/answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokens = tokenizer(text, return_tensors="pt")
token_ids = tokens["input_ids"].squeeze().tolist()
decoded_tokens = [tokenizer.decode([tid]) for tid in token_ids]

# Print each token ID next to its decoded text
table_data = list(zip(token_ids, decoded_tokens))
print(tabulate(table_data, headers=["Token ID", "Token"], tablefmt="grid"))
```
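If the replacement succeeded, each of the four special tokens in the text should appear as a single token whose ID matches the table above.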
## Credits
This tokenizer was created by Per Egil Kummervold and is part of the NoTraM project.