Nemotron-4-340B-Instruct Tokenizer
A 🤗-compatible version of the Nemotron-4-340B-Instruct tokenizer (adapted from nvidia/Nemotron-4-340B-Instruct). This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.
Example usage:
Transformers/Tokenizers
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained('Xenova/Nemotron-4-340B-Instruct-Tokenizer')
assert tokenizer.encode('hello world') == [38150, 2268]
Transformers.js
import { AutoTokenizer } from '@xenova/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/Nemotron-4-340B-Instruct-Tokenizer');
const tokens = tokenizer.encode('hello world'); // [38150, 2268]
Inference Providers
This model isn't deployed by any Inference Provider.
HF Inference deployability: The model has no pipeline_tag.