CTranslate2 Conversion of whisper-large-v3-turbo (INT8 Quantization)

This model (Zoont/faster-whisper-large-v3-turbo-int8-ct2) is converted from openai/whisper-large-v3-turbo to the CTranslate2 format using INT8 quantization, primarily for use with faster-whisper.

Model Details

For more details about the model, see its original model card.

Conversion Details

The original model was converted using the following command:

ct2-transformers-converter --model whisper-large-v3-turbo --copy_files tokenizer.json preprocessor_config.json --output_dir faster-whisper-large-v3-turbo-int8-ct2 --quantization int8