---
base_model: openai/whisper-large-v3-turbo
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---
# Model Card for Lite-Whisper large-v3-turbo-acc
<!-- Provide a quick summary of what the model is/does. -->
Lite-Whisper is a compressed version of OpenAI Whisper produced with LiteASR, which shrinks the encoder via low-rank approximation. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.
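To give a rough intuition for the idea, a large weight matrix can be replaced by the product of two thin factors obtained from a truncated SVD, trading a small approximation error for fewer parameters and FLOPs. The snippet below is a minimal sketch of that idea in PyTorch; the layer shape, rank, and factorization details are illustrative assumptions, not the exact LiteASR procedure (see the paper for how ranks are chosen per layer).

```python
import torch

def low_rank_factors(weight: torch.Tensor, rank: int):
    # weight: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out_features, rank)
    B = Vh[:rank, :]             # (rank, in_features)
    return A, B

# hypothetical encoder projection; 1280 is Whisper large's hidden size
W = torch.randn(1280, 1280)
A, B = low_rank_factors(W, rank=320)

# one dense matmul becomes two thin ones: x @ W.T -> (x @ B.T) @ A.T
x = torch.randn(4, 1280)
y_full = x @ W.T
y_low = (x @ B.T) @ A.T
print((y_full - y_low).abs().mean())  # approximation error
```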
## Benchmark Results
The table below reports the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):
| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 10.1 | 635M | 907M |
| [lite-whisper-large-v3-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-acc) | 10.1 | 429M | 907M |
| [lite-whisper-large-v3](https://huggingface.co/efficient-speech/lite-whisper-large-v3) | 10.2 | 377M | 907M |
| [lite-whisper-large-v3-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-fast) | 11.3 | 308M | 907M |
| | | | |
| [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) | 10.1 | 635M | 172M |
| [lite-whisper-large-v3-turbo-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-acc) | 10.2 | 421M | 172M |
| [lite-whisper-large-v3-turbo](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo) | 12.6 | 374M | 172M |
| [lite-whisper-large-v3-turbo-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-fast) | 20.1 | 313M | 172M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 14.8 | 306M | 457M |
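For reference, WER is the word-level edit distance (substitutions, insertions, and deletions) between hypothesis and reference transcripts, normalized by the number of reference words. The snippet below is a minimal illustration using the open-source `jiwer` package; it is not the evaluation pipeline behind the table above, which lives in the LiteASR repository.

```python
# pip install jiwer
import jiwer

references = ["the quick brown fox jumps over the lazy dog"]
hypotheses = ["the quick brown fox jumped over a lazy dog"]

# WER = (substitutions + insertions + deletions) / reference words
wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer:.1%}")  # 2 substitutions out of 9 words -> ~22.2%
```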
## Quick Start
The easiest way to run our model is through our integration with the Hugging Face Transformers library.
We provide weights for compressed versions of the OpenAI Whisper series [here](https://huggingface.co/efficient-speech).
```python
import librosa
import torch
from transformers import AutoModel, AutoProcessor

device = "cuda:0"
dtype = torch.float16

# load the compressed Whisper model (custom code from the model repo)
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-large-v3-turbo-acc",
    trust_remote_code=True,
)
model.to(dtype).to(device)

# we use the same processor as the original model
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")

# load the audio and resample it to the 16 kHz that Whisper expects
path = "path/to/audio.wav"
audio, _ = librosa.load(path, sr=16000)

# extract log-mel features and move them to the model's dtype/device
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
input_features = input_features.to(dtype).to(device)

# decode and convert the predicted token IDs back to text
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(
    predicted_ids,
    skip_special_tokens=True,
)[0]
print(transcription)
```
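Since the compressed model keeps the original Whisper decoder, `generate` should accept the usual Whisper decoding options from Transformers, such as pinning the language and task instead of relying on auto-detection. This assumes the remote code inherits Whisper's standard generation interface:

```python
# assuming the remote code follows Whisper's standard generate() interface
predicted_ids = model.generate(
    input_features,
    language="en",
    task="transcribe",
)
```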
## Citation
If you use LiteASR in your research, please cite the following paper:
```bibtex
@misc{kamahori2025liteasrefficientautomaticspeech,
title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
year={2025},
eprint={2502.20583},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.20583},
}
``` |