
How to run this ggml file?

WARNING: this can be slow and CPU intensive

Clone whisper.cpp repository:

git clone https://github.com/ggerganov/whisper.cpp

cd whisper.cpp

make
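
Building can take a few minutes; if your machine has several cores you can let make run parallel jobs (a generic make option, not specific to whisper.cpp):

make -j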

Place this model file, large_q5_0.bin, inside the whisper.cpp/models folder and run:

./main -m models/large_q5_0.bin yourfilename.wav
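
If you want to set the spoken language and the number of CPU threads explicitly, main also accepts the -l and -t flags; the values below are just placeholders:

./main -m models/large_q5_0.bin -l en -t 8 yourfilename.wav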

Command to transcribe to SRT subtitle files:

./main -m models/large_q5_0.bin yourfilename.wav --output-srt --print-progress
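
By default the subtitles are written next to the input file (e.g. yourfilename.wav.srt). If your whisper.cpp build has the -of (output file basename) option, you can pick a different output name:

./main -m models/large_q5_0.bin yourfilename.wav --output-srt -of yourfilename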

Command to transcribe and TRANSLATE (to English) into SRT subtitle files:

./main -m models/large_q5_0.bin yourfilename.wav --output-srt --print-progress --translate

It can transcribe ONLY WAV files (16-bit PCM, 16 kHz), so convert other formats first!

Command line to convert mp4 (works for any video, just change the extension) to wav:

ffmpeg -i yourfilename.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 2 yourfilename.wav
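
If you start from a video, you can chain the conversion and the transcription in one line (run from the whisper.cpp folder; adjust the file names to your own):

ffmpeg -i yourfilename.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 2 yourfilename.wav && ./main -m models/large_q5_0.bin yourfilename.wav --output-srt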

Command to convert all mp4 files (works for any video, just change the extension) inside a folder to wav:

find . -type f -iname "*.mp4" -exec bash -c 'ffmpeg -i "$0" -vn -acodec pcm_s16le -ar 16000 -ac 2 "${0%.*}.wav"' {} \;
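
If the find one-liner above is hard to remember, an equivalent loop for the mp4 files in the current folder only (not recursive) is:

for f in *.mp4; do ffmpeg -i "$f" -vn -acodec pcm_s16le -ar 16000 -ac 2 "${f%.mp4}.wav"; done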
