Titouan committed
Commit e126af8 · 1 Parent(s): b2e9ba1

multiple decoding
Files changed (1): README.md (+8 −0)
README.md CHANGED

````diff
@@ -63,9 +63,17 @@ Please notice that we encourage you to read our tutorials and learn more about
 from speechbrain.inference.ASR import EncoderDecoderASR
 
 asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-conformer-largescaleasr", savedir="pretrained_models/asr-conformer-largescaleasr")
+
+# For a full decoding with a large beam size (can be slow):
 asr_model.transcribe_file("speechbrain/asr-conformer-largescaleasr/example.wav")
 
+# For a smaller beam size:
+asr_model.transcribe_file("speechbrain/asr-conformer-largescaleasr/example.wav", overrides={"test_beam_size": "10"})
+
+# For even faster decoding:
+asr_model.transcribe_file("speechbrain/asr-conformer-largescaleasr/example.wav", overrides={"test_beam_size": "10", "ctc_weight_decode": 0.0})
 ```
+
 ### Inference on GPU
 To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
````
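The `test_beam_size` override in the diff above controls the width of the beam search used during decoding. As a minimal, self-contained sketch (this is not SpeechBrain's decoder; the tokens, probabilities, and `step_fn` below are invented for illustration), the toy beam search here shows the trade-off: a narrow beam does less work per step, but can commit to a locally attractive prefix and miss the globally best hypothesis.

```python
import math

def beam_search(step_fn, n_steps, beam_size):
    """Toy beam search.

    step_fn(prefix) returns {token: log_prob} for the next step given the
    partial hypothesis `prefix`. Keeps only the `beam_size` highest-scoring
    partial hypotheses at each step, so per-step cost grows with beam_size.
    Returns the best (token_sequence, total_log_prob) found.
    """
    beams = [((), 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(n_steps):
        candidates = [
            (seq + (tok,), score + lp)
            for seq, score in beams
            for tok, lp in step_fn(seq).items()
        ]
        # Prune: keep only the beam_size best hypotheses.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams[0]

def step_fn(prefix):
    # Invented toy distribution: "a" looks best at step 1, but "b" leads
    # to a much better continuation at step 2.
    if not prefix:
        return {"a": math.log(0.6), "b": math.log(0.4)}
    if prefix[-1] == "a":
        return {"x": math.log(0.2), "y": math.log(0.1)}
    return {"x": math.log(0.9), "y": math.log(0.05)}

greedy_seq, greedy_score = beam_search(step_fn, 2, beam_size=1)
wide_seq, wide_score = beam_search(step_fn, 2, beam_size=4)
```

Lowering `test_beam_size` from the default to 10 trades some of that search thoroughness for speed in the same way, and setting `ctc_weight_decode` to 0.0 presumably removes the CTC contribution from the joint decoding pass as well, which is why the commit labels that variant "even faster".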