---
language: mr
datasets:
- openslr
- interspeech_2021_asr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Marathi by Gunjan Chhablani
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: OpenSLR mr, InterSpeech 2021 ASR mr
      type: openslr, interspeech_2021_asr
    metrics:
    - name: Test WER
      type: wer
      value: 19.05
---

# Wav2Vec2-Large-XLSR-53-Marathi

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [OpenSLR SLR64](http://openslr.org/64/) dataset and the [InterSpeech 2021](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) Marathi dataset. Note that the OpenSLR data contains only female voices; please keep this in mind before using the model for your task. When using this model, make sure that your speech input is sampled at 16 kHz.
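
Many recordings are not natively at 16 kHz, so you may need to resample before inference. A minimal sketch using torchaudio (the filename `clip.wav` is a hypothetical placeholder for your own file):

```python
import torchaudio

# Load an audio file at whatever rate it was recorded; "clip.wav" is a placeholder path.
speech_array, sampling_rate = torchaudio.load("clip.wav")

# Bring it to the 16 kHz the model expects, if necessary.
if sampling_rate != 16_000:
    resampler = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16_000)
    speech_array = resampler(speech_array)
```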

## Usage

The model can be used directly (without a language model) as follows, assuming you have a test dataset with Marathi `text` (transcription) and `audio_path` (audio file path) fields:
```python
import torch
import torchaudio
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# test_data = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.

processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
    # The source sampling rate can vary; resample everything to 16 kHz.
    batch["speech"] = librosa.resample(speech_array[0].numpy(), orig_sr=sampling_rate, target_sr=16_000)
    return batch

test_data = test_data.map(speech_file_to_array_fn)

# Run inference on the first two samples.
inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_data["text"][:2])
```
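
The `test_data` placeholder above is left as a TODO. As one possible way to fill it (an assumption, not part of this card: it relies on the `openslr` loading script and its `SLR64` config being available in your `datasets` version, with `path` and `sentence` columns), you could take the last 10% of OpenSLR SLR64 and rename its columns to the names the snippets expect:

```python
from datasets import load_dataset

# Hypothetical test split: the last 10% of the OpenSLR SLR64 (Marathi) data.
test_data = load_dataset("openslr", "SLR64", split="train[90%:]")

# Rename the columns to the names used in the snippets above.
test_data = test_data.rename_column("path", "audio_path")
test_data = test_data.rename_column("sentence", "text")
```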

## Evaluation

The model can be evaluated as follows on 10% of the Marathi data from OpenSLR:
```python
import re

import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# test_data = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model.to("cuda")

chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'

# Preprocessing the datasets.
# We need to normalize the transcripts and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["text"] = re.sub(chars_to_ignore_regex, '', batch["text"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
    batch["speech"] = librosa.resample(speech_array[0].numpy(), orig_sr=sampling_rate, target_sr=16_000)
    return batch

test_data = test_data.map(speech_file_to_array_fn)

# Run batched inference and decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_data.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"])))
```
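
Note that `load_metric` has since been deprecated in the `datasets` library; on recent versions, a drop-in alternative is the standalone `evaluate` package (an extra dependency, not used by this card):

```python
# Requires: pip install evaluate jiwer
import evaluate

wer = evaluate.load("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"])))
```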

**Test Result**: 19.05 %

**Test Result on OpenSLR test**: 14.15 % (157 examples)

**Test Result on InterSpeech test**: 27.14 % (157 examples)

## Training

1412 examples of the OpenSLR Marathi dataset and 1412 examples of the InterSpeech 2021 Marathi ASR dataset were used for training. For testing, 157 examples from each dataset were used.

The Colab notebook used for training and evaluation can be found [here](https://colab.research.google.com/drive/15fUhb4bUFFGJyNLr-_alvPxVX4w0YXRu?usp=sharing).