---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_11_0
language:
- en
- sw
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---

# Swahili Automatic Speech Recognition (ASR)

## Model details
The Swahili ASR is an end-to-end automatic speech recognition system fine-tuned on the Common Voice Corpus 11.0 Swahili dataset. This repository provides the tools needed to run ASR with this model, enabling high-quality speech-to-text conversion in Swahili.
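
A minimal inference sketch (not part of the original repository) using the Transformers `pipeline` API; `sample_swahili.wav` is a placeholder for a local recording:

```python
# Hedged sketch: transcribe a local Swahili audio file with this model.
# "sample_swahili.wav" is a placeholder path, not a file shipped with the repo.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nickdee96/wav2vec2-large-xls-r-300m-sw",
)

result = asr("sample_swahili.wav")  # the pipeline handles audio loading and resampling
print(result["text"])
```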

Evaluation results:

| Eval loss | Eval WER | Eval runtime (s) | Eval samples/s | Eval steps/s | Epoch |
|-----------|----------|------------------|----------------|--------------|-------|
| 0.3454 | 0.2602 | 578.4006 | 17.701 | 2.213 | 4.17 |

## Intended Use
This model is intended for any application requiring Swahili speech-to-text conversion, including but not limited to transcription services, voice assistants, and accessibility technology. It can be particularly useful in contexts where demographic factors (age, sex, accent) matter, since the training data carries this metadata.

## Dataset
The model was fine-tuned on the Swahili subset of the Common Voice Corpus 11.0, a crowd-sourced collection of unique MP3 recordings paired with text transcripts; the full corpus totals 16,413 validated hours across its languages. Much of the dataset also includes demographic metadata, such as age, sex, and accent, which can contribute to a more accurate and contextually aware ASR model.

[Dataset link](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)
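
A sketch (not from the original repository) of how the Swahili split can be loaded with the `datasets` library; the dataset is gated on the Hub, so accepting its terms and authenticating (e.g. via `huggingface-cli login`) may be required:

```python
# Hedged sketch: load the Swahili ("sw") configuration of Common Voice 11.0
# and decode the audio at the 16 kHz sampling rate expected by wav2vec 2.0.
from datasets import load_dataset, Audio

common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "sw", split="train")
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))

print(common_voice[0]["sentence"])  # transcript of the first clip
```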

## Training Procedure

### Pipeline Description
The ASR system has two interconnected stages, a tokenizer (unigram) and an acoustic model (wav2vec 2.0 + CTC); an illustrative sketch of how they fit together follows the list below.

1. **Tokenizer (unigram):** Transforms words into subword units using a vocabulary extracted from the training and test datasets. The resulting `Wav2Vec2CTCTokenizer` is then pushed to the Hugging Face model hub.

2. **Acoustic model (wav2vec 2.0 + CTC):** Uses a pretrained wav2vec 2.0 model (`facebook/wav2vec2-base`) that is fine-tuned on the dataset. The processed audio is passed through a CTC (Connectionist Temporal Classification) decoder, which converts the acoustic representations into a sequence of tokens/characters. The trained model is then also pushed to the Hugging Face model hub.
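
The sketch below (not the exact training code for this model) illustrates how these two stages could be set up with Transformers: a vocabulary is built from placeholder transcripts and wrapped in a `Wav2Vec2CTCTokenizer`, which is paired with the `facebook/wav2vec2-base` checkpoint named above and a freshly initialised CTC head. `train_texts` and the commented push-to-Hub calls are illustrative assumptions.

```python
# Hedged sketch of the two-stage setup described above; `train_texts` is a
# placeholder for the real transcripts, not the actual training data.
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

train_texts = ["habari ya asubuhi", "karibu sana"]  # placeholder transcripts

# Stage 1: build a character vocabulary and wrap it in a CTC tokenizer.
vocab_dict = {ch: i for i, ch in enumerate(sorted(set("".join(train_texts))))}
vocab_dict["|"] = vocab_dict.pop(" ")   # "|" marks word boundaries
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
with open("vocab.json", "w") as f:
    json.dump(vocab_dict, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16_000,
    padding_value=0.0,
    do_normalize=True,
    return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Stage 2: load the pretrained encoder with a CTC head sized to the vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# After fine-tuning, both artifacts can be pushed to the Hub, e.g.:
# processor.push_to_hub("nickdee96/wav2vec2-large-xls-r-300m-sw")
# model.push_to_hub("nickdee96/wav2vec2-large-xls-r-300m-sw")
```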

### Technical Specifications
The ASR system uses the `Wav2Vec2ForCTC` architecture from the Hugging Face Transformers library. The model combines a pretrained wav2vec 2.0 encoder with a linear Connectionist Temporal Classification (CTC) head, trained together end-to-end, which makes it well suited to speech recognition tasks. Performance during training is measured with the Word Error Rate (WER).
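
One possible way (an assumption, not the card's actual training script) to compute WER during evaluation is a `compute_metrics` callback built on the `evaluate` library:

```python
# Hedged sketch of a WER callback for Trainer-style evaluation.
import numpy as np
import evaluate
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("nickdee96/wav2vec2-large-xls-r-300m-sw")
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    """Decode CTC logits and reference labels, then score them with WER."""
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # Label positions set to -100 are ignored by the loss; map them back to the pad token.
    label_ids = np.where(pred.label_ids == -100, processor.tokenizer.pad_token_id, pred.label_ids)

    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
```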

### Compute Infrastructure
The training was performed using the following compute infrastructure:

| [Compute](https://instances.vantage.sh/aws/ec2/g5.8xlarge#Compute) | Value |
| ------------------------------------------------------------------------------------------ | ------------- |
| vCPUs | 32 |
| Memory (GiB) | 128.0 |
| Memory per vCPU (GiB) | 4.0 |
| Physical Processor | AMD EPYC 7R32 |
| Clock Speed (GHz) | 2.8 |
| CPU Architecture | x86_64 |
| GPUs | 1 |
| GPU Model | NVIDIA A10G |
| Video Memory (GiB) | 24 |
| GPU Compute Capability [(?)](https://handbook.vantage.sh/aws/reference/aws-gpu-instances/) | 7.5 |
| FPGAs | 0 |

## About THiNK
THiNK is a technology initiative driven by a community of innovators and businesses. It offers a collaborative platform with services that assist businesses across all sectors, particularly on their digital transformation journey.