Commit 10756e6 · Parent: 14eb179

README.md CHANGED
@@ -9,16 +9,22 @@ tags:
---

## Model description

This model classifies speakers from the frequency-domain representation of speech recordings, obtained via the Fast Fourier Transform (FFT).

The model is a 1D convolutional network with residual connections for audio classification.
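
A residual block of 1D convolutions in Keras could look roughly like the sketch below. This is a minimal illustration of the idea rather than the exact block used in this model; the filter counts, kernel sizes, and pooling settings are assumptions.

```python
from tensorflow import keras

def residual_block(x, filters, conv_num=3, activation="relu"):
    # Shortcut branch: 1x1 convolution so channel counts match for the add.
    shortcut = keras.layers.Conv1D(filters, 1, padding="same")(x)
    for _ in range(conv_num - 1):
        x = keras.layers.Conv1D(filters, 3, padding="same")(x)
        x = keras.layers.Activation(activation)(x)
    x = keras.layers.Conv1D(filters, 3, padding="same")(x)
    # Residual connection, then activation and downsampling.
    x = keras.layers.Add()([x, shortcut])
    x = keras.layers.Activation(activation)(x)
    return keras.layers.MaxPool1D(pool_size=2, strides=2)(x)

# Example usage on a frequency-domain input of an assumed length of 8000 bins:
inputs = keras.layers.Input(shape=(8000, 1))
outputs = residual_block(inputs, 16)
```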

This repo contains the model for the notebook [**Speaker Recognition**](https://keras.io/examples/audio/speaker_recognition_using_cnn/).

Full credits go to [**Fadi Badine**](https://twitter.com/fadibadine).

## Dataset Used

This model uses the [**speaker recognition dataset**](https://www.kaggle.com/kongaevans/speaker-recognition-dataset) from Kaggle.

## Intended uses & limitations

This should be run with `TensorFlow 2.3` or higher, or `tf-nightly`.

The noise samples in the dataset need to be resampled to a sampling rate of 16000 Hz before being used with this model. To do this, you will need to have `ffmpeg` installed; see the sketch below.
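
The resampling can be done with a small script along these lines (a sketch, assuming `ffmpeg` is on the PATH and the noise files live in a hypothetical `noise/` folder):

```python
import subprocess
from pathlib import Path

SAMPLING_RATE = 16000  # target rate expected by the model

def resample_to_16k(src: Path, dst: Path) -> None:
    """Resample one audio file to 16000 Hz with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-loglevel", "error", "-y",
         "-i", str(src), "-ar", str(SAMPLING_RATE), str(dst)],
        check=True,
    )

# Resample every .wav under the (assumed) noise/ folder, writing *_16k.wav copies.
for wav in Path("noise").rglob("*.wav"):
    resample_to_16k(wav, wav.with_name(wav.stem + "_16k.wav"))
```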

## Training and evaluation data

During dataset preparation, the speech samples and background noise samples were sorted into two folders, `audio` and `noise`, and the noise samples were resampled to 16000 Hz. The background noise was then added to the speech samples to augment the data, and the FFT of these samples was given to the model for training and evaluation.

## Training procedure