Speaker embeddings extractor
This model produces speaker embeddings for automatic speaker verification (ASV). To verify whether two voice signals come from the same speaker, the model is applied to each signal to compute an embedding vector; the cosine similarity between the two embeddings is then used to compare the voices.
The model has been derived from the self-supervised pretrained model WavLM-large (microsoft/wavlm-large).
Usage
The following code snippet uses the file spk_embeddings.py to build the model architecture; the weights are then downloaded from this repository.
from spk_embeddings import EmbeddingsModel, compute_embedding
import torch

# Build the architecture and download the pretrained weights from this repository
model = EmbeddingsModel.from_pretrained("Orange/Speaker-wavLM-id")
model.eval()  # inference mode: disables dropout and other training-time behavior
The model produces normalized embedding vectors.
The Python file also contains the compute_embedding function, which computes the embedding vector of an audio file. In this tutorial version, the audio file is expected to be sampled at 16 kHz. Depending on the available memory (CPU or GPU), you may change the value of the max_size parameter, which is used to truncate long audio signals.
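For reference, here is a minimal sketch of what such a helper might look like. This is an illustration only: it assumes torchaudio for audio loading, a 30-second default truncation, and that the model can be called directly on a waveform tensor; the actual implementation is the one shipped in spk_embeddings.py and may differ.

import torch
import torchaudio

# Illustrative sketch only -- the real helper is compute_embedding in spk_embeddings.py
def compute_embedding_sketch(wav_path, model, max_size=16000 * 30):
    signal, sr = torchaudio.load(wav_path)  # (channels, samples)
    if sr != 16000:  # the model expects 16 kHz audio
        signal = torchaudio.functional.resample(signal, sr, 16000)
    signal = signal[:, :max_size]  # truncate long signals to bound memory usage
    with torch.no_grad():
        embedding = model(signal)  # forward signature assumed for illustration
    return embedding  # normalized embedding vector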
Finally, we can compute two embeddings from two different files and compare them with a cosine similarity:
# Two utterances from the same speaker (VoxCeleb1 id10270)
wav1 = "/voxceleb1_2019/test/wav/id10270/x6uYqmx31kE/00001.wav"
wav2 = "/voxceleb1_2019/test/wav/id10270/8jEAjG6SegY/00008.wav"
e1 = compute_embedding(wav1, model)
e2 = compute_embedding(wav2, model)
# Since the embeddings are normalized, this dot product is the cosine similarity
sim = float(torch.matmul(e1, e2.t()))
print(sim)  # 0.7334115505218506
Evaluations
The model has been evaluated on the standard ASV VoxCeleb1-clean test set. It achieves an Equal Error Rate (EER) of 0.946% with a decision threshold of 0.388. Lower EER values indicate better verification performance; a random predictor yields an EER of 50%.
Please note that the EER value can vary slightly depending on the max_size used to truncate long audio signals (30 seconds in our case).
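To illustrate how the reported threshold can be used in practice, here is a short sketch of a binary verification decision. The same_speaker helper is hypothetical (it is not part of spk_embeddings.py); only the threshold value comes from the evaluation above.

# Hypothetical helper, not part of spk_embeddings.py: turn a similarity
# score into a same-speaker decision using the reported threshold.
THRESHOLD = 0.388  # decision threshold reported on VoxCeleb1-clean

def same_speaker(e1, e2, threshold=THRESHOLD):
    sim = float(torch.matmul(e1, e2.t()))  # cosine similarity of normalized embeddings
    return sim >= threshold

print(same_speaker(e1, e2))  # True for the example above (0.733 > 0.388)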
Limitations
The fine-tuning data used to produce this model (VoxCeleb1 and VoxCeleb2) are mostly in English, which may affect performance on other languages. Performance may also vary with audio quality (recording device, background noise, etc.), especially for audio conditions not covered by the training set, since no specific technique, e.g. data augmentation, was used during training to address this issue.
Publication
This model was used as a baseline in the context of voice characterization (prosodic and timbral cues) in the study described in the following research paper: Disentangling prosody and timbre embeddings via voice conversion.
In this paper the model is denoted as W-SPK. The other two models used in this study are also available on Hugging Face.
Citation
Gengembre, N., Le Blouch, O., Gendrot, C. (2024) Disentangling prosody and timbre embeddings via voice conversion. Proc. Interspeech 2024, 2765-2769, doi: 10.21437/Interspeech.2024-207
BibTeX citation
@inproceedings{gengembre24_interspeech,
  title     = {Disentangling prosody and timbre embeddings via voice conversion},
  author    = {Nicolas Gengembre and Olivier {Le Blouch} and Cédric Gendrot},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {2765--2769},
  doi       = {10.21437/Interspeech.2024-207},
  issn      = {2958-1796},
}
License
Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)