parquet-converter committed on
Commit 496f15a · 1 Parent(s): 5c1297a

Update parquet files

README.md DELETED
@@ -1,33 +0,0 @@
- ---
- pretty_name: CMU ARCTIC X-Vectors
- task_categories:
- - text-to-speech
- - audio-to-audio
- license: mit
- ---
-
- # Speaker embeddings extracted from CMU ARCTIC
-
- There is one `.npy` file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.
-
- The [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) dataset divides the utterances among the following speakers:
-
- - bdl (US male)
- - slt (US female)
- - jmk (Canadian male)
- - awb (Scottish male)
- - rms (US male)
- - clb (US female)
- - ksp (Indian male)
-
- The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.
-
- Usage:
-
- ```python
- import torch
- from datasets import load_dataset
-
- embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-
- speaker_embeddings = embeddings_dataset[7306]["xvector"]
- speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0)
- ```
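The deleted README notes that the x-vectors were produced by the linked prep script with the `speechbrain/spkrec-xvect-voxceleb` model. Below is a minimal sketch of extracting one such 512-element embedding for a single utterance; the wav path is hypothetical, the audio is assumed to be 16 kHz mono, and any normalization applied by the original script is glossed over.

```python
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier  # moved to speechbrain.inference in newer releases

# Load the pretrained x-vector speaker encoder.
classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-xvect-voxceleb")

# Hypothetical path to one CMU ARCTIC utterance; the model expects 16 kHz mono audio.
waveform, sample_rate = torchaudio.load("arctic_a0001.wav")
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

with torch.no_grad():
    # encode_batch returns a [batch, 1, 512] tensor; squeeze it down to a 512-element vector.
    xvector = classifier.encode_batch(waveform).squeeze()

print(xvector.shape)  # expected: torch.Size([512])
```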
 
cmu-arctic-xvectors.py DELETED
@@ -1,45 +0,0 @@
- # coding=utf-8
-
- import os
- import numpy as np
- import datasets
-
-
- _DATA_URL = "https://huggingface.co/datasets/Matthijs/cmu-arctic-xvectors/resolve/main/spkrec-xvect.zip"
-
-
- class ArcticXvectors(datasets.GeneratorBasedBuilder):
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="default",
-             version=datasets.Version("0.0.1", ""),
-             description="",
-         )
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             features=datasets.Features(
-                 {
-                     "filename": datasets.Value("string"),
-                     "xvector": datasets.Sequence(feature=datasets.Value(dtype="float32"), length=512),
-                 }
-             ),
-         )
-
-     def _split_generators(self, dl_manager):
-         archive = os.path.join(dl_manager.download_and_extract(_DATA_URL), "spkrec-xvect")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION, gen_kwargs={"files": dl_manager.iter_files(archive)}
-             ),
-         ]
-
-     def _generate_examples(self, files):
-         for i, file in enumerate(sorted(files)):
-             if os.path.basename(file).endswith(".npy"):
-                 yield str(i), {
-                     "filename": os.path.basename(file)[:-4],  # strip off .npy
-                     "xvector": np.load(file),
-                 }
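With this commit the loading script above is deleted and the validation split is served from the converted parquet file instead. Below is a minimal sketch of what loading looks like after the conversion, assuming the parquet split keeps the same `filename` and `xvector` columns that the script produced.

```python
from datasets import load_dataset

# With the script gone, the Hub resolves the "default" config to the
# auto-converted parquet split, so no custom builder code runs.
ds = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")

print(ds.column_names)        # expected: ['filename', 'xvector']
print(len(ds[0]["xvector"]))  # expected: 512
```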
 
spkrec-xvect.zip → default/cmu-arctic-xvectors-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:28ea1b685a49fedce92d1af7e68b22bf511a23432bc7a13d621a4deeee9fe9a1
- size 17943510
+ oid sha256:9521dad9a9dcb0237598fde11ae880502cdc3d4a7f5b9fd135f6bdc88d00e4b2
+ size 21283426
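The rename above only swaps the Git LFS pointer: the zip of per-utterance `.npy` files is replaced by a parquet file with a new oid and size. Below is a minimal sketch of checking a locally downloaded copy against the pointer shown here; the local path is hypothetical.

```python
import hashlib
import os

# Hypothetical local path to the downloaded parquet file.
path = "default/cmu-arctic-xvectors-validation.parquet"

# Git LFS identifies a file by its SHA-256 digest ("oid") and its size in bytes.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print(digest.hexdigest())     # pointer expects 9521dad9a9dcb0237598fde11ae880502cdc3d4a7f5b9fd135f6bdc88d00e4b2
print(os.path.getsize(path))  # pointer expects 21283426
```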