---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2_uclass_clipped_10_seconds
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# w2v2_uclass_clipped_10_seconds

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
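
The card does not state the downstream task, but fine-tunes of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) are most commonly CTC-based speech recognizers. Below is a minimal inference sketch under that assumption; the repo id is a placeholder for wherever this checkpoint is hosted, and the silent clip stands in for real 16 kHz audio:

```python
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForCTC

# Placeholder repo id; replace with the actual Hub location of this checkpoint.
model_id = "w2v2_uclass_clipped_10_seconds"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# w2v-bert-2.0 expects 16 kHz mono audio; a 10-second silent clip stands in here.
audio = np.zeros(16_000 * 10, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = logits.argmax(dim=-1)
print(processor.batch_decode(pred_ids))
```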

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
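
These values map directly onto `transformers.TrainingArguments`. A sketch reproducing the configuration, where `output_dir` is a placeholder and the Adam betas/epsilon listed above are the Trainer defaults, so they are not set explicitly:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v2_uclass_clipped_10_seconds",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```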

### Training results



### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1