---

license: apache-2.0
base_model: asapp/sew-d-tiny-100k-ft-ls100h
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sewd-classifier-aug-large
  results: []
---



# sewd-classifier-aug-large

This model is a fine-tuned version of [asapp/sew-d-tiny-100k-ft-ls100h](https://huggingface.co/asapp/sew-d-tiny-100k-ft-ls100h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2094
- Accuracy: 0.1712
- Precision: 0.0867
- Recall: 0.1712
- F1: 0.0887
- Binary: 0.4166

## Model description

This checkpoint fine-tunes the SEW-D tiny speech encoder ([asapp/sew-d-tiny-100k-ft-ls100h](https://huggingface.co/asapp/sew-d-tiny-100k-ft-ls100h)) with a classification head for an audio classification task. The label set and training data are not documented in this card.

## Intended uses & limitations

Intended uses and limitations are not documented. Given the evaluation accuracy of 0.1712, this checkpoint is best treated as an experimental baseline rather than a production classifier. A minimal loading sketch follows.
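
If the checkpoint was saved with a standard audio-classification head, it should load through the `transformers` auto classes. This is a minimal sketch, assuming a hypothetical repo id and the 16 kHz mono input that SEW-D base checkpoints expect:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Hypothetical Hub path; substitute the actual location of this checkpoint.
model_id = "sewd-classifier-aug-large"

feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# Dummy 1-second mono clip; SEW-D checkpoints expect 16 kHz float audio.
waveform = np.zeros(16_000, dtype=np.float32)

inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```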

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
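
For reference, a `TrainingArguments` sketch that mirrors the list above might look like this; `output_dir` and the mapping of "Native AMP" to `fp16=True` are assumptions, and the Adam settings are the library defaults the card reports:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sewd-classifier-aug-large",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```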



### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Binary |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| No log        | 0.1   | 50   | 4.4166          | 0.0108   | 0.0010    | 0.0108 | 0.0013 | 0.1291 |
| No log        | 0.2   | 100  | 4.3510          | 0.0135   | 0.0069    | 0.0135 | 0.0026 | 0.1367 |
| No log        | 0.29  | 150  | 4.2531          | 0.0323   | 0.0016    | 0.0323 | 0.0030 | 0.2372 |
| No log        | 0.39  | 200  | 4.1748          | 0.0350   | 0.0022    | 0.0350 | 0.0039 | 0.2779 |
| No log        | 0.49  | 250  | 4.0970          | 0.0431   | 0.0076    | 0.0431 | 0.0094 | 0.2969 |
| No log        | 0.59  | 300  | 4.0268          | 0.0499   | 0.0179    | 0.0499 | 0.0116 | 0.3198 |
| No log        | 0.69  | 350  | 3.9648          | 0.0472   | 0.0029    | 0.0472 | 0.0054 | 0.3175 |
| No log        | 0.78  | 400  | 3.8977          | 0.0553   | 0.0279    | 0.0553 | 0.0164 | 0.3283 |
| No log        | 0.88  | 450  | 3.8290          | 0.0768   | 0.0239    | 0.0768 | 0.0230 | 0.3437 |
| No log        | 0.98  | 500  | 3.7650          | 0.0701   | 0.0313    | 0.0701 | 0.0252 | 0.3418 |
| 4.166         | 1.08  | 550  | 3.7060          | 0.0997   | 0.0444    | 0.0997 | 0.0418 | 0.3625 |
| 4.166         | 1.18  | 600  | 3.6526          | 0.1132   | 0.0497    | 0.1132 | 0.0501 | 0.3733 |
| 4.166         | 1.27  | 650  | 3.5931          | 0.1132   | 0.0433    | 0.1132 | 0.0509 | 0.3732 |
| 4.166         | 1.37  | 700  | 3.5467          | 0.1240   | 0.0664    | 0.1240 | 0.0558 | 0.3805 |
| 4.166         | 1.47  | 750  | 3.4981          | 0.1213   | 0.0589    | 0.1213 | 0.0553 | 0.3792 |
| 4.166         | 1.57  | 800  | 3.4544          | 0.1361   | 0.0606    | 0.1361 | 0.0641 | 0.3904 |
| 4.166         | 1.67  | 850  | 3.4081          | 0.1456   | 0.0557    | 0.1456 | 0.0655 | 0.3980 |
| 4.166         | 1.76  | 900  | 3.3696          | 0.1536   | 0.0812    | 0.1536 | 0.0786 | 0.4040 |
| 4.166         | 1.86  | 950  | 3.3213          | 0.1523   | 0.0609    | 0.1523 | 0.0745 | 0.4038 |
| 4.166         | 1.96  | 1000 | 3.2814          | 0.1779   | 0.0991    | 0.1779 | 0.0922 | 0.4217 |
| 3.6654        | 2.06  | 1050 | 3.2435          | 0.1550   | 0.0883    | 0.1550 | 0.0737 | 0.4061 |
| 3.6654        | 2.16  | 1100 | 3.2094          | 0.1712   | 0.0867    | 0.1712 | 0.0887 | 0.4166 |
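
The card does not say how precision, recall, and F1 were averaged (recall tracking accuracy exactly is consistent with weighted averaging), nor what the `Binary` column measures. A plausible `compute_metrics` sketch, under the weighted-averaging assumption and omitting the undocumented `Binary` metric, is:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Averaging mode "weighted" is an assumption; the card does not document it.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```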





### Framework versions

- Transformers 4.38.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1