AuriAetherwiing committed
Commit 2f1146c · verified · 1 Parent(s): 2b4bd30

Update README.md

Files changed (1)
  1. README.md +65 -65
README.md CHANGED
@@ -2,6 +2,17 @@
 library_name: transformers
 license: other
 base_model: Qwen/Qwen2.5-72B
+datasets:
+- anthracite-org/kalo-opus-instruct-22k-no-refusal
+- Nopm/Opus_WritingStruct
+- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
+- Gryphe/Sonnet3.5-Charcard-Roleplay
+- Gryphe/ChatGPT-4o-Writing-Prompts
+- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
+- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
+- nothingiisreal/Reddit-Dirty-And-WritingPrompts
+- allura-org/Celeste-1.x-data-mixture
+- cognitivecomputations/dolphin-2.9.3
 tags:
 - generated_from_trainer
 model-index:
@@ -9,8 +20,59 @@ model-index:
 results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+
+# EVA Qwen2.5-72B v0.1
+
+<p>
+An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-72B on a mixture of synthetic and natural data.<br>
+It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br>
+</p>
+
+<p>Dedicated to Nev.</p>
+
+<p><b>Version notes for 0.1</b>: Reprocessed dataset (via Cahvay) and readjusted training config for 8xH100 SXM. Significant improvements in instruction following, long-context understanding and overall coherence over v0.0.</p>
+
+<p>Prompt format is ChatML.</p>
+
+<h3>Recommended sampler values:</h3>
+<ul>
+<li>Temperature: 1</li>
+<li>Min-P: 0.05</li>
+<li>Top-A: 0.2</li>
+<li>Repetition Penalty: 1.03</li>
+</ul>
+
+<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
+
+- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
+- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
+
+<h3>Training data:</h3>
+<ul>
+<li>Celeste 70B 0.1 data mixture minus the Opus Instruct subset. See that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
+<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
+<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe.</li>
+<li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe.</li>
+<li>Synthstruct and SynthRP datasets by Epiculous.</li>
+<li>A subset of Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.</li>
+</ul>
+
+<h3>Training time and hardware:</h3>
+<ul><li>15 hours on 8xH100 SXM, provided by <a href="https://featherless.ai/">FeatherlessAI</a></li></ul>
+
+<p>The model was created by Kearm, Auri and Cahvay.</p>
+<h4>Special thanks:</h4>
+<ul>
+<li><b>to Cahvay for his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning,</b></li>
+<li><b>to <a href="https://featherless.ai/">FeatherlessAI</a> for generously providing an 8xH100 SXM node for training this model,</b></li>
+<li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data,</li>
+<li>and to Allura-org for support, feedback, beta-testing and quality control of EVA models.</li>
+</ul>
 
 [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
 <details><summary>See axolotl config</summary>
@@ -424,66 +486,4 @@ weight_decay: 0.1
 # fsdp_mixed_precision: BF16 # Added
 ```
 
-</details><br>
-
-# EVA-Qwen2.5-72B-SFFT-v0.1
-
-This model is a fine-tuned version of [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.9789
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 1
-- eval_batch_size: 1
-- seed: 42
-- distributed_type: multi-GPU
-- num_devices: 8
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 64
-- total_eval_batch_size: 8
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 20
-- num_epochs: 3
-
-### Training results
-
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 1.3353        | 0.0061 | 1    | 1.2986          |
-| 1.0318        | 0.2549 | 42   | 0.9304          |
-| 0.9864        | 0.5099 | 84   | 0.9031          |
-| 0.9114        | 0.7648 | 126  | 0.9029          |
-| 0.4781        | 1.0182 | 168  | 0.9177          |
-| 0.4764        | 1.2736 | 210  | 0.9251          |
-| 0.4871        | 1.5289 | 252  | 0.9055          |
-| 0.5003        | 1.7842 | 294  | 0.8990          |
-| 0.2145        | 2.0356 | 336  | 0.9696          |
-| 0.2008        | 2.2902 | 378  | 0.9782          |
-| 0.1909        | 2.5447 | 420  | 0.9783          |
-| 0.1773        | 2.7992 | 462  | 0.9789          |
-
-
-### Framework versions
-
-- Transformers 4.45.1
-- Pytorch 2.4.0+cu121
-- Datasets 2.21.0
-- Tokenizers 0.20.2
+</details><br>
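
For reference, a minimal usage sketch of the settings the new card recommends, assuming the model is loaded with Transformers and inherits Qwen2.5's ChatML chat template. The repo id below is an assumption (not stated in this commit), and Top-A has no built-in equivalent in Transformers' `generate()`, so it is left to frontends such as SillyTavern:

```python
# Hedged sketch, not from the model card: applies the recommended
# sampler values (Temperature 1, Min-P 0.05, Repetition Penalty 1.03).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Qwen2.5 tokenizers ship a ChatML chat template, matching the card's
# "Prompt format is ChatML" note (<|im_start|>role ... <|im_end|>).
messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening scene of a heist story."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Top-A (0.2) is omitted: generate() has no built-in Top-A sampler.
# min_p requires a reasonably recent transformers release (>= 4.39).
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    min_p=0.05,
    repetition_penalty=1.03,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```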