athirdpath committed
Commit 48d1228 · Parent: dadd3d2

Update README.md

Files changed (1): README.md (+13, -17)
README.md CHANGED
@@ -1,36 +1,32 @@
 ---
-license: apache-2.0
+license: cc-by-nc-4.0
 base_model: athirdpath/BigMistral-11b
 tags:
 - generated_from_trainer
 model-index:
 - name: qlora
   results: []
+language:
+- en
+pipeline_tag: text-generation
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
 # qlora

 This model is a fine-tuned version of [athirdpath/BigMistral-11b](https://huggingface.co/athirdpath/BigMistral-11b) on the athirdpath/Merge_Glue dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.9174

-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+<p align="center"><font size="7"> <b>Before and After Example</b></font></p>
+<p align="center"><font size="4"> <b>Example model is athirdpath/CleverMage-11b</b></font></p>
+<p align="center"><font size="5"> <b>Examples with LoRA (min_p, alpaca)</b></font></p>
+<p align="center"><img src="https://iili.io/JzsmBWv.png"/>
+<p align="center"><img src="https://iili.io/JzsmqzJ.png"/>
+<p align="center"><font size="5"> <b>Examples without LoRA (min_p, chatML)</b></font></p>
+<p align="center"><img src="https://iili.io/JzsmKba.png"/>
+<p align="center"><img src="https://iili.io/JzsmCsR.png"/>

 ### Training hyperparameters
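
The updated card contrasts generations with and without the LoRA applied (alpaca-style prompts with the adapter, chatML without). As context only, here is a minimal sketch, not part of the commit, of how such an adapter is typically attached to the base model with transformers and peft; the adapter repo id "athirdpath/qlora" is a placeholder assumption, since the commit page does not name the hosting repo.

```python
# Minimal sketch (assumption: this repo hosts a PEFT/LoRA adapter for the base model).
# "athirdpath/qlora" is a placeholder repo id -- substitute the actual adapter repo.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "athirdpath/BigMistral-11b"   # base model named in the card
adapter_id = "athirdpath/qlora"         # placeholder / assumption

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

# Alpaca-style prompt, matching the "with LoRA" examples in the card
prompt = "### Instruction:\nWrite a short scene about a clever mage.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```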