Update README.md
README.md CHANGED
@@ -1,4 +1,9 @@
 ---
+license: apache-2.0
+base_model:
+- nbeerbower/flammen17-mistral-7B
+datasets:
+- jondurbin/py-dpo-v0.1
 library_name: transformers
 tags:
 - 4-bit
@@ -6,10 +11,24 @@ tags:
 - text-generation
 - autotrain_compatible
 - endpoints_compatible
+- experimental
 pipeline_tag: text-generation
 inference: false
 quantized_by: Suparious
 ---
-#
-
-**UPLOAD IN PROGRESS**
+# nbeerbower/flammen17-py-DPO-v1-7B AWQ
+
+- Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
+- Original model: [flammen17-py-DPO-v1-7B](https://huggingface.co/nbeerbower/flammen17-py-DPO-v1-7B)
+
+
+
+## Model Summary
+
+A Mistral 7B LLM built from merging pretrained models and finetuning on [Jon Durbin](https://huggingface.co/jondurbin)'s [py-dpo-v0.1](https://huggingface.co/datasets/jondurbin/py-dpo-v0.1).
+
+Finetuned using an A100 on Google Colab. 🙏
+
+[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
+
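The summary and the linked Labonne article describe the DPO fine-tuning recipe only in prose. The sketch below is a rough illustration using `trl`'s `DPOTrainer` with the base model and dataset named in the card; it is an assumption of how such a run could look (argument names vary across `trl` versions), not the author's actual training script.

```python
# Rough DPO fine-tuning sketch (assumption: trl's DPOTrainer with the
# DPOConfig API; NOT the author's exact training code).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "nbeerbower/flammen17-mistral-7B"        # base model listed in the card
dataset = load_dataset("jondurbin/py-dpo-v0.1", split="train")

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

args = DPOConfig(
    output_dir="flammen17-py-dpo",
    beta=0.1,                        # strength of the preference penalty vs. the reference model
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,                     # a frozen reference copy is created internally
    args=args,
    train_dataset=dataset,           # expected to expose prompt / chosen / rejected columns
    processing_class=tokenizer,
)
trainer.train()
```

DPO works on preference pairs, so the dataset is assumed to provide `prompt`, `chosen`, and `rejected` columns; `beta` controls how closely the fine-tuned policy is kept to the reference model.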
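Since the card advertises a 4-bit AWQ checkpoint with `pipeline_tag: text-generation`, a minimal loading sketch with `transformers` (with `autoawq` installed) might look like the following. The repo id is a placeholder, not taken from the card.

```python
# Minimal sketch: load a 4-bit AWQ checkpoint and generate text.
# NOTE: "<quantized-repo-id>" is a hypothetical placeholder for the repo
# that hosts these AWQ weights; requires `autoawq` alongside `transformers`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<quantized-repo-id>"   # placeholder, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",             # place the quantized weights on available GPUs
    low_cpu_mem_usage=True,
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The AWQ kernels handle dequantization inside the linear layers, so the call pattern is the same as for a full-precision checkpoint; only the memory footprint changes.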