Upload folder using huggingface_hub

- .gitattributes +7 -35
- Quyen-Pro-Max-v0.1-Q2_K.gguf +3 -0
- Quyen-Pro-Max-v0.1-Q3_K_L.gguf +3 -0
- Quyen-Pro-Max-v0.1-Q4_K_M.gguf +3 -0
- Quyen-Pro-Max-v0.1-Q5_K_M.gguf-part-a +3 -0
- Quyen-Pro-Max-v0.1-Q5_K_M.gguf-part-b +3 -0
- Quyen-Pro-Max-v0.1-Q6_K.gguf-part-a +3 -0
- Quyen-Pro-Max-v0.1-Q6_K.gguf-part-b +3 -0
- README.md +63 -0
- huggingface-metadata.txt +34 -0
- merges.txt +0 -0
.gitattributes CHANGED
@@ -1,35 +1,7 @@
-
-
-
-
-
-
-
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q5_K_M.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q5_K_M.gguf-part-b filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q6_K.gguf-part-a filter=lfs diff=lfs merge=lfs -text
+Quyen-Pro-Max-v0.1-Q6_K.gguf-part-b filter=lfs diff=lfs merge=lfs -text
Quyen-Pro-Max-v0.1-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:647aff74ef3b78f59dddb1ef84f22e104c2a1c699f85e61638624ee252e32954
+size 28461062400
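Each of the ADDED `.gguf` entries in this commit is a three-line Git LFS pointer file, not the weights themselves; the actual bytes live in LFS storage under the listed `oid`. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is ours, for illustration; the pointer text is the Q2_K entry above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:647aff74ef3b78f59dddb1ef84f22e104c2a1c699f85e61638624ee252e32954
size 28461062400
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # size of the real Q2_K file in bytes
print(info["oid"])   # sha256 digest of the stored content
```

The `size` field is what the repository browser reports for the file, and the `oid` is the checksum LFS uses to address the blob.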
Quyen-Pro-Max-v0.1-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7b8720a9f94d5afd932b03fd6caf04b7c4a58b162d5a32f81c1138fd0feba60
+size 38486137088

Quyen-Pro-Max-v0.1-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64dd9fcf22c6bece41b9d339058b137667e277a824cc6b8bfcb3ef8ac828a918
+size 44104177920

Quyen-Pro-Max-v0.1-Q5_K_M.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4eb164f25c601f6f8a0daa68aacbe94743ddcce9e858d5372f5c95c0153c85f4
+size 25653161088

Quyen-Pro-Max-v0.1-Q5_K_M.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d4cc02c73f8f0999301a91141788064d27ce5fd0a82e14a49db7a6e596a5abd
+size 25653161088

Quyen-Pro-Max-v0.1-Q6_K.gguf-part-a ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48ce0938c2065a7efd937557eb56f834c98d73f7772c0dedbdb4f4819ad7f733
+size 29657558144

Quyen-Pro-Max-v0.1-Q6_K.gguf-part-b ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d6cef011822e4f82facb68b7c7a87d3773e27d299162dc522aed6b6d9f2c4d8
+size 29657558144
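The Q5_K_M and Q6_K quants are uploaded as `-part-a`/`-part-b` files split for upload limits, so after download they would typically be concatenated byte-for-byte back into a single `.gguf` (on a shell, `cat part-a part-b > file.gguf`). A sketch of the same operation in Python, demonstrated on tiny stand-in files since the real parts are tens of GB:

```python
def join_parts(parts, dest):
    """Concatenate the part files into dest, streaming in 1 MiB chunks."""
    with open(dest, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                while chunk := f.read(1 << 20):
                    out.write(chunk)

# Demo with tiny stand-ins; order matters: part-a first, then part-b.
with open("demo.gguf-part-a", "wb") as f:
    f.write(b"head")
with open("demo.gguf-part-b", "wb") as f:
    f.write(b"tail")
join_parts(["demo.gguf-part-a", "demo.gguf-part-b"], "demo.gguf")
print(open("demo.gguf", "rb").read())  # b'headtail'
```

For the real files, the joined size should equal the sum of the two `size` fields in the pointers above (e.g. 2 × 25653161088 bytes for Q5_K_M).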
README.md ADDED
@@ -0,0 +1,63 @@
+---
+library_name: transformers
+license: other
+datasets:
+- teknium/OpenHermes-2.5
+- LDJnr/Capybara
+- Intel/orca_dpo_pairs
+- argilla/distilabel-capybara-dpo-7k-binarized
+language:
+- en
+pipeline_tag: text-generation
+---
+
+# Quyen
+<img src="quyen.webp" width="512" height="512" alt="Quyen">
+
+# Model Description
+Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced six different versions:
+
+- **Quyen-SE (0.5B)**
+- **Quyen-Mini (1.8B)**
+- **Quyen (4B)**
+- **Quyen-Plus (7B)**
+- **Quyen-Pro (14B)**
+- **Quyen-Pro-Max (72B)**
+
+All models were trained with SFT and DPO on the following datasets:
+
+- *OpenHermes-2.5* by **Teknium**
+- *Capybara* by **LDJ**
+- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
+- *orca_dpo_pairs* by **Intel**
+- and private data by **Ontocord** & **BEE-spoke-data**
+
+# Prompt Template
+- All Quyen models use ChatML as the default template:
+
+```
+<|im_start|>system
+You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
+<|im_start|>user
+Hello world.<|im_end|>
+<|im_start|>assistant
+```
+
+- You can also use `apply_chat_template`:
+
+```python
+messages = [
+    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
+    {"role": "user", "content": "Hello world."}
+]
+gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+model.generate(gen_input)
+```
+
+# Benchmarks
+
+- Coming soon! We will update the benchmarks later.
+
+# Acknowledgement
+- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
+- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
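The ChatML template shown in the README can be reproduced without loading a tokenizer, which is handy for checking what `apply_chat_template` will feed the model. A minimal sketch, where the `to_chatml` helper is ours for illustration (the transformers method does the equivalent from the template stored with the tokenizer):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts in ChatML form."""
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open the assistant turn so the model continues from here.
        out += "<|im_start|>assistant\n"
    return out

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world."},
]
print(to_chatml(messages))
```

The output matches the template in the README: each turn is wrapped in `<|im_start|>role … <|im_end|>`, and generation continues after the trailing `<|im_start|>assistant`.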
huggingface-metadata.txt ADDED
@@ -0,0 +1,34 @@
+url: https://huggingface.co/vilm/Quyen-Pro-Max-v0.1
+branch: main
+download date: 2024-02-06 12:41:18
+sha256sum:
+    b5e723701f9189c6341b11a8b5abdaca19838eb26b3a739ee47aa9333ac58406 pytorch_model-00001-of-00030.bin
+    0f3571d082e12670df4bd37977e49ebcd216d363815d5ad34873755c23e29559 pytorch_model-00002-of-00030.bin
+    b29fb3705e5ed29fcaa30dfde7ad126e145e26b5d61e94b0f9b4100f38cca7bc pytorch_model-00003-of-00030.bin
+    6beacb3b5e5250dd5416695d2f304bff9965f0222177cd54ce3389958a8fbf7b pytorch_model-00004-of-00030.bin
+    f0bcf82d1c2d3d5b1727aad703842731b18c254093075895e13ba3dba8ddda6b pytorch_model-00005-of-00030.bin
+    dd0a8d88eae3f2d217e25428a0bead92f530496cefbf8d59136077a13ed8dd12 pytorch_model-00006-of-00030.bin
+    3a39fa019c618945afe160bc3dcdcfe13109b80d6ccbf6bd51089d34927afd16 pytorch_model-00007-of-00030.bin
+    4835397d22a3873788efc239cd231c8a28b9074bb09c8763733ff4e3e1fe3243 pytorch_model-00008-of-00030.bin
+    ec5266249c093e52ac131860ce82da48571003b0e7e15cd61f95e5a929fff175 pytorch_model-00009-of-00030.bin
+    b4b5bddafce7fc9d33e9cf51891b6a706ed854988a3282c31fac723b492100cf pytorch_model-00010-of-00030.bin
+    5710bf00a9442f145ce9292fb34e572837933e26b3fea793518db746e19a05d9 pytorch_model-00011-of-00030.bin
+    a7cc3f315c90b4088bcb49df8da12ff9f64d214a646ae48d9bc3e7e216dd30f5 pytorch_model-00012-of-00030.bin
+    1b0bf669f6a88a1badae42ad498629cf3e579221bcb2c2c39640597c7adb0487 pytorch_model-00013-of-00030.bin
+    3458275d165c5871981e7cec4cf6612f74665e1b7446888b08dcc1a0f37a52a1 pytorch_model-00014-of-00030.bin
+    97490810a62d0066a1f25b8c48d97c980e5dad9d57cf0c56c2eab8d63dcaf1fd pytorch_model-00015-of-00030.bin
+    c54ff0613065047f1f2833179bafdd964f549516da482491da710fe875d257e9 pytorch_model-00016-of-00030.bin
+    abe4a3e63818bb717b2faaaa932abbf282680420110270eb44c5f456db52b03d pytorch_model-00017-of-00030.bin
+    902ed056ab6db0d93b0707f471b4e517f4c15fc2457c21db8f9958c93b1fdc5c pytorch_model-00018-of-00030.bin
+    495563200ef15dee3b7531f746eaaf9a72f78a363bdd214aa3153ff29cbcb159 pytorch_model-00019-of-00030.bin
+    224c999ebdadc8546de87fcfdd015837aa731e20bd914c9ea54d575be43b7bce pytorch_model-00020-of-00030.bin
+    3a43ce4750ca93e34d60e0e2e8784c6efb30b7e2766579625141279653eb842b pytorch_model-00021-of-00030.bin
+    3c1ec287a1948a05f69db9e97dbf919cbe63936374d51b1090e247be5816ef20 pytorch_model-00022-of-00030.bin
+    41e2e8e8bc7eecf9ad409443bedec34ce6e2bd0eda8194513ca76cbbc68a6343 pytorch_model-00023-of-00030.bin
+    a8acdbb0a5f0439a9abcab925d92bc070c393f3a82db885e40176f9789402532 pytorch_model-00024-of-00030.bin
+    14c346cee257c0e04fd6cfe72670e44ed01be57a042060ade265c5350ad970d7 pytorch_model-00025-of-00030.bin
+    c76075e21eecf21ce47ae019ef3454eb111e1f0d9662f4d2d685b0f19e47ad02 pytorch_model-00026-of-00030.bin
+    f392fba089edd704e6d0113774730dfd9db9aea8bc6b96ea8433434b65e4bc03 pytorch_model-00027-of-00030.bin
+    6749f1dfe540beda096dc11527cc6fc300b7a1500abc697542dfd5d61c53541f pytorch_model-00028-of-00030.bin
+    dc80bf660517e7d5a0b25d1d3e447c83ea81e5aef681cc987f2ecb03fa40af75 pytorch_model-00029-of-00030.bin
+    994d75dec49cf52785e3688af0d8c159da4282454426f46e95c04bc2f8e03866 pytorch_model-00030-of-00030.bin
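The `sha256sum:` block above records the source-model shard checksums, so downloads can be verified before quantizing. A minimal sketch of computing one (demonstrated on a tiny stand-in file, since the real shards are multi-GB; the streaming loop is what makes this practical for files that don't fit in RAM):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through sha256 in 1 MiB chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a tiny stand-in; the same call works on the pytorch_model-*.bin shards,
# and the result should match the corresponding line in huggingface-metadata.txt.
with open("demo.bin", "wb") as f:
    f.write(b"hello")
print(sha256_of("demo.bin"))
```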
merges.txt ADDED
The diff for this file is too large to render. See raw diff.