Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


MFANNv0.14.10 - bnb 4bits
- Model creator: https://huggingface.co/netcat420/
- Original model: https://huggingface.co/netcat420/MFANNv0.14.10/

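This upload is a bitsandbytes 4-bit quantization. As a minimal sketch (the quantization settings shown here, nf4 with float16 compute, are common defaults and an assumption, not confirmed settings of this upload), the original model can also be loaded in 4-bit on the fly with `transformers` and `bitsandbytes`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization config; nf4 + float16 compute are typical defaults,
# assumed here rather than taken from this specific upload.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "netcat420/MFANNv0.14.10"  # original model from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)

prompt = "Explain TIES merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading this way trades a slower first load for not having to store separate quantized weights.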
Original model description:
---
base_model:
- netcat420/MFANNv0.14
- MaziyarPanahi/Llama-3-8B-Instruct-v0.4
- netcat420/MFANNv0.13
library_name: transformers
tags:
- mergekit
- merge

---
# MFANNv0.14.10

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [MaziyarPanahi/Llama-3-8B-Instruct-v0.4](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.4) as the base model.

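The three TIES steps (trim low-magnitude deltas, elect a sign per parameter, average only sign-agreeing entries) can be sketched on toy vectors. This is an illustrative simplification, not mergekit's implementation; the `density` argument mirrors the density values in the configuration below.

```python
import numpy as np

def ties_merge(deltas, density=0.7):
    """Toy TIES merge over per-model deltas from the base model.

    (1) Trim: keep only the top-`density` fraction of entries by magnitude.
    (2) Elect: choose a sign per entry by summed (magnitude-weighted) vote.
    (3) Disjoint merge: average only trimmed entries agreeing with that sign.
    """
    trimmed = []
    for d in deltas:
        k = int(round(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]  # k-th largest magnitude
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    elected_sign = np.sign(stacked.sum(axis=0))           # sign election
    agree = (np.sign(stacked) == elected_sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)             # avoid divide-by-zero
    return (stacked * agree).sum(axis=0) / counts         # disjoint mean

# Two models' parameter deltas from a shared base (toy numbers).
base = np.zeros(4)
delta_a = np.array([0.9, -0.1,  0.5, -0.4])
delta_b = np.array([0.8,  0.2, -0.6, -0.3])
merged = base + ties_merge([delta_a, delta_b], density=0.5)
print(merged)  # entries with conflicting signs keep only the winning side
```

Note how the third entry keeps only the larger, sign-winning delta instead of averaging the conflicting values, which is the point of TIES over naive averaging.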
### Models Merged

The following models were included in the merge:
* [netcat420/MFANNv0.14](https://huggingface.co/netcat420/MFANNv0.14)
* [netcat420/MFANNv0.13](https://huggingface.co/netcat420/MFANNv0.13)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: netcat420/MFANNv0.14
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: netcat420/MFANNv0.13
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.4
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
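Assuming mergekit is installed, a configuration like the one above is applied with the `mergekit-yaml` command; the output path and `--cuda` flag here are illustrative, not the exact invocation used for this model.

```shell
# Save the YAML above as config.yml, then run the merge.
pip install mergekit
mergekit-yaml config.yml ./MFANNv0.14.10 --cuda
```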