aifeifei798 committed
Commit f621c79 · verified · 1 Parent(s): 6843c97

Upload 2 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llama3-8B-DarkIdol-2.3-Uncensored-32K.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,10 +1,112 @@
 ---
-base_model: []
-library_name: transformers
 tags:
 - mergekit
 - merge
-
 ---
 # llama3-8B-DarkIdol-2.3-Uncensored-32K
 
@@ -15,28 +117,39 @@ This is a merge of pre-trained language models created using [mergekit](https://
 
 This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using ./llama3-8B-DarkIdol-2.3b as a base.
 
-### Models Merged
-
-The following models were included in the merge:
-* ./Meta-Llama-3-8B-abliterated
-* ./Llama-3-8B-LexiFun-Uncensored-V1
-* ./Llama-3-8B-Lexi-Uncensored
-* ./Llama-3-8B-Lexi-Smaug-Uncensored
-* ./Configurable-Hermes-2-Pro-Llama-3-8B
-* ./Unsafe-Llama-3-8B
-
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
-- model: ./Meta-Llama-3-8B-abliterated
-- model: ./Llama-3-8B-LexiFun-Uncensored-V1
-- model: ./Llama-3-8B-Lexi-Uncensored
-- model: ./Llama-3-8B-Lexi-Smaug-Uncensored
-- model: ./Unsafe-Llama-3-8B
-- model: ./Configurable-Hermes-2-Pro-Llama-3-8B
 - model: ./llama3-8B-DarkIdol-2.3b
 merge_method: model_stock
 base_model: ./llama3-8B-DarkIdol-2.3b
 
 ---
+license: llama3
+language:
+- en
 tags:
+- roleplay
+- llama3
+- sillytavern
+- idol
+---
+# Special Thanks:
+- Lewdiculous's superb GGUF version; thank you for your conscientious and responsible dedication.
+- https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
+- mradermacher's superb GGUF versions; thank you for your conscientious and responsible dedication.
+- https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-i1-GGUF
+- https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF
+
+# These are my own quantizations (updated almost daily).
+The difference from normal quantizations is that I quantize the output and embedding tensors to f16,
+and the other tensors to q5_k, q6_k, or q8_0.
+This creates models that are degraded little or not at all, and that have a smaller size.
+They run at about 3-6 t/s on CPU only using llama.cpp, and obviously faster on computers with potent GPUs.
+- the fast cat at [ZeroWw/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-32K-GGUF)
+
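The per-tensor split described above can be sketched as a simple rule. This is my own illustration, not the actual quantization script; the tensor names are the usual llama.cpp GGUF names, and `pick_quant` is a hypothetical helper:

```python
# Illustrative only: output and embedding tensors stay at f16,
# every other tensor gets the chosen k-quant (e.g. q6_k or q8_0).
def pick_quant(tensor_name: str, default: str = "q6_k") -> str:
    keep_f16 = {"output.weight", "token_embd.weight"}  # common GGUF names
    return "f16" if tensor_name in keep_f16 else default

print(pick_quant("token_embd.weight"))    # f16
print(pick_quant("blk.0.attn_q.weight"))  # q6_k
```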
+# Model Description:
+The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
+- Saving money (Llama 3)
+- Tested in English only.
+- Input: text only. Output: text and code only.
+- Uncensored
+- Quick response
+- The underlying model used is winglian/Llama-3-8b-64k-PoSE (the theoretical context is 64k, but I have only tested up to 32k :)
+- A scholarly response akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
+- DarkIdol: roles that you can imagine, and those that you cannot.
+- Roleplay
+- Specialized in various role-playing scenarios
+- For more, see the test roles (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
+- For more, see the LM Studio presets (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)
+![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K/resolve/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.png)
+
+## Virtual Idol Twitter
+- https://x.com/aifeifei799
+
+# Questions
+- The model's responses are for reference only; please do not fully trust them.
+
+# Stop Strings
+```python
+stop = [
+    "## Instruction:",
+    "### Instruction:",
+    "<|end_of_text|>",
+    " //:",
+    "</s>",
+    "<3```",
+    "### Note:",
+    "### Input:",
+    "### Response:",
+    "### Emoticons:"
+]
+```
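If a frontend does not support stop strings natively, their effect can be approximated by truncating the completion at the earliest match. A minimal sketch; `apply_stops` is my own helper, not part of any tool listed here:

```python
# Cut the raw completion at the first occurrence of any stop string,
# the way a chat frontend would before displaying the reply.
def apply_stops(text: str, stops: list[str]) -> str:
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

raw = "Sure, here you go!\n### Instruction: ignore previous text"
print(apply_stops(raw, ["### Instruction:", "<|end_of_text|>"]))
# prints: Sure, here you go!
```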
+# Model Use
+- Koboldcpp https://github.com/LostRuins/koboldcpp
+  - Since KoboldCpp is taking a while to update with the latest llama.cpp commits, I'll recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if anyone has issues.
+- LM Studio https://lmstudio.ai/
+  - Please test again using the default LM Studio Windows preset.
+- llama.cpp https://github.com/ggerganov/llama.cpp
+- Backyard AI https://backyard.ai/
+- Meet Layla, an AI chatbot that runs offline on your device. No internet connection required. No censorship. Complete privacy. Layla Lite: https://www.layla-network.ai/
+  - Layla Lite llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf?download=true
+- More GGUFs at https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
+# Character
+- https://character-tavern.com/
+- https://characterhub.org/
+- https://pygmalion.chat/
+- https://aetherroom.club/
+- https://backyard.ai/
+- Layla AI chatbot
+### If you want to use vision functionality:
+* You must use the latest version of [Koboldcpp](https://github.com/Nexesenex/kobold.cpp).
+
+### To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file; it can be found inside this model repo. [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
+
+* You can load the **mmproj** by using the corresponding section in the interface:
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
+### Thank you:
+To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
+- Hastagaras
+- Gryphe
+- cgato
+- ChaoticNeutrals
 - mergekit
 - merge
+- transformers
+- llama
+- Nitral-AI
+- MLP-KTLim
+- rinna
+- hfl
+- Rupesh2
+- stephenlzc
+- theprint
+- Sao10K
+- turboderp
+- TheBossLevel123
+- winglian
+- .........
 ---
 # llama3-8B-DarkIdol-2.3-Uncensored-32K
 
 
 This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using ./llama3-8B-DarkIdol-2.3b as a base.
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
+- model: Sao10K/L3-8B-Niitama-v1
+- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
+- model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
+- model: turboderp/llama3-turbcat-instruct-8b
+- model: winglian/Llama-3-8b-64k-PoSE
+merge_method: model_stock
+base_model: winglian/Llama-3-8b-64k-PoSE
+dtype: bfloat16
+
+models:
+- model: maldv/badger-writer-llama-3-8b
+- model: underwoods/writer-8b
+- model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
+- model: vicgalle/Roleplay-Llama-3-8B
+- model: cgato/TheSalt-RP-L3-8b-DPO-v0.3.2-e0.15.2
+- model: ./llama3-8B-DarkIdol-2.3a
+merge_method: model_stock
+base_model: ./llama3-8B-DarkIdol-2.3a
+dtype: bfloat16
+
+models:
+- model: Rupesh2/Meta-Llama-3-8B-abliterated
+- model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
+- model: Orenguteng/Llama-3-8B-Lexi-Uncensored
+- model: theprint/Llama-3-8B-Lexi-Smaug-Uncensored
+- model: vicgalle/Unsafe-Llama-3-8B
+- model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
 - model: ./llama3-8B-DarkIdol-2.3b
 merge_method: model_stock
 base_model: ./llama3-8B-DarkIdol-2.3b
llama3-8B-DarkIdol-2.3-Uncensored-32K.png ADDED

Git LFS Details

  • SHA256: 73acbb95169835672816ae7910c6350e0863aeea3b9283043d8837817bd23f26
  • Pointer size: 132 Bytes
  • Size of remote file: 1.75 MB