---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
library_name: transformers
tags:
- mergekit
- merge

---
<img src="https://huggingface.co/Alsebay/L3-8B-SMaid-v0.1/resolve/main/cover/cover.png" alt="img" style="width: 60%; min-width: 120px; height:80%; min-height: 200px; max-width:360px; max-height:600px; display: block">

> [!IMPORTANT]
> Thanks so much to @mradermacher for helping me find out that LumiMaid uses the 'smaug-bpe' pre-tokenizer. That means none of its quants are usable, so for now you can only load this model with Transformers (this may be fixed or gain support in the future).
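
Since the quants are unusable for now, here is a minimal Transformers loading sketch (the repo id is this model's; the dtype and device placement are illustrative assumptions):

```python
# Minimal sketch: load SMaid with plain Transformers, since the GGUF
# quants are currently broken by the 'smaug-bpe' pre-tokenizer issue.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/L3-8B-SMaid-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype; adjust to your hardware
    device_map="auto",
)
```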

# Update: Both versions need different presets (settings) to work well
Overall:

Sao10K Stheno > SMaid V0.3 > SMaid V0.1 in the Chai Benchmark.

SMaid V0.1 = Sao10K Stheno > SMaid V0.3 in my custom EQ bench (sadness, deep thought, and depression test).

Disclaimer: same seed, same character card, same scenario; four tries for each model.

# The best of my L3-8B merge series. I chose the two best variants to publish.

SMaid-V0.1: smarter, understands content well, better at novel-style writing. I like this version.

[SMaid-V0.3](https://huggingface.co/Alsebay/L3-8B-SMaid-v0.3): an upgrade from v0.1. More talkative, active, energetic (wrong settings, lol).

There is no V0.2 because I deleted it; it was the worst model of the series.

I think Stheno and LumiMaid complement each other like yin and yang, so I combined them, lol. Tested on Chaiverse, both of them scored > 1195 Elo from the beginning. (Thanks to Sao10K for letting me know about ChaiVerse :) )

SMaid = Stheno (it's very good) + LumiMaid (not as strong, but its writing style is good)

**Recommended presets (feedback is welcome if you find better settings)**

```
Temperature - 1.1-1.25
Min-P - 0.075
Top-K - 50
Top_P - 0.5
Repetition Penalty - 1.1
```
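
For reference, a sketch of how this preset maps onto a Transformers `generate` call, continuing from the loading sketch above (these are the standard `transformers` sampler argument names; `min_p` requires a recent release, and the prompt is just a placeholder):

```python
# Sketch: the recommended preset expressed as generation arguments.
prompt = "Hello, who are you?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.15,       # recommended range: 1.1-1.25
    min_p=0.075,
    top_k=50,
    top_p=0.5,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```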

---

# Below is auto-generated by Mergekit

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) as a base.
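
As a rough intuition for what the DARE step does per weight tensor (a toy sketch, not mergekit's actual implementation; the shapes and the 0.1 noise scale are arbitrary):

```python
# Toy sketch of DARE (Drop-And-REscale) on one tensor: randomly drop a
# fraction (1 - density) of a model's delta from the base weights, then
# rescale the survivors so the expected delta is unchanged.
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each delta element with probability `density`, rescaled."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density

base = torch.randn(4, 4)
finetuned = base + 0.1 * torch.randn(4, 4)        # stand-in for a tuned model
sparse_delta = dare(finetuned - base, density=0.5)
merged = base + 1.0 * sparse_delta                 # `weight` from the config scales here
```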

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml

slices:
- sources:
  - layer_range: [0, 16]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: 1.0
  - layer_range: [0, 16]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.5
      weight: 0.9
- sources:
  - layer_range: [16, 24]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.75
      weight: 0.5
  - layer_range: [16, 24]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.25
      weight: 0.5
- sources:
  - layer_range: [24, 32]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: 0.5
  - layer_range: [24, 32]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.5
      weight: 1.0
merge_method: dare_ties
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
  int8_mask: true
dtype: bfloat16

```
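
To reproduce the merge, the config above can be fed to mergekit's Python API; a sketch assuming the YAML is saved as `smaid.yaml` (a hypothetical filename) and `mergekit` is installed:

```python
# Sketch: run the merge config above through mergekit's Python API.
# Assumes `pip install mergekit` and the YAML saved as smaid.yaml.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("smaid.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./L3-8B-SMaid-v0.1",       # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,
    ),
)
```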