---
license: apache-2.0
tags:
- mixtral
- dense
- mistral
- expert
---

# Unmixtraled 8x22B expert 1

> [!WARNING]
> This model outputs gibberish, as it was not trained under the dense configuration. Finetuning or merging is needed to make this model useful.

This is a 22B Mistral model recycling weights from [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1).
The model was adapted from a Mixtral architecture to a dense Mistral architecture with the same number of layers, attention heads and hidden dimensions.
Embeddings, attention, layer norms and LM head weights were taken directly from the 8x22B model; all MLP weights were taken from expert 1.

The following named weight correspondence was used:

| Mistral weight | Mixtral weight |
|----------------|----------------|
| `gate_proj`    | `experts.1.w1` |
| `down_proj`    | `experts.1.w2` |
| `up_proj`      | `experts.1.w3` |

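For reference, the remapping can be reproduced at the state dict level. The sketch below is illustrative only, not the exact script used to build this model: it assumes the Hugging Face `transformers` module naming for Mixtral (`block_sparse_moe.experts.{i}.w1/w2/w3`) and Mistral (`mlp.gate_proj/down_proj/up_proj`), copies every non-MoE tensor verbatim, and keeps only expert 1's MLP weights.

```python
# Illustrative sketch: copy the Mixtral trunk into a dense Mistral model,
# keeping only expert EXPERT's MLP weights. Not the author's original script.
import re
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

EXPERT = 1
SRC = "mistral-community/Mixtral-8x22B-v0.1"

# For a real 8x22B checkpoint you would stream the safetensors shards instead
# of materializing the whole model in memory; this keeps the sketch short.
moe = AutoModelForCausalLM.from_pretrained(SRC, torch_dtype=torch.bfloat16)
moe_sd = moe.state_dict()

dense_sd = {}
for name, tensor in moe_sd.items():
    if "block_sparse_moe" not in name:
        # Embeddings, attention, layer norms and lm_head are copied verbatim.
        dense_sd[name] = tensor
        continue
    m = re.match(rf".*experts\.{EXPERT}\.(w[123])\.weight$", name)
    if m is None:
        continue  # drop the router gate and the other experts
    proj = {"w1": "gate_proj", "w2": "down_proj", "w3": "up_proj"}[m.group(1)]
    dense_name = name.replace(
        f"block_sparse_moe.experts.{EXPERT}.{m.group(1)}", f"mlp.{proj}"
    )
    dense_sd[dense_name] = tensor

# Build a dense Mistral config with the same dimensions as the Mixtral trunk.
cfg = moe.config
dense_cfg = MistralConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    rms_norm_eps=cfg.rms_norm_eps,
    rope_theta=cfg.rope_theta,
    max_position_embeddings=cfg.max_position_embeddings,
    sliding_window=getattr(cfg, "sliding_window", None),
)
dense = MistralForCausalLM(dense_cfg).to(torch.bfloat16)
dense.load_state_dict(dense_sd)
dense.save_pretrained("Unmixtraled-22B-v0.1-expert-1")  # output dir is arbitrary
```
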
## Unmixtraled models

| Model | Source | Wikitext perplexity |
|-------|--------|---------------------|
| [Unmixtraled-22B-v0.1-expert-0](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-0) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 0 MLPs | 696.6932983398438 |
| [**Unmixtraled-22B-v0.1-expert-1**](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-1) | **Mixtral 8x22B embed, attn, layernorm, lm_head + expert 1 MLPs** | **6853.04248046875** |
| [Unmixtraled-22B-v0.1-expert-2](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-2) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 2 MLPs | 4689.181640625 |
| [Unmixtraled-22B-v0.1-expert-3](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-3) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 3 MLPs | 782.3755493164062 |
| [Unmixtraled-22B-v0.1-expert-4](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-4) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 4 MLPs | 2844.943603515625 |
| [Unmixtraled-22B-v0.1-expert-5](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-5) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 5 MLPs | 1099.32373046875 |
| [Unmixtraled-22B-v0.1-expert-6](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-6) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 6 MLPs | 341.5309753417969 |
| [Unmixtraled-22B-v0.1-expert-7](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-7) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 7 MLPs | 2099.63818359375 |
| [Unmixtraled-22B-v0.1-lerp](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-lerp) | Mixtral 8x22B embed, attn, layernorm, lm_head + linear merge of expert 0-7 MLPs | 1873.9874267578125 |
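
A Wikitext perplexity figure of this kind can be computed along the following lines. This is a hedged sketch, not the evaluation script used for the table above: the dataset split (`wikitext-2-raw-v1` test), context length and stride are assumptions.

```python
# Sketch of a Wikitext perplexity measurement; split, context length and
# stride are assumptions, so numbers may differ from the table above.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thomasgauthier/Unmixtraled-22B-v0.1-expert-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
enc = tokenizer(text, return_tensors="pt")

max_len, stride = 2048, 2048  # assumed context window and stride
nlls, n_tokens = [], 0
for begin in range(0, enc.input_ids.size(1) - 1, stride):
    input_ids = enc.input_ids[:, begin : begin + max_len].to(model.device)
    with torch.no_grad():
        # Labels are shifted internally; loss is the mean NLL per predicted token.
        loss = model(input_ids, labels=input_ids).loss
    n = input_ids.size(1) - 1
    nlls.append(loss.float() * n)
    n_tokens += n

print("perplexity:", torch.exp(torch.stack(nlls).sum() / n_tokens).item())
```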