Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ quantized_by: MaziyarPanahi
 
 ## Model Description
 
-Calme-7B is a state-of-the-art language model with 24 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-4x7B models excel in generating text that resonates with clarity, calmness, and coherence.
+Calme-4x7B is a Mixture of Experts (MoE) model, integrating four state-of-the-art Calme-7B models. Essentially, Calme-4x7B is composed of four Calme-7B models that have been individually fine-tuned, featuring two experts per token. This configuration brings the total to over 24 billion parameters. Calme-4x7B models are distinguished by their ability to generate text with exceptional clarity, calmness, and coherence.
 
 ### How to Use
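The new description's sizes can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a Mixtral-style MoE in which only the FFN blocks are duplicated per expert while attention and embeddings are shared once; the 2/3 FFN share is an assumption for illustration, not a figure from the model card. The card's "over 24 billion" figure sits between this shared-layer estimate and the naive 4 × 7B = 28B upper bound.

```python
# Rough parameter arithmetic for a 4-expert MoE built from 7B models.
# Assumption: ~2/3 of a dense 7B model is FFN (Mixtral-style sharing);
# this fraction is illustrative, not taken from the model card.
base = 7.0            # billions of parameters in one Calme-7B expert
n_experts = 4         # Calme-7B models merged into Calme-4x7B
top_k = 2             # experts routed per token
ffn_share = 2 / 3     # assumed FFN fraction of a dense 7B model

shared = base * (1 - ffn_share)              # attention/embeddings, kept once
total = shared + n_experts * base * ffn_share
active = shared + top_k * base * ffn_share   # parameters used per token

print(f"total ≈ {total:.1f}B, active per token ≈ {active:.1f}B")
```

Because only `top_k` of the four expert FFNs fire per token, inference cost scales with the "active" count rather than the full total, which is the usual motivation for this kind of merge.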