Hamid-Nazeri committed
Commit c80209e · verified · 1 parent: 4db7744

Update README.md

Files changed (1): README.md +6 −1
README.md CHANGED
```diff
@@ -17,10 +17,15 @@ In this organization, you can find models in both the original Meta format as we
 
 Current:
 
-* **Llama 3.3:** The Llama 3.3 is a text only instruct-tuned model in 70B size (text in/text out).
+**Llama 4** The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
+
+These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series, Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts.
+
+
 
 History:
 
+* **Llama 3.3:** The Llama 3.3 is a text only instruct-tuned model in 70B size (text in/text out).
 * **Llama 3.2:** The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out).
 * **Llama 3.2 Vision:** The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text + images in / text out)
 * **Llama 3.1:** a collection of pretrained and fine-tuned text models with sizes ranging from 8 billion to 405 billion parameters pre-trained on ~15 trillion tokens.
```
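The added README text describes Llama 4 as a mixture-of-experts model (17B active parameters routed among 16 or 128 experts). As a rough illustration of what expert routing means, here is a minimal top-k MoE layer sketch in plain NumPy. This is not Meta's implementation; every name, shape, and design choice below (a linear gate, per-expert linear projections, renormalized top-k mixing) is an illustrative assumption.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=1):
    """Illustrative top-k mixture-of-experts layer (NOT the Llama 4 code).

    x         : (tokens, d)            token hidden states
    gate_w    : (d, n_experts)         gating network weights (assumed linear)
    expert_ws : (n_experts, d, d)      one linear projection per expert
    Each token is scored by the gate, routed to its top-k experts, and the
    expert outputs are mixed by the renormalized softmax gate weights.
    """
    logits = x @ gate_w                               # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)             # softmax over experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[-top_k:]           # indices of top-k experts
        mix = probs[t, top] / probs[t, top].sum()     # renormalize their weights
        for w, e in zip(mix, top):
            out[t] += w * (x[t] @ expert_ws[e])       # weighted expert outputs
    return out
```

The point of the sketch: only `top_k` of the `n_experts` projections run per token, which is how an MoE model can hold far more total parameters than it activates for any single token.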