AvivBick committed
Commit 2b5a9d1 · verified · 1 Parent(s): 6cc1bd8

Update README.md

Files changed (1):
  1. README.md +74 -3
README.md CHANGED: the previous frontmatter (`license: mit`) is replaced by the full model card below.

---
tags:
- Llamba
- recurrent-models
- distillation
- cartesia
- edge
license: apache-2.0
library_name: cartesia-pytorch
datasets:
- ai2_arc
- PIQA
- Winogrande
- HellaSwag
- Lambada
- MMLU
- OpenBookQA
inference:
  precision: bf16
  hardware: gpu
---

# Llamba Models

The Llamba models are distilled recurrent language models, part of Cartesia's [Edge](https://github.com/cartesia-ai/edge) library, designed for efficient, high-performance inference on GPU (via PyTorch) and Apple silicon (via MLX).

For more details, refer to the [paper](https://arxiv.org/abs/2502.14458).

---
## Usage

### Llamba on PyTorch

To use Llamba with PyTorch:

1. Install the required package:
```bash
pip install --no-binary :all: cartesia-pytorch
```
2. Load and run the model:
```python
from transformers import AutoTokenizer
from cartesia_pytorch.Llamba.llamba import LlambaLMHeadModel

# Load the Llamba-8B checkpoint and move it to the GPU.
model = LlambaLMHeadModel.from_pretrained("cartesia-ai/Llamba-8B", strict=True).to("cuda")

# Llamba uses the Llama-3.1 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Tokenize a prompt and generate a completion of up to 100 tokens.
input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_length=100)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```

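The card's metadata lists bf16 as the inference precision. As a minimal sketch (assuming `LlambaLMHeadModel` supports standard `torch.nn.Module` dtype casting, which the snippet above suggests but does not confirm), the loaded model can be cast to bf16 before generating:

```python
import torch

# Assumed follow-on to the snippet above: cast the loaded model to bfloat16,
# the precision listed in this card's metadata. This relies on generic
# nn.Module casting, not on an API documented by cartesia-pytorch itself.
model = model.to(dtype=torch.bfloat16)

# Generation then proceeds exactly as before.
output = model.generate(input_ids, max_length=100)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```
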
### Llamba on MLX

To run Llamba on Apple silicon with the Metal framework, see [cartesia-metal](https://github.com/cartesia-ai/edge/tree/main/cartesia-metal).

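The exact setup steps live in the linked repository; as a hedged guess mirroring the PyTorch package above, installation is likely a pip install of the companion package:

```bash
# Assumed command, by analogy with cartesia-pytorch above; consult the
# cartesia-metal README in the Edge repository for the authoritative steps.
pip install cartesia-metal
```
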
---
## Evaluations

The Llamba models have been evaluated on multiple standard benchmarks, demonstrating efficiency gains while maintaining strong performance. Below are the results (WG = Winogrande, HS = HellaSwag, LMB = Lambada, OBQA = OpenBookQA):

| Model | ARC-C (0-shot) | ARC-C (25-shot) | ARC-E (0-shot) | ARC-E (25-shot) | PIQA (0-shot) | PIQA (10-shot) | WG (0-shot) | WG (5-shot) |
|-----------|------|------|------|------|------|------|------|------|
| Llamba-1B | 37.2 | 41.8 | 69.5 | 71.2 | 74.0 | 74.3 | 60.6 | 58.1 |
| Llamba-3B | 48.5 | 53.0 | 79.0 | 81.1 | 78.6 | 79.5 | 70.4 | 72.4 |
| Llamba-8B | 54.6 | 60.0 | 82.5 | 85.8 | 80.9 | 81.5 | 73.3 | 76.9 |

| Model | HS (0-shot) | HS (10-shot) | LMB (0-shot) | LMB (10-shot) | MMLU (0-shot) | MMLU (5-shot) | OBQA (0-shot) | OBQA (10-shot) |
|-----------|------|------|------|------|------|------|------|------|
| Llamba-1B | 61.2 | 60.2 | 48.4 | 39.0 | 38.0 | 31.3 | 37.0 | 38.0 |
| Llamba-3B | 73.8 | 74.3 | 65.8 | 60.0 | 52.7 | 50.3 | 42.8 | 42.8 |
| Llamba-8B | 77.6 | 78.7 | 69.4 | 65.0 | 61.0 | 60.0 | 43.4 | 45.8 |

More details on model performance, benchmarks, and evaluation metrics can be found in the [paper](https://arxiv.org/abs/2502.14458).
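
The card does not state which evaluation harness produced these numbers. As a hedged sketch, a comparable grid could be run with EleutherAI's lm-evaluation-harness, assuming the checkpoint loads through the standard Hugging Face `hf` interface (if it instead requires the `cartesia-pytorch` classes, a custom `lm_eval` model wrapper would be needed; task names below follow `lm_eval` conventions, with `lambada_openai` standing in for Lambada):

```bash
pip install lm-eval

# Zero-shot sweep over the benchmarks reported above; rerun with a different
# --num_fewshot (e.g. 25 for ARC, 10 for PIQA/HellaSwag) for the few-shot columns.
lm_eval --model hf \
  --model_args pretrained=cartesia-ai/Llamba-8B,dtype=bfloat16 \
  --tasks arc_challenge,arc_easy,piqa,winogrande,hellaswag,lambada_openai,mmlu,openbookqa \
  --num_fewshot 0 \
  --batch_size 8
```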