Update LMFlow support (#6)
Co-authored-by: Shizhe <[email protected]>
README.md CHANGED
@@ -115,6 +115,41 @@ print(f"Model response: {response}")

```

## Finetuning Hymba

[LMFlow](https://github.com/OptimalScale/LMFlow) is a complete pipeline for fine-tuning large language models.
The following steps provide an example of how to fine-tune the `Hymba-1.5B-Base` model using LMFlow.

1. Start the Docker container:

```
docker pull ghcr.io/tilmto/hymba:v1
docker run --gpus all -v /home/$USER:/home/$USER -it ghcr.io/tilmto/hymba:v1 bash
```

2. Install LMFlow:

```
git clone https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .
```

3. Fine-tune the model:

```
cd LMFlow
bash ./scripts/run_finetune_hymba.sh
```

With LMFlow, you can also fine-tune the model on your own dataset; the only step required is to convert it into the [LMFlow data format](https://optimalscale.github.io/LMFlow/examples/DATASETS.html), sketched below.
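As an illustration only, here is a minimal sketch of a `text_only` dataset in that format (the directory and file names and the example texts are placeholders; see the linked docs for the full schema and the other dataset types):

```
# Placeholder paths: LMFlow reads the JSON files inside the dataset directory.
mkdir -p data/my_dataset
cat > data/my_dataset/train.json <<'EOF'
{
  "type": "text_only",
  "instances": [
    {"text": "First training document."},
    {"text": "Second training document."}
  ]
}
EOF
```

LMFlow's stock finetuning scripts usually take this directory via a `--dataset_path` argument; check `run_finetune_hymba.sh` for the exact flag it expects.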
In addition to full finetuning, you can also fine-tune Hymba efficiently with [DoRA](https://arxiv.org/html/2402.09353v4), [LoRA](https://github.com/OptimalScale/LMFlow?tab=readme-ov-file#lora), [LISA](https://github.com/OptimalScale/LMFlow?tab=readme-ov-file#lisa), [Flash Attention](https://github.com/OptimalScale/LMFlow/blob/main/readme/flash_attn2.md), and other acceleration techniques.
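For a rough idea of what a LoRA run looks like, here is a sketch modeled on LMFlow's generic LoRA example rather than a Hymba-specific recipe (the script name, flag names, model id, and output path are assumptions; verify them against the LMFlow docs linked above):

```
cd LMFlow
# Assumed invocation based on LMFlow's LoRA example; flag names can differ
# between LMFlow versions, so inspect the script before running.
./scripts/run_finetune_with_lora.sh \
  --model_name_or_path nvidia/Hymba-1.5B-Base \
  --dataset_path data/my_dataset \
  --output_lora_path output_models/hymba_lora
```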
For more details, please refer to the [LMFlow for Hymba](https://github.com/OptimalScale/LMFlow/tree/main/experimental/Hymba) documentation.
## Evaluation
We use [`LM Evaluation Harness`](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the model. The evaluation commands are as follows: