## Model Details

- **Base Model**: LLaMA-based architecture
- **Parameter-Efficient Fine-tuning**: We used **LoRA** adapters or 4-bit quantization to reduce GPU memory usage.
- **Data**:
  1. **Suicide detection** (Reddit & Twitter) – labeled as “suicidal” vs. “non-suicidal.”
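To make the parameter-efficient fine-tuning point concrete, here is a minimal NumPy sketch of the LoRA idea. This is an illustration only, not the model's actual training code: the hidden size `d`, rank `r`, and scaling `alpha` are assumed example values, and a real setup would apply such adapters inside a LLaMA attention layer (e.g. via a library like PEFT) rather than to a standalone matrix.

```python
import numpy as np

# LoRA in a nutshell: instead of updating a full d x d weight matrix W,
# train two small matrices A (r x d) and B (d x r) with rank r << d.
# The effective weight becomes W + (alpha / r) * (B @ A), so only
# 2 * d * r parameters are trainable instead of d * d.

d, r, alpha = 4096, 8, 16           # illustrative hidden size, rank, scaling

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))     # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d)) * 0.01  # small random init
B = np.zeros((d, r))                # zero init: the update starts at zero

def lora_forward(x):
    """Forward pass: frozen base projection plus the low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = d * d                 # parameters a full fine-tune would touch
lora_params = 2 * d * r             # parameters LoRA actually trains
print(f"full fine-tune params: {full_params:,}")   # 16,777,216
print(f"LoRA trainable params: {lora_params:,}")   # 65,536
print(f"reduction factor:      {full_params // lora_params}x")  # 256x
```

Because `B` is initialized to zero, the adapted model is exactly the pretrained model at step zero; training then moves only `A` and `B`, which is what keeps GPU memory low (and combines naturally with keeping `W` in 4-bit precision).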