---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Fine-Tuned LLaMA Empathy
## Model Summary
Fine-Tuned LLaMA Empathy is a large language model fine-tuned to enhance emotional understanding and generate needs-based responses. This model is designed for use in psychology, therapy, conflict resolution, human-computer interaction, and online moderation. It is based on the Meta-Llama-3.1-8B-Instruct model and utilizes LoRA (Low-Rank Adaptation) for efficient fine-tuning.
## Model Details
### Model Description
- **Developed by:** AI Medical in collaboration with Ruslanmv.com
- **Funded by:**
- **Shared by:** AI Medical
- **Model type:** Fine-tuned Meta-Llama-3.1-8B-Instruct
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International License (CC BY 4.0)
- **Fine-tuned from model:** meta-llama/Meta-Llama-3.1-8B-Instruct
### Model Sources
- **Repository:** [Hugging Face Model Repository](https://huggingface.co/ruslanmv/fine_tuned_llama_empathy)
## Uses
### Direct Use
- **Psychology & Therapy:** Assisting professionals in understanding and responding empathetically to patient emotions.
- **Conflict Resolution:** Helping mediators decode emotional expressions and address underlying needs.
- **Human-Computer Interaction:** Enhancing chatbots and virtual assistants with emotionally aware responses.
- **Social Media Moderation:** Reducing toxicity and improving online discourse through need-based responses.
- **Education:** Supporting emotional intelligence training and communication skill development.
### Downstream Use
- Fine-tuning for specialized applications in mental health, conflict resolution, or AI-driven assistance.
- Integration into virtual therapists, mental health applications, and online support systems.
### Out-of-Scope Use
- Not a substitute for professional psychological evaluation or medical treatment.
- Not suitable for high-risk applications requiring absolute accuracy in emotional interpretation.
## Bias, Risks, and Limitations
- **Bias:** As with any NLP model, biases can arise from the training data and methodology; LLaMA-family base models in particular have documented biases that fine-tuning may not remove.
- **Risk of Misinterpretation:** Emotional expressions are subjective and may be misclassified in complex scenarios.
- **Generalization Limitations:** May not fully capture cultural and contextual variations in emotional expressions.
### Recommendations
Users should verify outputs before applying them in professional or high-stakes settings. Continuous evaluation and user feedback are recommended.
## How to Get Started with the Model
```python
from transformers import pipeline

# The repository hosts a LoRA adapter, so `peft` must be installed;
# transformers resolves the base model from the adapter config.
model_name = "ruslanmv/fine_tuned_llama_empathy"
generator = pipeline("text-generation", model=model_name)

prompt = "I feel betrayed."
response = generator(prompt, max_new_tokens=50)
print(response[0]["generated_text"])
```
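Since this repository hosts a LoRA adapter rather than full model weights (see Technical Specifications), you can also load the base model and attach the adapter explicitly with `peft`. A minimal sketch, assuming you have access to the gated base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "ruslanmv/fine_tuned_llama_empathy"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("I feel betrayed.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```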
## Training Details
### Training Data
- **Dataset:** Annotated dataset mapping evaluative expressions to emotions and needs.
- **Annotations:** 1,500+ labeled examples linking expressions to emotional states and corresponding needs.
### Training Procedure
#### Preprocessing
- Tokenized using Hugging Face `transformers` library.
- Augmented with synonym variations and paraphrased sentences (illustrated in the sketch below).
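The exact preprocessing and augmentation scripts are not published. The sketch below illustrates the general pattern, with a hypothetical `augment` helper standing in for the synonym/paraphrase step:

```python
from transformers import AutoTokenizer

# Tokenizer of the base model; access to the gated repo is assumed.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

def augment(example: str) -> list[str]:
    # Hypothetical stand-in: the card does not specify which tool
    # produced the synonym variations and paraphrases.
    return [example, example.replace("betrayed", "let down")]

for text in augment("I feel betrayed."):
    encoded = tokenizer(text, truncation=True, max_length=512)
    print(len(encoded["input_ids"]), text)
```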
#### Training Hyperparameters
- **Training regime:** Mixed precision training using LoRA.
- **Batch size:** 32
- **Learning rate:** 2e-5
- **Training steps:** 1,000
- **Hardware:** 1x A100 GPU using DeepSpeed ZeRO-3 (see the `TrainingArguments` sketch below)
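The training script itself is not published; the sketch below shows how these hyperparameters might map onto the Hugging Face `Trainer` API (the ZeRO-3 config path is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fine_tuned_llama_empathy",
    per_device_train_batch_size=32,    # batch size from the card
    learning_rate=2e-5,                # learning rate from the card
    max_steps=1000,                    # 1,000 training steps
    bf16=True,                         # mixed-precision training
    deepspeed="ds_zero3_config.json",  # placeholder DeepSpeed ZeRO-3 config
    logging_steps=50,
)
```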
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- Held-out dataset containing unseen evaluative expressions.
#### Factors
- Performance across different emotional expression categories.
- Sensitivity to nuanced phrasing and variations.
#### Metrics
- **Accuracy:** Measures correct classification of emotions and needs.
- **Precision & Recall:** Evaluates the balance between capturing true emotions and avoiding false positives.
- **F1-Score:** Measures the balance between precision and recall (see the scikit-learn sketch below).
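For reference, all three metrics can be computed with scikit-learn; the evaluation harness is not published, so `y_true` and `y_pred` below are placeholder label lists:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold and predicted emotion/need labels.
y_true = ["anger", "sadness", "anger", "fear"]
y_pred = ["anger", "sadness", "fear", "fear"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```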
### Results
- **Accuracy:** 89.5%
- **F1-Score:** 87.2%
- **Latency:** <500 ms per response
## Environmental Impact
- **Hardware Type:** A100 GPUs
- **Training Time:** hours
- **Carbon Emitted:** Estimated using [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
## Technical Specifications
### Model Architecture and Objective
- Base Model: meta-llama/Meta-Llama-3.1-8B-Instruct
- Fine-tuned using LoRA for parameter-efficient training. Key LoRA parameters: `r=8`, `lora_alpha=16`, `lora_dropout=0.2`, `target_modules=["v_proj", "q_proj"]` (expressed as a `peft` config below)
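Expressed as a `peft` configuration, the stated parameters look like this (a sketch; `task_type` is an assumption, not taken from the card):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension, per the card
    lora_alpha=16,
    lora_dropout=0.2,
    target_modules=["v_proj", "q_proj"],  # attention projections targeted
    task_type="CAUSAL_LM",                # assumed task type
)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```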
### Compute Infrastructure
- **Hardware:** AWS spot instances (1x A100 GPU)
- **Software:** Hugging Face `transformers`, PEFT, PyTorch
## Citation
If you use this model, please cite:
```bibtex
@misc{ai-medical_2025,
  author       = {AI Medical and ruslanmv.com},
  title        = {Fine-Tuned LLaMA Empathy},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/ruslanmv/fine_tuned_llama_empathy}}
}
```
## More Information
- **Model Card Authors:** AI Medical Team, ruslanmv.com
- **Framework Versions:** PEFT 0.14.0