---
library_name: transformers
datasets:
- bitext/Bitext-customer-support-llm-chatbot-training-dataset
---
# Fine-Tuned Gemma Model for Chatbot

This repository hosts a fine-tuned version of the Gemma 1.1 2B model, adapted for a customer support chatbot use case.

## Model Description

The Gemma 1.1 2B model has been fine-tuned on the [Bitext Customer Support Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset) to answer customer support queries. Fine-tuning adjusted the model's weights on question-and-answer pairs, which should enable it to generate more accurate and contextually relevant responses in a conversational setting.
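The exact prompt template used during fine-tuning is not documented here. As an illustration only, question-and-answer pairs such as those in the Bitext dataset (which exposes `instruction` and `response` fields) are typically flattened into single training strings. A minimal sketch, where the `### Question:`/`### Answer:` template is an assumption, not the template actually used:

```python
def format_example(pair: dict) -> str:
    # Combine one instruction/response pair into a single training string.
    # The "### Question:" / "### Answer:" markers are illustrative only.
    return (
        f"### Question:\n{pair['instruction']}\n\n"
        f"### Answer:\n{pair['response']}"
    )

example = {
    "instruction": "How can I cancel my order?",
    "response": "You can cancel your order from the Orders page in your account.",
}
print(format_example(example))
```

At inference time, prompts should follow the same template the model saw during training; otherwise response quality may degrade.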

## How to Use

You can use this model directly with a pipeline for text generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Replace with the actual repository ID of this model
model_id = "your-username/your-model-name"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer)

# max_new_tokens caps the length of the generated reply; adjust as needed
response = chatbot("How can I cancel my order?", max_new_tokens=128)
print(response[0]["generated_text"])
```
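Note that `text-generation` pipelines return the prompt followed by the model's continuation in `generated_text`. A small helper (hypothetical, not part of this repository) can isolate just the reply:

```python
def extract_reply(prompt: str, generated_text: str) -> str:
    # The pipeline output echoes the prompt before the continuation;
    # strip the prompt prefix to keep only the model's reply.
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()
```

For example, `extract_reply("How can I cancel my order?", response[0]["generated_text"])` would return only the generated answer.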