Update README.md
README.md

tags: [Text Generation, Question-Answering]
inference: false
---

<!-- Provide a quick summary of what the model is/does. -->

It's a fine-tuned version of the Phi-2 model by Microsoft, trained on [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations).

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

With appropriate changes to the generation_config passed to model.generate(), the model can produce better results and could serve as the basis for a mental-health counseling chatbot.
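
For illustration, a minimal sketch of what such generation_config changes might look like is shown below, using the transformers `GenerationConfig` class; the decoding values here are assumptions chosen for the example, not settings published with this model.

```python
from transformers import GenerationConfig

# Illustrative decoding settings; tune these for your own use case.
generation_config = GenerationConfig(
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
```

A config built this way can then be passed to model.generate(generation_config=generation_config) once the model is loaded (see the getting-started code below).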

## Bias, Risks, and Limitations

The model was developed as a proof-of-concept hobby project and is not intended to be used without careful consideration of its implications.

[More Information Needed]

## How to Get Started with the Model

Use the code below to get started with the model.

### Load the model with the bitsandbytes library

```bash
pip install bitsandbytes
```

#### Load the model from the Hugging Face Hub with the model name and a BitsAndBytes configuration
```python
from typing import Tuple

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig


def load_model_tokenizer(model_name: str, bnb_config: BitsAndBytesConfig) -> Tuple[AutoModelForCausalLM, AutoTokenizer]:
    """
    Load the model and tokenizer from the Hugging Face Hub using quantization.

    Args:
        model_name (str): The name of the model.
        bnb_config (BitsAndBytesConfig): The BitsAndBytes quantization configuration.

    Returns:
        Tuple[AutoModelForCausalLM, AutoTokenizer]: The model and tokenizer.
    """
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        # device_map="auto",
        torch_dtype="auto",
        trust_remote_code=True,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_name, use_auth_token=True, trust_remote_code=True)
    tokenizer.pad_token = tokenizer.eos_token

    return model, tokenizer


# Example 4-bit quantization settings; these values are illustrative,
# not the exact configuration used for this model.
load_in_4bit = True
bnb_4bit_use_double_quant = True
bnb_4bit_quant_type = "nf4"
bnb_4bit_compute_dtype = torch.bfloat16

bnb_config = BitsAndBytesConfig(
    load_in_4bit=load_in_4bit,
    bnb_4bit_use_double_quant=bnb_4bit_use_double_quant,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=bnb_4bit_compute_dtype,
)

model_name = "microsoft/phi-2"  # assumed base checkpoint; set this to the checkpoint you want to load
model, tokenizer = load_model_tokenizer(model_name, bnb_config)
```
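
Assuming the model and tokenizer returned above, and a generation_config like the one sketched in the Uses section, a minimal (hypothetical) generation call could look like the following; the prompt is only a placeholder.

```python
prompt = "I have been feeling very anxious lately. What can I do to cope?"

# Tokenize the prompt and generate a response with the quantized model.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    generation_config=generation_config,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```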

```python
new_model = "YuvrajSingh9886/medicinal-QnA-phi2-custom"