# Model summary

* Instruction-tuned on medical data, based on LLaMA


# Data

The instruction-tuning mix combines the following sources (a prompt-format sketch follows the list):

* Common
  * alpaca-5.2k
  * unnatural-instruct 80k
  * OIG-40M
* Chinese
  * English/Chinese translation data
  * Zhihu QA
  * pCLUE
* Medical domain
  * MedDialog-200k
  * Chinese-medical-dialogue-data
  * WebMedQA
* Code
  * alpaca_code-20k
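
The prompt format is not documented in this README; the sketch below assumes the standard Alpaca instruction template, and the field names and sample record are illustrative only.

```python
# Assumed Alpaca-style template for rendering one instruction example as a
# single training/inference prompt. Not confirmed by this README.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

# Hypothetical medical-dialogue record, shown only to illustrate the rendering.
example = {
    "instruction": "Answer the patient's question about their symptoms.",
    "input": "I have been feeling tired and dizzy for two weeks.",
    "output": "Persistent fatigue and dizziness have many possible causes; "
              "please see a doctor for an examination and blood tests.",
}

print(ALPACA_TEMPLATE.format(**example))
```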
 

# Training

## Model

* LLaMA-7B

## Hardware

* 6 x A100 40G, using NVLink with 4 inter-GPU links

## Software

* tokenizers==0.12.1
* sentencepiece==0.1.97
* transformers==4.28
* torch==2.0.0+cu117
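
The adapter configuration is not spelled out here, but the usage code below loads LoRA weights via `peft`, so training presumably wraps the base model with a LoRA adapter. A minimal sketch follows; all hyperparameters in it are illustrative assumptions, not the values actually used for this model.

```python
# Sketch of a LoRA fine-tuning setup with peft. The hyperparameters
# (r, lora_alpha, dropout, target modules) are assumptions for illustration.
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

base = LlamaForCausalLM.from_pretrained("llama-7b", torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension (assumed)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()         # only the LoRA adapters are trainable
```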



# How to use

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

BASE_MODEL = "llama-7b"               # base LLaMA-7B weights (local path or hub id)
LORA_WEIGHTS = "llama-med-alpaca-7b"  # medical LoRA adapter weights
LOAD_8BIT = False

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)

# Load the base model, then attach the LoRA adapter on top of it.
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=LOAD_8BIT,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    LORA_WEIGHTS,
    torch_dtype=torch.float16,
)

config = {
    "temperature": 0,        # 0 selects greedy decoding below
    "max_new_tokens": 1024,
    "top_p": 0.5,
}

prompt = "Translate to English: Je t’aime."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=config["max_new_tokens"],
    do_sample=config["temperature"] > 0,
    temperature=config["temperature"],
    top_p=config["top_p"],
)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
print(decoded[len(prompt):])


```
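
Continuing from the snippet above: if you want to serve the model without `peft` at inference time, the LoRA weights can be folded into the base model with `merge_and_unload()`, a standard `peft` utility for LoRA adapters. The output path below is illustrative.

```python
# Merge the LoRA adapter into the base weights and save a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained("llama-med-alpaca-7b-merged")      # illustrative path
tokenizer.save_pretrained("llama-med-alpaca-7b-merged")
```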

# Limitations

* This model may produce harmful, biased, toxic, or hallucinated output. It has not been trained with RLHF, so it is intended for research purposes only.

# TODO

- [x] self-instruct data
- [x] English medical data
- [ ] code data
- [ ] Chinese corpus / medical dialogue data


# Reference
* [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
* [Alpaca: A strong open-source instruction-following model](https://crfm.stanford.edu/2023/03/13/alpaca.html)