zhjohnchan committed on
Commit f1bae35 · verified · 1 Parent(s): 9853e75

Update README.md

---
license: llama2
---

<div align="center">
<h1>
CheXagent
</h1>
</div>

<p align="center">
📝 <a href="https://arxiv.org/" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/StanfordAIMI/RadLLaMA-7b" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Github</a>
</p>

<div align="center">
</div>

## ✨ Latest News

- [01/20/2023]: Model released in [Hugging Face](https://huggingface.co/StanfordAIMI/RadLLaMA-7b).

## 🎬 Get Started

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b")

# Build a single-turn conversation and apply the model's chat template
prompt = "Hi"
conv = [{"from": "human", "value": prompt}]
input_ids = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")

# Generate a response and decode the output tokens
outputs = model.generate(input_ids)
response = tokenizer.decode(outputs[0])
print(response)
```
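For multi-turn use, the same `{"from": ..., "value": ...}` message format can be extended by appending each reply and the next user message before re-applying the chat template. A minimal sketch; the assistant role name `"gpt"` and the `append_turn` helper are assumptions, not part of this repository — check the model's chat template for the exact role names it expects:

```python
# Hypothetical helper for growing a multi-turn conversation in the same
# {"from": ..., "value": ...} message format used above. The assistant role
# name "gpt" follows the common ShareGPT convention and is an assumption.

def append_turn(conv, role, text):
    """Append one message dict to the conversation history."""
    conv.append({"from": role, "value": text})
    return conv

conv = []
append_turn(conv, "human", "Hi")
append_turn(conv, "gpt", "Hello! How can I help you?")
append_turn(conv, "human", "What can you tell me about chest X-rays?")

# conv can now be passed to tokenizer.apply_chat_template(...) as before
```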

## ✏️ Citation

```bibtex
@article{chexagent-2024,
  title={},
  author={},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  url={https://arxiv.org/abs/xxxx.xxxxx},
  year={2024}
}
```