---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- 4-bit
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# IBI-CAAI/MELT-Mistral-3x7B-Instruct-v0.1 AWQ

- Model creator: [IBI-CAAI](https://huggingface.co/IBI-CAAI)
- Original model: [MELT-Mistral-3x7B-Instruct-v0.1](https://huggingface.co/IBI-CAAI/MELT-Mistral-3x7B-Instruct-v0.1)

## Model Summary

The MELT-Mistral-3x7B-Instruct-v0.1 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data.

MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across three medical examination benchmarks: USMLE, Indian AIIMS, and NEET.

This is an MoE model; thanks to [Charles Goddard](https://huggingface.co/chargoddard) for the code/tools.

The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.

While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.

- **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/)
- **Funded by:** [Institute for Biomedical Informatics](https://www.research.uky.edu/IBI)
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** An MoE of 3 x [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
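
## How to use

A minimal sketch of loading these AWQ 4-bit weights through `transformers` (AWQ support requires the `autoawq` package). The repo id passed to `generate()` below is an assumption, as is the standard Mistral-Instruct `[INST]` prompt format; check the repository's files and tokenizer chat template before relying on either.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Mistral-Instruct chat format
    (an assumption for this MoE variant; verify against the tokenizer's
    chat template)."""
    return f"[INST] {instruction} [/INST]"


def generate(repo_id: str, instruction: str, max_new_tokens: int = 256) -> str:
    """Load the AWQ-quantized model and run a single generation.

    transformers is imported lazily so build_prompt() stays usable
    without the heavyweight dependencies installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # device_map="auto" places the quantized weights on available GPUs.
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


# Example call (downloads the model weights; repo id is hypothetical):
# print(generate("IBI-CAAI/MELT-Mistral-3x7B-Instruct-v0.1-AWQ",
#                "List three common symptoms of anemia."))
```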