Update README.md
README.md
CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: image-text-to-text
[Paper](https://arxiv.org/abs/2402.14289) [GitHub](https://github.com/TinyLLaVA/TinyLLaVA_Factory) [Demo](http://8843843nmph5.vicp.fun/#/)

TinyLLaVA has released a family of small-scale Large Multimodal Models (LMMs), ranging from 1.4B to 3.1B parameters. Our best model, TinyLLaVA-Phi-2-SigLIP-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.
- Here, we introduce TinyLLaVA-Phi-2-SigLIP-3.1B, which is trained by the TinyLLaVA Factory codebase. For LLM and vision tower, we choose [Phi-2](https://huggingface.co/microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The dataset used for training this model is the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.
+ Here, we introduce TinyLLaVA-Phi-2-SigLIP-3.1B, which is trained with the [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory) codebase. For the LLM and vision tower, we choose [Phi-2](https://huggingface.co/microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The dataset used for training this model is the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.
### Usage
Execute the following test code:
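The original card's snippet is not reproduced in this diff; below is a minimal sketch of such a test, assuming the checkpoint exposes a `chat` helper through `trust_remote_code` (the helper's name and return values, the prompt, and the image URL are illustrative assumptions, not guaranteed by this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

hf_path = "tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B"

# trust_remote_code loads the TinyLLaVA modeling/chat code shipped with the checkpoint
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()

# slow tokenizer to stay compatible with the Phi-2 vocabulary handling
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False)

# illustrative prompt and image (a COCO validation image)
prompt = "What are these?"
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"

# `chat` is assumed to be provided by the checkpoint's remote code:
# it encodes the image with the SigLIP vision tower and generates the answer with Phi-2,
# returning (assumed here) the answer text and the generation time
output_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)

print("model output:", output_text)
print("generation time:", generation_time)
```

Running this requires `transformers` and a CUDA-capable GPU for `model.cuda()`; drop that call to run on CPU.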