---
datasets:
- Lin-Chen/ShareGPT4V
pipeline_tag: image-text-to-text
library_name: xtuner
---

<div align="center">

<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>

[XTuner](https://github.com/InternLM/xtuner)

</div>

## Model

llava-llama-3-8b-v1_1 is a LLaVA model fine-tuned from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) datasets, using [XTuner](https://github.com/InternLM/xtuner).
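
For a quick try-out, the model can be used for image-text chat through XTuner's `xtuner chat` CLI, which loads the base LLM, the CLIP visual encoder, and this repository's LLaVA weights together. A minimal sketch, assuming this repository's Hub id is `xtuner/llava-llama-3-8b-v1_1` and using `$IMAGE_PATH` as a placeholder for a local image; adjust both to your setup.

```bash
pip install -U xtuner

# Single-image chat: xtuner assembles the base LLM, the visual encoder,
# and this repo's LLaVA weights at run time.
xtuner chat meta-llama/Meta-Llama-3-8B-Instruct \
  --visual-encoder openai/clip-vit-large-patch14-336 \
  --llava xtuner/llava-llama-3-8b-v1_1 \
  --prompt-template llama3_chat \
  --image $IMAGE_PATH
```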

**Note: This model is in `.pth` format.**
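
If Transformers-style weights are needed instead, XTuner provides a `pth_to_hf` converter. A minimal sketch, assuming `$FINETUNE_CFG` is the XTuner config this model was trained with and `$PTH_PATH` points at the downloaded `.pth` checkpoint (both are placeholders, not values shipped with this repo):

```bash
# Convert the XTuner .pth checkpoint into HuggingFace-format folders.
xtuner convert pth_to_hf $FINETUNE_CFG $PTH_PATH ./llava-llama-3-8b-v1_1-hf
```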

## Citation

```bibtex
@misc{2023xtuner,
  title = {XTuner: A Toolkit for Efficiently Fine-tuning LLM},
  author = {XTuner Contributors},
  howpublished = {\url{https://github.com/InternLM/xtuner}},
  year = {2023}
}
```