ZhangYuanhan committed
Commit 857b981 · verified · 1 parent: 30f3b69

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -131,7 +131,7 @@ model-index:
 The LLaVA-OneVision models are 7/72B parameter models trained on [LLaVA-NeXT-Video-SFT](https://huggingface.co/datasets/lmms-lab/LLaVA-NeXT-Video-SFT-Data), based on Qwen2 language model with a context window of 32K tokens.
 
 - **Repository:** [LLaVA-VL/LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT?tab=readme-ov-file)
-- **Point of Contact:** [Yuanhan Zhang](mailto:drluodian@gmail.com)
+- **Point of Contact:** [Yuanhan Zhang](https://zhangyuanhan-ai.github.io/)
 - **Languages:** English, Chinese