---
license: llama2
---

<br>

# LLaVA-Next-Video Model Card

## Model details

**Model type:**
LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)

**Model date:**
LLaVA-Next-Video-7B was trained in April 2024.

**Paper or resources for more information:**
https://github.com/LLaVA-VL/LLaVA-NeXT
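
**Example usage:**
This card does not document an inference API, so the snippet below is a minimal sketch rather than the official recipe: it assumes the Hugging Face `transformers` port of this model, a converted checkpoint named `llava-hf/LLaVA-NeXT-Video-7B-hf`, and a vicuna-style chat prompt, all of which should be verified against the repository linked above.

```python
import numpy as np
import torch
from transformers import (
    LlavaNextVideoForConditionalGeneration,
    LlavaNextVideoProcessor,
)

# Assumed checkpoint name; this card does not name a transformers-converted repo.
model_id = "llava-hf/LLaVA-NeXT-Video-7B-hf"

processor = LlavaNextVideoProcessor.from_pretrained(model_id)
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Placeholder clip: 8 random RGB frames (time, height, width, channels).
# In practice, decode frames from a real video, e.g. with PyAV or decord.
clip = np.random.randint(0, 256, size=(8, 336, 336, 3), dtype=np.uint8)

# Vicuna-style prompt with a <video> placeholder, matching the vicuna base LLM.
prompt = "USER: <video>\nWhat is happening in this clip? ASSISTANT:"
inputs = processor(text=prompt, videos=clip, return_tensors="pt").to(
    model.device, torch.float16
)

output_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

For the original checkpoints and the training and evaluation scripts, the LLaVA-NeXT repository above is the authoritative source.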

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

**Where to send questions or comments about the model:**
https://github.com/LLaVA-VL/LLaVA-NeXT/issues

## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset

### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.

### Video
- 100K VideoChatGPT-Instruct.

## Evaluation dataset
A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark.