zhibinlan committed · verified
Commit 2176478 · 1 Parent(s): d0139cd

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ We achieved the top ranking on the MMEB leaderboard using only a small amount of
 
 
 ## Model Performance
-LLaVE-7B achieved the SOTA performance on MMEB using only 662K training pairs.
+LLaVE-2B achieved excellent performance on MMEB using fewer parameters and 662K training pairs.
 ![MMEB](./figures/results.png)
 
 Although LLaVE is trained on image-text data, it can generalize to text-video retrieval tasks in a zero-shot manner and achieve strong performance, demonstrating its remarkable potential for transfer to other embedding tasks.