xiangan committed (verified)
Commit 8efa227 · 1 parent: 19089d4

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -9,8 +9,7 @@ base_model:
 
 [[GitHub]](https://github.com/deepglint/unicom)
 ## Model
-We used our model as the Vision Encoder in [LLaVA-Next](https://huggingface.co/lmms-lab/llava-next-qwen-32b) which the same Vision Transformer architecture [ViT-L/14@336px as CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336).
-
+We used [**MLCD**](https://huggingface.co/DeepGlint-AI/mlcd-vit-large-patch14-336) as the Vision Encoder in [LLaVA-Next](https://huggingface.co/lmms-lab/llava-next-qwen-32b).
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6478679d7b370854241b2ad8/8n_jBobanaLNAQjM5eZeg.png)
 
 
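For context on the updated README line, below is a minimal sketch of how the MLCD ViT-L/14@336px checkpoint could be loaded as a standalone vision encoder with Hugging Face transformers. This is not part of the commit: the `AutoModel`/`AutoImageProcessor` classes, the example image path, and the output shape shown are assumptions; check the [DeepGlint-AI/mlcd-vit-large-patch14-336](https://huggingface.co/DeepGlint-AI/mlcd-vit-large-patch14-336) model card for the exact usage.

```python
# Sketch only: assumes the MLCD checkpoint resolves to a standard vision-model
# class via Auto* factories; verify against the model card before relying on it.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

repo = "DeepGlint-AI/mlcd-vit-large-patch14-336"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
model.eval()

# "example.jpg" is a placeholder for any RGB image.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Patch-level features a LLaVA-style projector would consume.
# For ViT-L/14 at 336px this is expected to be roughly (1, 577, 1024):
# 24x24 = 576 patch tokens plus one class token, hidden size 1024.
print(outputs.last_hidden_state.shape)
```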