wanzin committed
Commit 1a9bde9 · 1 Parent(s): ce0bd03

update README

Files changed (2):
  1. README.md +24 -7
  2. app.py +1 -1
README.md CHANGED
@@ -1,13 +1,30 @@
 ---
-title: Clip Eval
-emoji: 👀
-colorFrom: pink
-colorTo: blue
+title: CLIP Model Evaluation
+emoji: 📊🚀
+colorFrom: blue
+colorTo: green
 sdk: gradio
-sdk_version: 5.31.0
+sdk_version: 3.45.0
 app_file: app.py
 pinned: false
-license: apache-2.0
+license: apache-2.0
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# 📊 CLIP Model Evaluation Space
+
+This Space provides an interactive interface to evaluate the performance of various CLIP (Contrastive Language-Image Pre-Training) models on standard image-text retrieval benchmarks.
+
+It calculates Recall@K (R@1, R@5, R@10) metrics for both:
+* **Image Retrieval (Text-to-Image):** Given a text query, how well does the model retrieve the correct image?
+* **Text Retrieval (Image-to-Text):** Given an image query, how well does the model retrieve the correct text description?
+
+## 🚀 How to Use
+
+1. **Select a CLIP Model:** Choose a pre-trained CLIP model from the dropdown menu.
+2. **Select a Dataset:** Choose the dataset you want to evaluate on (e.g., "mscoco", "flickr").
+3. **Number of Samples:** Specify the number of image-text pairs from the dataset to use for the evaluation. Using fewer samples will be faster but less representative.
+4. **Click "Evaluate Model":** The evaluation will run, and the Recall@K metrics will be displayed.
+
+## 🛠️ Under the Hood
+
+This Space uses the `evaluate` library from Hugging Face and a custom metric script (`clipmodel_eval.py`) to perform the CLIP model evaluations. The models and datasets are loaded from the Hugging Face Hub.
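
For reference, the Recall@K numbers described in the new README correspond to a standard retrieval computation over a text-image similarity matrix. Below is a minimal sketch of that computation, assuming L2-normalized embeddings and one matching caption per image; it is illustrative only and is not taken from the Space's `clipmodel_eval.py`.

```python
# Minimal sketch of text-to-image Recall@K (illustrative; not the
# Space's actual metric script, which lives in clipmodel_eval.py).
import numpy as np

def recall_at_k(image_embs: np.ndarray, text_embs: np.ndarray, k: int) -> float:
    """Fraction of text queries whose matching image (same row index)
    appears among the k most similar images."""
    # L2-normalize so the dot product equals cosine similarity.
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_embs.T              # (num_texts, num_images)
    topk = np.argsort(-sims, axis=1)[:, :k]      # top-k image indices per text
    targets = np.arange(sims.shape[0])[:, None]  # ground-truth index per query
    return float((topk == targets).any(axis=1).mean())

# Example with random embeddings for 100 image-text pairs.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(100, 512))
txts = rng.normal(size=(100, 512))
for k in (1, 5, 10):
    print(f"R@{k}: {recall_at_k(imgs, txts, k):.3f}")
```

Image-to-text Recall@K is the same computation with the roles of the two embedding matrices swapped.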
app.py CHANGED
@@ -1,7 +1,7 @@
 import gradio as gr
 import evaluate
 
-clip_metric = evaluate.load("d-matrix/dmx_clip_eval")
+clip_metric = evaluate.load("d-matrix/clip_eval")
 print("Successfully loaded CLIP evaluation metric")
 
 AVAILABLE_MODELS = [
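
The one-line change above points `evaluate.load` at the renamed Hub metric. For context, here is a hypothetical sketch of how such a metric would be invoked; `evaluate.load` and `compute` are the real `evaluate` library API, but the keyword arguments below are assumptions, since the actual interface is defined by the Space's custom `clipmodel_eval.py` script.

```python
# Hypothetical usage sketch. The compute() keyword arguments are
# assumed, since they are defined by the Space's custom metric script.
import evaluate

clip_metric = evaluate.load("d-matrix/clip_eval")

results = clip_metric.compute(
    model_name="openai/clip-vit-base-patch32",  # assumed parameter name
    dataset="mscoco",                           # assumed parameter name
    num_samples=100,                            # assumed parameter name
)
print(results)
```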