lidingm committed
Commit 69ac811 · verified · 1 Parent(s): 715df9a

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -25,7 +25,10 @@ configs:
 <!-- Provide a longer summary of what this dataset is. -->
 We introduce ViewSpatial-Bench, a comprehensive benchmark with over 5,700 question-answer pairs across 1,000+ 3D scenes from ScanNet and MS-COCO validation sets. This benchmark evaluates VLMs' spatial localization capabilities from multiple perspectives, specifically testing both egocentric (camera) and allocentric (human subject) viewpoints across five distinct task types.
 
- ViewSpatial-Bench addresses a critical gap: while VLMs excel at spatial reasoning from their own perspective, they struggle with perspective-taking—adopting another entity's spatial frame of reference—which is essential for embodied interaction and multi-agent collaboration.
+ ViewSpatial-Bench addresses a critical gap: while VLMs excel at spatial reasoning from their own perspective, they struggle with perspective-taking—adopting another entity's spatial frame of reference—which is essential for embodied interaction and multi-agent collaboration. The figure below shows the construction pipeline and example demonstrations of our benchmark.
+
+ <img alt="ViewSpatial-Bench construction pipeline and example questions" src="https://cdn.jsdelivr.net/gh/lidingm/blog_img/img/202505190038733.png" style="width: 100%; max-width: 1000px;" />
+
 - **Language(s) (NLP):** en
 - **License:** apache-2.0
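
For readers who want to try the dataset, here is a minimal loading sketch using the Hugging Face `datasets` library; the repository ID `lidingm/ViewSpatial-Bench` and the split name are assumptions and are not stated in this commit.

```python
# Hypothetical usage sketch; the repo ID and split name below are assumptions.
from datasets import load_dataset

# Load the benchmark's question-answer pairs from the Hub.
ds = load_dataset("lidingm/ViewSpatial-Bench", split="test")

# Inspect one record to see which fields it provides.
print(ds[0])
```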