Update README.md
README.md CHANGED

```diff
@@ -23,9 +23,8 @@ configs:
 ## Dataset Description
 
 <!-- Provide a longer summary of what this dataset is. -->
-We introduce ViewSpatial-Bench
-ViewSpatial-Bench
-The benchmark addresses a critical limitation in current VLMs: while they excel at egocentric spatial reasoning (from the camera's perspective), they struggle to generalize to allocentric viewpoints when required to adopt another entity's spatial frame of reference. This capability, known as "perspective-taking," is crucial for embodied interaction, spatial navigation, and multi-agent collaboration.
+We introduce ViewSpatial-Bench, a comprehensive benchmark with over 5,700 question-answer pairs across 1,000+ 3D scenes from ScanNet and MS-COCO validation sets. This benchmark evaluates VLMs' spatial localization capabilities from multiple perspectives, specifically testing both egocentric (camera) and allocentric (human subject) viewpoints across five distinct task types.
+ViewSpatial-Bench addresses a critical gap: while VLMs excel at spatial reasoning from their own perspective, they struggle with perspective-taking—adopting another entity's spatial frame of reference—which is essential for embodied interaction and multi-agent collaboration.
 - **Language(s) (NLP):** en
 - **License:** apache-2.0
 
```
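The updated description distinguishes egocentric (camera) from allocentric (human subject) viewpoints. A minimal sketch of how one might score a model separately on those two perspective types is below; the record fields (`question`, `choices`, `answer`, `perspective`) and the sample items are illustrative assumptions, not the actual ViewSpatial-Bench schema.

```python
# Hypothetical mini-sample mimicking multiple-choice spatial QA records.
# Field names and contents are assumptions for illustration only.
samples = [
    {
        "question": "From the camera's viewpoint, is the chair left or right of the table?",
        "choices": ["left", "right"],
        "answer": "left",
        "perspective": "egocentric",
    },
    {
        "question": "From the person's viewpoint, is the lamp in front of or behind them?",
        "choices": ["in front", "behind"],
        "answer": "behind",
        "perspective": "allocentric",
    },
]

def accuracy_by_perspective(records, predictions):
    """Score predictions separately per perspective type (egocentric/allocentric)."""
    totals, correct = {}, {}
    for rec, pred in zip(records, predictions):
        p = rec["perspective"]
        totals[p] = totals.get(p, 0) + 1
        correct[p] = correct.get(p, 0) + int(pred == rec["answer"])
    return {p: correct[p] / totals[p] for p in totals}

# A model that gets the egocentric item right but the allocentric one wrong:
scores = accuracy_by_perspective(samples, ["left", "in front"])
print(scores)  # {'egocentric': 1.0, 'allocentric': 0.0}
```

Reporting the two perspective types separately, as in this sketch, is what surfaces the egocentric/allocentric gap the description discusses.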