lidingm committed on
Commit 7ce7d00 · verified · 1 Parent(s): bcfa0fa

Update README.md

Files changed (1):
  1. README.md (+4, -8)
README.md CHANGED
@@ -27,14 +27,10 @@ This dataset card aims to be a base template for new datasets. It has been gener
  ### Dataset Description
 
  <!-- Provide a longer summary of what this dataset is. -->
-
-
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
+ ViewSpatial-Bench is the first comprehensive benchmark designed specifically for evaluating multi-viewpoint spatial orientation recognition capabilities of vision-language models (VLMs) across five distinct task types. The benchmark assesses how well VLMs can perform spatial reasoning from different perspectives, focusing on both egocentric (camera) and allocentric (human subject) viewpoints.
+ The benchmark addresses a critical limitation in current VLMs: while they excel at egocentric spatial reasoning (from the camera's perspective), they struggle to generalize to allocentric viewpoints when required to adopt another entity's spatial frame of reference. This capability, known as "perspective-taking," is crucial for embodied interaction, spatial navigation, and multi-agent collaboration.
+ - **Language(s) (NLP):** en
+ - **License:** apache-2.0
 
  ### Dataset Sources [optional]
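For reference, a minimal sketch of how the updated dataset card's benchmark might be loaded with the Hugging Face `datasets` library. This is not part of the commit; the repository id `lidingm/ViewSpatial-Bench` and the default configuration are assumptions, so check the dataset page for the actual splits and fields.

```python
# Minimal sketch (assumed repo id and default config, not stated in this diff):
# load the ViewSpatial-Bench evaluation data and peek at one example.
from datasets import load_dataset

dataset = load_dataset("lidingm/ViewSpatial-Bench")  # assumed repository id

# Print the available splits and the first record of each, to see which
# fields encode the image, the question, and the viewpoint/task type.
print(dataset)
for split_name, split in dataset.items():
    print(split_name, split[0])
```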