---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- spatial-reasoning
- cross-viewpoint localization
pretty_name: ViewSpatial-Bench
size_categories:
- 1K<n<10K
configs:
- config_name: ViewSpatial-Bench
  data_files:
  - split: test
    path: ViewSpatial-Bench.json
---

# ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models

## Dataset Description

We introduce ViewSpatial-Bench, a comprehensive benchmark with over 5,700 question-answer pairs spanning 1,000+ 3D scenes drawn from the ScanNet and MS-COCO validation sets. The benchmark evaluates VLMs' spatial localization capabilities from multiple perspectives, testing both egocentric (camera) and allocentric (human subject) viewpoints across five distinct task types.

ViewSpatial-Bench addresses a critical gap: while VLMs excel at spatial reasoning from their own perspective, they struggle with perspective-taking (adopting another entity's spatial frame of reference), which is essential for embodied interaction and multi-agent collaboration.

- **Language(s) (NLP):** en
- **License:** apache-2.0

## Uses

**I. With the Hugging Face `datasets` library.**

```py
from datasets import load_dataset

ds = load_dataset("lidingm/ViewSpatial-Bench")
```
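
After loading, a quick sanity check is to print the split layout and the fields of a single record. The sketch below only assumes the `test` split declared in the YAML config above; it does not rely on any particular field names.

```py
from datasets import load_dataset

ds = load_dataset("lidingm/ViewSpatial-Bench")

print(ds)                    # available splits and row counts
sample = ds["test"][0]       # first question-answer pair in the test split
print(list(sample.keys()))   # inspect the fields of one record
```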

**II. Evaluation with the open-source code.**

Evaluate using our open-source evaluation code, available on GitHub (coming soon).

```bash
# Clone the repository
git clone https://github.com/lidingm/ViewSpatial-Bench.git
cd ViewSpatial-Bench

# Install dependencies
pip install -r requirements.txt

# Run evaluation
python eval.py --model_name your_model --dataset_path path/to/dataset
```

You can configure the model parameters and evaluation settings according to the framework's requirements to obtain performance results on the ViewSpatial-Bench dataset.
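
Until the official evaluation code is released, a minimal accuracy computation over the test split might look like the following sketch. The `answer` field and the `my_model_predict` helper are illustrative assumptions, not part of a documented schema.

```py
from datasets import load_dataset

ds = load_dataset("lidingm/ViewSpatial-Bench")["test"]

def my_model_predict(example):
    """Placeholder: return your model's chosen option (e.g. "A") for one example."""
    raise NotImplementedError

correct = 0
for example in ds:
    # "answer" is an assumed field name for the ground-truth option.
    correct += int(my_model_predict(example) == example["answer"])

print(f"Accuracy: {correct / len(ds):.2%}")
```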

## Benchmark

We report results for a range of open-source models as well as **GPT-4o** and **Gemini 2.0 Flash** on ViewSpatial-Bench. *More model evaluations will be added.*

<table>
  <thead>
    <tr>
      <th rowspan="2">Model</th>
      <th colspan="3">Camera-based Tasks</th>
      <th colspan="4">Person-based Tasks</th>
      <th rowspan="2">Overall</th>
    </tr>
    <tr>
      <th>Rel. Dir.</th>
      <th>Obj. Ori.</th>
      <th>Avg.</th>
      <th>Obj. Ori.</th>
      <th>Rel. Dir.</th>
      <th>Sce. Sim.</th>
      <th>Avg.</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>InternVL2.5 (2B)</td>
      <td>38.52</td><td>22.59</td><td>32.79</td>
      <td>47.09</td><td>40.02</td><td>25.70</td><td>37.04</td>
      <td>34.98</td>
    </tr>
    <tr>
      <td>Qwen2.5-VL (3B) [Backbone]</td>
      <td>43.43</td><td>33.33</td><td>39.80</td>
      <td>39.16</td><td>28.62</td><td>28.51</td><td>32.14</td>
      <td>35.85</td>
    </tr>
    <tr>
      <td>Qwen2.5-VL (7B)</td>
      <td>46.64</td><td>29.72</td><td>40.56</td>
      <td>37.05</td><td>35.04</td><td>28.78</td><td>33.37</td>
      <td>36.85</td>
    </tr>
    <tr>
      <td>LLaVA-NeXT-Video (7B)</td>
      <td>26.34</td><td>19.28</td><td>23.80</td>
      <td>44.68</td><td>38.60</td><td>29.05</td><td>37.07</td>
      <td>30.64</td>
    </tr>
    <tr>
      <td>LLaVA-OneVision (7B)</td>
      <td>29.84</td><td>26.10</td><td>28.49</td>
      <td>22.39</td><td>31.00</td><td>26.88</td><td>26.54</td>
      <td>27.49</td>
    </tr>
    <tr>
      <td>InternVL2.5 (8B)</td>
      <td>49.41</td><td><b>41.27</b></td><td>46.48</td>
      <td>46.79</td><td>42.04</td><td><b>32.85</b></td><td>40.20</td>
      <td><b>43.24</b></td>
    </tr>
    <tr>
      <td>Llama-3.2-Vision (11B)</td>
      <td>25.27</td><td>20.98</td><td>23.73</td>
      <td>51.20</td><td>32.19</td><td>18.82</td><td>33.61</td>
      <td>28.82</td>
    </tr>
    <tr>
      <td>InternVL3 (14B)</td>
      <td><b>54.65</b></td><td>33.63</td><td><b>47.09</b></td>
      <td>33.43</td><td>37.05</td><td>31.86</td><td>33.88</td>
      <td>40.28</td>
    </tr>
    <tr>
      <td>Kimi-VL-Instruct (16B)</td>
      <td>26.85</td><td>22.09</td><td>25.14</td>
      <td><b>63.05</b></td><td><b>43.94</b></td><td>20.27</td><td><b>41.52</b></td>
      <td>33.58</td>
    </tr>
    <tr>
      <td>GPT-4o</td>
      <td>41.46</td><td>19.58</td><td>33.57</td>
      <td>42.97</td><td>40.86</td><td>26.79</td><td>36.29</td>
      <td>34.98</td>
    </tr>
    <tr>
      <td>Gemini 2.0 Flash</td>
      <td>45.29</td><td>12.95</td><td>33.66</td>
      <td>41.16</td><td>32.78</td><td>21.90</td><td>31.53</td>
      <td>32.56</td>
    </tr>
    <tr>
      <td>Random Baseline</td>
      <td>25.16</td><td>26.10</td><td>25.50</td>
      <td>24.60</td><td>31.12</td><td>26.33</td><td>27.12</td>
      <td>26.33</td>
    </tr>
  </tbody>
</table>

## Citation

```
Coming Soon
```