lidingm committed · verified
Commit 7c43ffb · Parent(s): 92af311

Update README.md

Files changed (1):
  1. README.md +94 -15
README.md CHANGED
@@ -53,21 +53,100 @@ You can configure the appropriate model parameters and evaluation settings accor
 ## Benchmark
 We report results for various open-source models as well as **GPT-4o** and **Gemini 2.0 Flash** on our benchmark. *More model evaluations will be added.*
- | **Model** | **Camera-based Tasks** | | | **Person-based Tasks** | | | | **Overall** |
- |-----------|----------|----------|-----|----------|----------|----------|-----|-----------|
- | | Rel. Dir. | Obj. Ori. | Avg. | Obj. Ori. | Rel. Dir. | Sce. Sim. | Avg. | |
- | InternVL2.5 (2B) | 38.52 | 22.59 | 32.79 | 47.09 | 40.02 | 25.70 | 37.04 | 34.98 |
- | Qwen2.5-VL (3B) [Backbone] | 43.43 | 33.33 | 39.80 | 39.16 | 28.62 | 28.51 | 32.14 | 35.85 |
- | Qwen2.5-VL (7B) | 46.64 | 29.72 | 40.56 | 37.05 | 35.04 | 28.78 | 33.37 | 36.85 |
- | LLaVA-NeXT-Video (7B) | 26.34 | 19.28 | 23.80 | 44.68 | 38.60 | 29.05 | 37.07 | 30.64 |
- | LLaVA-OneVision (7B) | 29.84 | 26.10 | 28.49 | 22.39 | 31.00 | 26.88 | 26.54 | 27.49 |
- | InternVL2.5 (8B) | 49.41 | **41.27** | 46.48 | 46.79 | 42.04 | **32.85** | 40.20 | **43.24** |
- | Llama-3.2-Vision (11B) | 25.27 | 20.98 | 23.73 | 51.20 | 32.19 | 18.82 | 33.61 | 28.82 |
- | InternVL3 (14B) | **54.65** | 33.63 | **47.09** | 33.43 | 37.05 | 31.86 | 33.88 | 40.28 |
- | Kimi-VL-Instruct (16B) | 26.85 | 22.09 | 25.14 | **63.05** | **43.94** | 20.27 | **41.52** | 33.58 |
- | GPT-4o | 41.46 | 19.58 | 33.57 | 42.97 | 40.86 | 26.79 | 36.29 | 34.98 |
- | Gemini 2.0 Flash | 45.29 | 12.95 | 33.66 | 41.16 | 32.78 | 21.90 | 31.53 | 32.56 |
- | Random Baseline | 25.16 | 26.10 | 25.50 | 24.60 | 31.12 | 26.33 | 27.12 | 26.33 |
 
 
 ## Citation

 ## Benchmark
 We report results for various open-source models as well as **GPT-4o** and **Gemini 2.0 Flash** on our benchmark. *More model evaluations will be added.*
+ <table>
+ <thead>
+ <tr>
+ <th rowspan="2">Model</th>
+ <th colspan="3">Camera-based Tasks</th>
+ <th colspan="4">Person-based Tasks</th>
+ <th rowspan="2">Overall</th>
+ </tr>
+ <tr>
+ <th>Rel. Dir.</th>
+ <th>Obj. Ori.</th>
+ <th>Avg.</th>
+ <th>Obj. Ori.</th>
+ <th>Rel. Dir.</th>
+ <th>Sce. Sim.</th>
+ <th>Avg.</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>InternVL2.5 (2B)</td>
+ <td>38.52</td><td>22.59</td><td>32.79</td>
+ <td>47.09</td><td>40.02</td><td>25.70</td><td>37.04</td>
+ <td>34.98</td>
+ </tr>
+ <tr>
+ <td>Qwen2.5-VL (3B) [Backbone]</td>
+ <td>43.43</td><td>33.33</td><td>39.80</td>
+ <td>39.16</td><td>28.62</td><td>28.51</td><td>32.14</td>
+ <td>35.85</td>
+ </tr>
+ <tr>
+ <td>Qwen2.5-VL (7B)</td>
+ <td>46.64</td><td>29.72</td><td>40.56</td>
+ <td>37.05</td><td>35.04</td><td>28.78</td><td>33.37</td>
+ <td>36.85</td>
+ </tr>
+ <tr>
+ <td>LLaVA-NeXT-Video (7B)</td>
+ <td>26.34</td><td>19.28</td><td>23.80</td>
+ <td>44.68</td><td>38.60</td><td>29.05</td><td>37.07</td>
+ <td>30.64</td>
+ </tr>
+ <tr>
+ <td>LLaVA-OneVision (7B)</td>
+ <td>29.84</td><td>26.10</td><td>28.49</td>
+ <td>22.39</td><td>31.00</td><td>26.88</td><td>26.54</td>
+ <td>27.49</td>
+ </tr>
+ <tr>
+ <td>InternVL2.5 (8B)</td>
+ <td>49.41</td><td><b>41.27</b></td><td>46.48</td>
+ <td>46.79</td><td>42.04</td><td><b>32.85</b></td><td>40.20</td>
+ <td><b>43.24</b></td>
+ </tr>
+ <tr>
+ <td>Llama-3.2-Vision (11B)</td>
+ <td>25.27</td><td>20.98</td><td>23.73</td>
+ <td>51.20</td><td>32.19</td><td>18.82</td><td>33.61</td>
+ <td>28.82</td>
+ </tr>
+ <tr>
+ <td>InternVL3 (14B)</td>
+ <td><b>54.65</b></td><td>33.63</td><td><b>47.09</b></td>
+ <td>33.43</td><td>37.05</td><td>31.86</td><td>33.88</td>
+ <td>40.28</td>
+ </tr>
+ <tr>
+ <td>Kimi-VL-Instruct (16B)</td>
+ <td>26.85</td><td>22.09</td><td>25.14</td>
+ <td><b>63.05</b></td><td><b>43.94</b></td><td>20.27</td><td><b>41.52</b></td>
+ <td>33.58</td>
+ </tr>
+ <tr>
+ <td>GPT-4o</td>
+ <td>41.46</td><td>19.58</td><td>33.57</td>
+ <td>42.97</td><td>40.86</td><td>26.79</td><td>36.29</td>
+ <td>34.98</td>
+ </tr>
+ <tr>
+ <td>Gemini 2.0 Flash</td>
+ <td>45.29</td><td>12.95</td><td>33.66</td>
+ <td>41.16</td><td>32.78</td><td>21.90</td><td>31.53</td>
+ <td>32.56</td>
+ </tr>
+ <tr>
+ <td>Random Baseline</td>
+ <td>25.16</td><td>26.10</td><td>25.50</td>
+ <td>24.60</td><td>31.12</td><td>26.33</td><td>27.12</td>
+ <td>26.33</td>
+ </tr>
+ </tbody>
+ </table>
+
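Note that the per-group Avg. and Overall columns are not simple means of the listed task scores, which suggests they are weighted (presumably by each task's question count, which the table does not list). As a small illustrative sketch, not part of the repository, the ranking implied by the table can be reproduced from the Overall column; the scores below are copied verbatim from the table:

```python
# Hypothetical helper: rank the models above by their Overall score.
# Nothing is recomputed; values are copied from the benchmark table.
overall = {
    "InternVL2.5 (2B)": 34.98,
    "Qwen2.5-VL (3B) [Backbone]": 35.85,
    "Qwen2.5-VL (7B)": 36.85,
    "LLaVA-NeXT-Video (7B)": 30.64,
    "LLaVA-OneVision (7B)": 27.49,
    "InternVL2.5 (8B)": 43.24,
    "Llama-3.2-Vision (11B)": 28.82,
    "InternVL3 (14B)": 40.28,
    "Kimi-VL-Instruct (16B)": 33.58,
    "GPT-4o": 34.98,
    "Gemini 2.0 Flash": 32.56,
    "Random Baseline": 26.33,
}

# Sort models from best to worst overall accuracy.
ranking = sorted(overall, key=overall.get, reverse=True)
for rank, model in enumerate(ranking, start=1):
    print(f"{rank:2d}. {model}: {overall[model]:.2f}")
```

Since Python's sort is stable, models tied on Overall (e.g. InternVL2.5 (2B) and GPT-4o at 34.98) keep their table order.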
 
 
 ## Citation