Dataset schema (column: type, with min–max string-length or value statistics):

doi: string (lengths 10–10)
chunk-id: int64 (values 0–936)
chunk: string (lengths 401–2.02k)
id: string (lengths 12–14)
title: string (lengths 8–162)
summary: string (lengths 228–1.92k)
source: string (lengths 31–31)
authors: string (lengths 7–6.97k)
categories: string (lengths 5–107)
comment: string (lengths 4–398)
journal_ref: string (lengths 8–194)
primary_category: string (lengths 5–17)
published: string (lengths 8–8)
updated: string (lengths 8–8)
references: list
2310.10631
96
Table 12: Finetuning of various 7B base models on supervised mathematics datasets. All results with a Llama 2 initialization are copied from the literature (Luo et al., 2023; Yu et al., 2023). The LLEMMA 7B finetune is trained with identical hyperparameters to the models in Yu et al. (2023). H QUALITATIVE EXAMPLES Dataset overlap. Figure 6 shows example false positives when checking n-gram overlap with OpenWebMath documents for various n. Figure 7 shows an example OpenWebMath document that has 30-gram overlap with a MATH problem, and LLEMMA-7b’s generated solution. Task outputs. Figure 8 shows a generated proof in the informal2formal theorem proving task.
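A minimal sketch of the kind of n-gram overlap check described above, assuming whitespace tokenization and counting any shared n-gram as a hit; the paper's exact normalization is not specified here:

```python
from typing import Set, Tuple

def ngrams(text: str, n: int) -> Set[Tuple[str, ...]]:
    # Whitespace tokenization is an assumption, not necessarily the paper's choice.
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def has_overlap(sample: str, document: str, n: int = 30) -> bool:
    """Flag a hit if any n-gram of the sample appears verbatim in the document."""
    return bool(ngrams(sample, n) & ngrams(document, n))
```

Smaller n (10 or 20, as in Figure 6) matches more generic phrasing and therefore produces more false positives.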
2310.10631#96
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
97
Task outputs. Figure 8 shows a generated proof in the informal2formal theorem proving task.

OpenWebMath document: 2D affine transformations can be better represented using 2 by 2 matrices, since they are simply linear combinations of 2 variables. The advantage of this is that the matrices are associative under multiplication. Also, GPUs and modern toolkits are optimised to work with this representation. As a result, a scale matrix is \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}, and a rotation matrix is \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. A translation matrix is simply \begin{bmatrix} 1 & \frac{t_x}{y} \\ \frac{t_y}{x} & 1 ...

MATH problem: A rotation centered at the origin takes [vector] to [vector]. Which vector does the rotation take [vector] to?

MATH solution: The rotation matrix must be of the form \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. Thus, ...

Hit: \cos\theta & -\sin\theta \\ \sin\theta & \cos

OpenWebMath document: Basic Probability
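The 2 by 2 matrices quoted in the document above are easy to sanity-check numerically; the angle and scale values below are illustrative assumptions only:

```python
import numpy as np

theta = np.pi / 4   # illustrative rotation angle
sx, sy = 2.0, 3.0   # illustrative scale factors

scale = np.array([[sx, 0.0],
                  [0.0, sy]])
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
print(rotation @ v)  # v rotated by theta about the origin
print(scale @ v)     # v scaled by (sx, sy)
```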
2310.10631#97
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
98
\sin θ. Thus, ...

Hit: \cos\theta & -\sin\theta \\ \sin\theta & \cos

OpenWebMath document: Basic Probability. A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a divisor of 50? Express your answer as a common fraction. Apr 24, 2019. There are 100 integers between 1-100, inclusive. Since 50 is $$2*5^2$$, it has $$(1+1)(1+2)=(2)(3)=6$$ factors. Thus, the answer is $$\frac{6}{100}=\boxed{\frac{3}{50}}.$$

MATH problem: A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a perfect square?

Hit: A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a

OpenWebMath document: Fig. 2. Use values of the most used medicinal plants in the Safi Province (Morocco).
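The divisor count in the quoted forum answer can be verified in a couple of lines:

```python
from fractions import Fraction

divisors_of_50 = [k for k in range(1, 101) if 50 % k == 0]
print(divisors_of_50)                      # [1, 2, 5, 10, 25, 50], i.e. 6 divisors
print(Fraction(len(divisors_of_50), 100))  # 3/50
```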
2310.10631#98
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
99
# OpenWebMath document Fig. 2. Use values of the most used medicinal plants in the Safi Province (Morocco). It is also important to note that for the abovementioned medicinal plants, many other folk uses have been reported in different regions of Morocco. Furthermore, literature-based proof revealed that these species have proven a wide variety of biological and pharmacological activities (Table 4, Ref. [14, 17, 19, 20, 21, 23, 24, 26, 28, 30, 31, 34, 35, 36, 38, 39, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116]), which may confirm the different popular applications of ...
2310.10631#99
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
100
Generated solution (LLEMMA 7b): The are 21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100. We can see that... Figure 6: Data overlap: Example false positives using 10-gram match between MATH solutions and OpenWebMath documents (top), 20-gram match between MATH problems and OpenWebMath documents (middle), and 30-gram match between LLEMMA-7b’s generated solutions and OpenWebMath documents (bottom).
2310.10631#100
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
101
OpenWebMath document: A triangle is formed with edges along the line $y=\frac{2}{3}x+5$, the $x$-axis, and the line $x=k$. If the area of the triangle is less than $20$, find the sum of all possible integral values of $k$. Feb 28, 2018. Look at the graph, here... two triangles are possible: https://www.desmos.com/calculator/m6wnjpgldq The height of the triangles at any point will be formed by [(2/3)x + 5], and the bases will be [x - (-7.5)] = [x + 7.5]. So... we want to solve this: (1/2)[(2/3)x + 5][x + 7.5] = 20, i.e. [(2/3)x + 5][x + 7.5] = 40, which expands to (2/3)x^2 + 5x + 5x + 37.5 = 40, i.e. (2/3)x^2 + 10x - 2.5 = 0. Using a little technology... the max x value for the triangle formed above the x axis will be = .246, and the min x value for the triangle formed below the x axis will be = -15.246. With the given boundaries, the triangles with an area < 20
2310.10631#101
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
102
.246, and the min x value for the triangle formed below the x axis will be = -15.246. With the given boundaries, the triangles with an area < 20 units^2 give an integer sum of all possible x values of k of [(-15) + (-14) + (-13) + ... + (-2) + (-1) + 0] = -(15)(16)/2 = -120. Feb 28, 2018

MATH Problem: A triangle is formed with edges along the line y = (2/3)x + 5, the x-axis, and the line x = k. If the area of the triangle is less than 20, find the sum of all possible integral values of k. Answer: -120

Generated solution (LLEMMA 7b): The triangle has vertices (k, 0), (0, 5), and (k, (2/3)k + 5). The base of the triangle has length k, and the height of the triangle has length (2/3)k + 5. The area of the triangle is therefore A = (1/2)bh = (1/2)k((2/3)k + 5) = (1/3)k^2 + (5/2)k.
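The forum post's arithmetic, and the reference answer of -120, can be checked with sympy under the setup quoted above (triangle with vertices (-15/2, 0), (x, 0), and (x, (2/3)x + 5)):

```python
import sympy as sp

x = sp.symbols('x')
# Area of the triangle bounded by y = (2/3)x + 5, the x-axis, and x = k:
area = sp.Rational(1, 2) * (sp.Rational(2, 3) * x + 5) * (x + sp.Rational(15, 2))
roots = sp.solve(sp.Eq(area, 20), x)
print(sorted(float(r) for r in roots))  # approx [-15.246, 0.246]
# Integral k strictly between the roots give area < 20; their sum:
print(sum(range(-15, 1)))               # -120
```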
2310.10631#102
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
103
Since the area of the triangle is less than 20, we have (1/3)k^2 + (5/2)k < 20, i.e. (1/6)k^2 + (5/4)k - 10 < 0. This inequality is true if and only if (1/6)k^2 + (5/4)k - 10 is negative. Since (1/6)k^2 + (5/4)k - 10 is a quadratic with positive leading coefficient, it is positive if and only if its vertex is less than 0. The vertex of (1/6)k^2 + (5/4)k - 10 is (-10/3, ...). Since -10/3 < 0, we have that (1/6)k^2 + (5/4)k - 10 > 0 if and only if k < -10/3. Therefore, the only possible integral values of k are k = 0, 1, 2, 3, 4. The sum of these values is 10. Final Answer: The final answer is 10.

Figure 7: Data overlap: Example OpenWebMath document that has a 30-gram overlap with the given MATH problem, and LLEMMA-7b’s generated solution.
2310.10631#103
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
104
Problem: If 3a + b + c = −3, a + 3b + c = 9, a + b + 3c = 19, then find abc. Show that it is -56.

Informal Proof (Human-written): Summing all three equations yields that 5a + 5b + 5c = −3 + 9 + 19 = 25. Thus, a + b + c = 5. Subtracting this from each of the given equations, we obtain that 2a = −8, 2b = 4, 2c = 14. Thus, a = −4, b = 2, c = 7, and their product is abc = −4 × 2 × 7 = −56.

Formal Statement and Proof:
theorem mathd_algebra_338:
  fixes a b c :: real
  assumes "3 * a + b + c = -3"
    and "a + 3 * b + c = 9"
    and "a + b + 3 * c = 19"
  shows "a * b * c = -56"
proof -
  (* Summing all three equations yields that 5a + 5b + 5c = -3 + 9 + 19 = 25. Thus, a + b + c = 5. *)
  have "5 * a + 5 * b + 5 * c = -3 + 9 + 19" using assms <ATP>
  then have "5 *
2310.10631#104
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
105
b + c = 5. *)
  have "5 * a + 5 * b + 5 * c = -3 + 9 + 19" using assms <ATP>
  then have "5 * (a + b + c) = 25" <ATP>
  then have "a + b + c = 5" <ATP>
  (* Subtracting this from each of the given equations, we obtain that 2a = -8, 2b = 4, 2c = 14. Thus, a = -4, b = 2, c = 7, and their product is abc = -4 \times 2 \times 7 = -56. *)
  then have "2 * a = -8" "2 * b = 4" "2 * c = 14" using assms <ATP>
  then have "a = -4" "b = 2" "c = 7" <ATP>
  then show ?thesis <ATP>
qed
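The linear system behind the proof can be checked directly; a quick sympy verification of the values used in the formal steps:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
sol = sp.solve([sp.Eq(3*a + b + c, -3),
                sp.Eq(a + 3*b + c, 9),
                sp.Eq(a + b + 3*c, 19)], [a, b, c])
print(sol)                       # {a: -4, b: 2, c: 7}
print(sol[a] * sol[b] * sol[c])  # -56
```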
2310.10631#105
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.10631
106
Problem: If 3a + b + c = −3, a + 3b + c = 9, a + b + 3c = 19, then find abc. Show that it is -56. Figure 8: Informal-to-formal proving. The model is given the problem, informal proof, and formal statement, following Jiang et al. (2023). It generates a formal proof (starting with proof -) containing Isabelle code, comments ((*...*)) that align the informal and formal proofs, and calls to an automated prover (shown as <ATP>). The proof is from LLEMMA-7b with greedy decoding.
2310.10631#106
Llemma: An Open Language Model For Mathematics
We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known open base models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any further finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.
http://arxiv.org/pdf/2310.10631
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck
cs.CL, cs.AI, cs.LO
Updated references; corrected description of COPRA search budget
null
cs.CL
20231016
20231201
[ { "id": "2308.09583" }, { "id": "2205.11491" }, { "id": "2305.20050" }, { "id": "2204.02311" }, { "id": "2203.15556" }, { "id": "2009.03300" }, { "id": "2309.12284" }, { "id": "2110.08207" }, { "id": "2109.00110" }, { "id": "2009.03393" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2202.01344" }, { "id": "2309.00071" }, { "id": "2110.14168" }, { "id": "2303.04910" }, { "id": "2204.05862" }, { "id": "2211.03540" }, { "id": "2211.10435" }, { "id": "2305.10429" }, { "id": "2104.09864" }, { "id": "1909.08593" }, { "id": "2205.10893" }, { "id": "2001.08361" }, { "id": "2308.12950" }, { "id": "2201.08239" }, { "id": "2203.02155" }, { "id": "2109.01652" }, { "id": "2306.01694" } ]
2310.09611
0
arXiv:2310.09611v1 [cs.HC] 14 Oct 2023 # VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction Joshua Gorniak [email protected] Boston College Chestnut Hill, Massachusetts, USA # Yoon Kim [email protected] MIT Cambridge, Massachusetts, USA # Stephen Gwon [email protected] Cambridge Rindge & Latin School Cambridge, Massachusetts, USA # Donglai Wei [email protected] Boston College Chestnut Hill, Massachusetts, USA # Nam Wook Kim nam.wook.kim@bcu Boston College Chestnut Hill, Massachusetts, USA
2310.09611#0
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
1
# Bevan Koopman CSIRO [email protected] Guido Zuccon The University of Queensland [email protected] ABSTRACT Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness. # CCS CONCEPTS • Information systems → Language models.
2310.09497#1
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
1
# Nam Wook Kim nam.wook.kim@bcu Boston College Chestnut Hill, Massachusetts, USA

[Figure 1 screenshot: the VizAbility interface on a "Global Land and Ocean January-December Temperature Anomalies" chart. Recoverable panel text: "Explore the structure and components of the chart through a text representation. Instructions: Press enter on the tree view to explore the contents of the chart. Navigate using the arrow keys. To exit press escape."; "Supplement your knowledge of the chart by asking questions, either through typing or voice input"; a Vega-Lite spec; and a Q&A pipeline with query classification via few-shot prompting into Analytical, Visual, Contextual, and Navigation queries, drawing on data (CSV Agent), tree view text, user location, a Web Browser Agent, shortest path finding, and end-point detection, backed by a large language model (OpenAI GPT-3.5 Turbo).]

Figure 1: VizAbility pipeline: users navigate the chart using a keyboard and ask questions that are answered by classifying their query type (e.g., visual query) and referring to underlying data, chart visual structure, user location, and internet browsing.
2310.09611#1
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
2
# CCS CONCEPTS • Information systems → Language models. KEYWORDS Large Language Model for Zero-shot ranking, setwise prompting, sorting algorithm ACM Reference Format: Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2023. A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models. In Arxiv, 2023, preprint. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn 1 INTRODUCTION Large Language Models (LLMs) such as GPT-3 [2], FlanT5 [26], and PaLM [3] have been shown highly effective across a diverse range of natural language processing tasks under the zero-shot settings [1, 2, 9, 25]. Notably, these LLMs have also been adapted for zero-shot document ranking tasks, exhibiting strong zero-shot
2310.09497#2
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
2
ABSTRACT Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility’s multimodal approach. We explore opportunities for further refinement, including comprehensive benchmark testing and integration with current visualization tools. Conference acronym ’XX, June 03–05, 2018, Woodstock, NY © 2018 Association for Computing Machinery. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Woodstock ’18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY, https://doi.org/XXXXXXX.XXXXXXX.
2310.09611#2
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
3
ranking capabilities [10, 12, 17–20]. The methodologies for harnessing LLMs in zero-shot ranking tasks can be broadly categorized into three main approaches: Pointwise [10, 19], Listwise [12, 17, 20], and Pairwise [18]. These approaches employ different prompting strategies to instruct the LLM to output a relevance estimation for each candidate document. While these LLM-based zero-shot ranking approaches have been successful individually, it is worth noting that there has been a lack of fair comparison in the literature regarding their effectiveness, and in particular, their efficiency within the exact same experimental framework. This includes factors such as utilizing the same size of LLM, evaluation benchmarks, and computational resources. We believe it is very important to establish a rigorous framework for evaluating these LLM-based zero-shot ranking approaches. By doing so, we can draw meaningful conclusions about their comparative effectiveness and efficiency.
2310.09497#3
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
3
CCS CONCEPTS • Human-centered computing → Interactive systems and tools; Visualization systems and tools. # KEYWORDS data visualization, accessibility, blind and low vision people ACM Reference Format: Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, and Nam Wook Kim. 2018. VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction. In Woodstock ’18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY. ACM, New York, NY, USA, 13 pages. https://doi.org/XXXXXXX.XXXXXXX 1 INTRODUCTION Data visualization has become an indispensable tool in our broader society, aiding in the comprehension of vital information and facilitating informed decision-making [36]. Its strength stems from leveraging the vast information bandwidth of our visual perception, which surpasses other sensory modalities [18]. However, an overreliance on visual representation can inadvertently marginalize those with blindness or low vision (BLV), restricting their ability to engage with and understand data visualizations [39]. Individuals with BLV often come across data visualizations while using screen readers such as JAWS, NVDA, and VoiceOver to navigate the
2310.09611#3
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
4
Thus, in this paper, we first conduct a systematic evaluation of all existing approaches within a consistent experimental environment. In addition to assessing ranking effectiveness, we also compare the efficiency of these methods in terms of computational expenses and query latency. Our findings indicate that the Pairwise approach emerges as the most effective but falls short in terms of efficiency, even with the assistance of sorting algorithms aimed at improving this. Conversely, the Pointwise approach stands out as the most efficient but lags behind other methods in terms of ranking effectiveness. The Listwise approach, which relies solely on the generation of document labels in order, can strike a middle ground between efficiency and effectiveness, but this varies considerably based on configuration, implementation, and evaluation dataset (highlighting the importance of thoroughly evaluating these models under multiple settings). Overall, these comprehensive results furnish an understanding of the strengths and weaknesses of these LLM-based zero-shot ranking approaches, providing valuable insights for practitioners seeking to select the most suitable approach for real-world applications.
2310.09497#4
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
4
web [34, 46]. Unfortunately, a significant portion of data visualizations on the web remains largely inaccessible to this group [26, 46], resulting in a pronounced information gap. Numerous assistive technologies have been developed to allow BLV users to access visualizations using sensory modalities other than vision [34]. Tactile visualizations can provide a tangible representation of data while necessitating specialized hardware such as haptic displays [42] and embossing machines [15]. On the other hand, sonification can enable users to discern trends and anomalies through sound [51], but it is typically limited to single-series data. Traditional methods for adapting web visualizations for screen readers include data tables and alternative text [34]. However, these methods often diminish the inherent advantages of data visualizations. New strategies have emerged that aim to offer enriched data experiences by enabling users to navigate chart structures with keyboards [48, 53, 55] or by permitting them to pose verbal questions [45]. A recent comparative study indicates that each approach has its own advantages and disadvantages [33].
2310.09611#4
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
5
Having considered all the different approaches and their results in terms of efficiency and effectiveness tradeoffs, we set about devising a method that was both effective and efficient. Our approach was to take the most effective model (Pairwise) and to enhance its efficiency (without seriously compromising effectiveness). Our solution is a novel Setwise prompting approach. This concept stems from our realization that the sorting algorithms employed by Pairwise approaches can be accelerated by comparing multiple documents, as opposed to just a pair at a time.
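A schematic sketch of why a set-at-a-time comparator cuts the number of LLM calls: a selection-style loop in which pick_best stands in for the actual Setwise prompting call; the function name, set size, and loop structure are illustrative assumptions, not the paper's implementation (which plugs Setwise comparisons into the sorting algorithms used by Pairwise approaches):

```python
from typing import Callable, List

def setwise_top_k(docs: List[str], query: str,
                  pick_best: Callable[[str, List[str]], int],
                  k: int = 10, set_size: int = 4) -> List[str]:
    """Rank the top-k documents. Each pick_best call compares the current
    best against set_size - 1 new candidates in a single LLM inference,
    instead of the one-pair-per-inference cost of a Pairwise comparator."""
    remaining = list(range(len(docs)))
    ranked: List[str] = []
    while remaining and len(ranked) < k:
        best, rest = remaining[0], remaining[1:]
        for start in range(0, len(rest), set_size - 1):
            group = [best] + rest[start:start + set_size - 1]
            # pick_best returns the index, within group, of the document
            # the LLM judges most relevant to the query.
            best = group[pick_best(query, [docs[i] for i in group])]
        ranked.append(docs[best])
        remaining.remove(best)
    return ranked
```

Finding each next-best document then takes roughly (n - 1) / (set_size - 1) inferences rather than n - 1, which is the source of the efficiency gain the abstract describes.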
2310.09497#5
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
5
This work introduces VizAbility, a multimodal approach to creating accessible data visualizations for screen readers, blending keyboard navigation with conversational interaction (Figure 1). Instead of focusing exclusively on single-modality techniques, we combine the strengths of existing accessibility methods [33] to deliver an enhanced data experience, while minimizing their drawbacks. We utilize the established structured navigation method to facilitate a richer comprehension of chart appearances [10] while also giving users the option to transition to a data table view for a more familiar interaction. Our innovation lies in the question-and-answer segment, which addresses on-demand queries, fostering efficient data exploration.
2310.09611#5
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
6
Our Setwise prompting approach instructs LLMs to select the most relevant document to the query from a set of candidate documents. This straightforward adjustment allows the sorting algorithms to infer relevance preferences for more than two candidate documents at each step, thus significantly reducing the total number
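A minimal sketch of how such set-based comparisons could drive a heap-based top-k ranking, assuming a `pick_best` oracle that wraps the LLM's Setwise call and returns the index of the most relevant passage in a small group; the function name and the child count are illustrative stand-ins, not the paper's exact implementation.

```python
from typing import Callable, List

def setwise_heap_topk(query: str, docs: List[str], k: int,
                      pick_best: Callable[[str, List[str]], int],
                      children: int = 3) -> List[str]:
    """Top-k ranking with a c-ary max-heap: each sift-down asks the oracle
    to pick the most relevant passage among a parent and its children."""
    heap = list(range(len(docs)))  # heap of document indices

    def sift_down(i: int, size: int) -> None:
        while True:
            kids = [c for c in range(children * i + 1, children * (i + 1) + 1) if c < size]
            if not kids:
                return
            group = [i] + kids                      # heap positions in this comparison
            best = group[pick_best(query, [docs[heap[p]] for p in group])]
            if best == i:                           # parent already the most relevant
                return
            heap[i], heap[best] = heap[best], heap[i]
            i = best

    size = len(heap)
    for i in range(size // children, -1, -1):       # heapify bottom-up
        sift_down(i, size)
    top = []
    for _ in range(min(k, size)):                   # pop the k most relevant
        top.append(docs[heap[0]])
        size -= 1
        heap[0] = heap[size]
        sift_down(0, size)
    return top

# Toy oracle: pretend longer passages are more relevant.
print(setwise_heap_topk("q", ["a", "abc", "ab", "abcd"], 2,
                        lambda q, group: max(range(len(group)), key=lambda i: len(group[i]))))
```

Each sift-down costs one oracle call instead of up to `children` pairwise calls, which is where the savings over a binary-heap Pairwise sort come from.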
2310.09497#6
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
6
Our LLM-based pipeline first uses few-shot prompting to classify user queries into visual, analytical, contextual, and navigation queries. Once classified, VizAbility employs a query-specific prompting strategy. For analytical and visual queries, we aggregate both the chart's transformed data and color encoding into one CSV file, which is subsequently fed along with the keyboard-navigable text representation [10] to the LLM via a CSV Agent [2]. Contextual queries utilize a Web Browser Agent [3], whereas navigation queries employ the LLM to discern the starting/ending nodes from a user query and employ a breadth-first search algorithm to calculate the shortest path between the nodes. We designed the prompts to minimize hallucinations and address unanswerable queries via structured output formatting. We collaborated with a blind co-design participant in the development of VizAbility, holding two feedback sessions. Their insights, particularly on enhancing interface transparency, were integral to shaping our system design. We carried out both quantitative and qualitative assessments to evaluate VizAbility's question & answering pipeline and overall usability. We evaluated response accuracy using a
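A rough sketch of the two mechanics described here: dispatching a classified query to a handler, and computing the shortest path between two chart nodes with breadth-first search. All names (`route_query`, the handler stubs, the adjacency-list graph format) are hypothetical stand-ins for illustration, not VizAbility's actual API.

```python
from collections import deque
from typing import Dict, List, Optional

def bfs_shortest_path(graph: Dict[str, List[str]],
                      start: str, goal: str) -> Optional[List[str]]:
    """Breadth-first search over the chart's traversal graph; returns the
    node sequence from start to goal, or None if the goal is unreachable."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

def route_query(kind: str, question: str) -> str:
    """Dispatch a classified query to the matching handler (placeholder stubs)."""
    handlers = {
        "visual": lambda q: f"[CSV agent answers: {q}]",      # data + encodings as CSV
        "analytical": lambda q: f"[CSV agent answers: {q}]",
        "contextual": lambda q: f"[web agent answers: {q}]",  # external knowledge
        "navigation": lambda q: "[BFS over the chart graph]",
    }
    return handlers[kind](question)

# Example: from the chart root to a specific bar in a toy traversal graph.
chart = {"root": ["x-axis", "y-axis"], "x-axis": ["bar:2020", "bar:2021"]}
print(bfs_shortest_path(chart, "root", "bar:2021"))  # ['root', 'x-axis', 'bar:2021']
```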
2310.09611#6
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
7
Figure 1 prompt templates (reconstructed from the extracted figure text):
(a) Pointwise yes/no: "Passage: {passage} Query: {query} Does the passage answer the query?" (scored from the 'Yes'/'No' logits).
Pointwise QLM: "Passage: {passage} Please write a question based on this passage." (scored from the logits of the actual query).
(b) Listwise: "The following are {num} passages, each indicated by number identifier []. I can rank them based on their relevance to query: {query} [1] {passage_1} [2] {passage_2} ... The ranking results of the {num} passages (only identifiers) is:" (generated).
(c) Pairwise: "Given a query {query}, which of the following two passages is more relevant to the query? Passage A: {passage_1} Passage B: {passage_2} Output Passage A or Passage B:" (generated or from logits).
(d) Setwise: "Given a query {query}, which of the following passages is the most relevant one to the query? Passage A: {passage_1} Passage B: {passage_2} Passage C: {passage_3} Output only the passage label of the most relevant passage:" (generated or from logits).
2310.09497#7
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
7
out both quantitative and qualitative assessments to evaluate VizAbility's question & answering pipeline and overall usability. We evaluated response accuracy using a dataset of 979 real BLV user questions derived from previous research [32]. We split the dataset, using 80% for testing and 20% for validation. Our query classification achieved an accuracy of 88.5%. For response evaluation, we leveraged GPT-4 to measure the coherence between the ground truth and our response on a 5-point Likert scale, ranging from "Very Poor" to "Very Good". Notably, 47% of the responses were rated as "Very Good". Additionally, using a binary scale to
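A sketch of the kind of LLM-as-judge grading loop this evaluation describes, under stated assumptions: the rubric wording, the label parsing, and the `judge` callable are illustrative, not the paper's exact prompts.

```python
from typing import Callable, List, Tuple

LIKERT = ("Very Poor", "Poor", "Fair", "Good", "Very Good")

def grade_coherence(pairs: List[Tuple[str, str]],
                    judge: Callable[[str], str]) -> List[str]:
    """Ask an LLM judge to rate each (ground truth, system answer) pair
    on a 5-point Likert scale and return the raw labels."""
    ratings = []
    for truth, answer in pairs:
        prompt = (
            "Rate how well the answer matches the ground truth on a 5-point "
            f"scale ({', '.join(LIKERT)}). Reply with the label only.\n"
            f"Ground truth: {truth}\nAnswer: {answer}\nRating:"
        )
        ratings.append(judge(prompt).strip())
    return ratings

def binary_accuracy(labels: List[str]) -> float:
    """Separate binary pass: fraction of responses judged 'Correct'."""
    return sum(label == "Correct" for label in labels) / max(len(labels), 1)

# Toy judge that always answers "Good".
print(grade_coherence([("42", "forty-two")], lambda p: "Good"))
```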
2310.09611#7
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
8
Figure 1: Different prompting strategies. (a) Pointwise, (b) Listwise, (c) Pairwise and (d) our proposed Setwise.
of comparisons required; this leads to substantial savings in computational resources. Furthermore, beyond the adjustment to Pairwise approaches, Setwise prompting allows the utilization of model output logits to estimate the likelihood of ranks of document labels, a capability not feasible in existing Listwise approaches, which solely rely on document label ranking generation, a process that is slow and less effective. We evaluate our Setwise approach along with other existing approaches under the same experimental setting. Our results show that the incorporation of our Setwise prompting substantially improves the efficiency of both Pairwise and Listwise approaches. In addition, Setwise sorting enhances Pairwise and Listwise robustness to variations in the internal ordering quality of the initial rankings: no matter what the initial ordering of the top-k documents to rank is, our method provides consistent and effective results. This is unlike other methods that are highly susceptible to such initial ordering. To conclude, this paper makes three key contributions to our understanding of LLM-based zero-shot ranking approaches:
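One way the logit-based variant could look: read the model's next-token distribution at the point where a label should be generated, and rank the candidate documents by the logits of their label tokens. A minimal sketch with Hugging Face transformers; `gpt2` is only a stand-in checkpoint, and treating each label as a single leading-space token is an assumption of this sketch.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def rank_by_label_logits(prompt: str, labels: list) -> list:
    """Rank candidates by the next-token logit of each label ('A', 'B', ...)."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_logits = model(ids).logits[0, -1]         # distribution over next token
    label_ids = [tok.encode(" " + label)[0] for label in labels]
    scores = [next_logits[i].item() for i in label_ids]
    return sorted(range(len(labels)), key=lambda i: scores[i], reverse=True)

prompt = ("Given a query q, which of the following passages is the most relevant "
          "one to the query? Passage A: ... Passage B: ... Passage C: ...\n"
          "Output only the passage label of the most relevant passage:")
print(rank_by_label_logits(prompt, ["A", "B", "C"]))   # e.g. [0, 2, 1]
```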
2310.09497#8
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
8
categorize responses as either "Correct" or "Incorrect", we attained a 69.4% accuracy rate. For the usability study, we enlisted six BLV participants through the National Institute for the Blind. Initially, participants explored VizAbility without guidance and were subsequently introduced to various query types. They also completed the System Usability Scale survey. The results suggest that while participants could learn to use the system, discerning query types without guidance proved challenging. Nonetheless, they acknowledged the merits of the integrated approach and offered suggestions for further improvements and potential applications. Combining insights from both quantitative and qualitative evaluations, we identify potential avenues for future work. These include enhancing user-driven customization, developing a more robust benchmarking system, and integrating our solution into existing visualization tools.
2310.09611#8
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
9
To conclude, this paper makes three key contributions to our understanding of LLM-based zero-shot ranking approaches: 2.1 Pointwise prompting approaches Figure 1a shows pointwise approaches. There are two popular directions of prompting LLMs for ranking documents in a pointwise manner: generation and likelihood. In the generation approach, a "yes/no" generation technique is used: LLMs are prompted to generate whether the provided candidate document is relevant to the query, with the process repeated for each candidate document. Subsequently, these candidate documents are re-ranked based on the normalized likelihood of generating a "yes" response [10, 14]. The likelihood approach involves query likelihood modelling (QLM) [15, 28, 29], wherein LLMs are prompted to produce a relevant query for each candidate document. The documents are then re-ranked based on the likelihood of generating the actual query [19]. It is worth noting that both pointwise methods require access to the output logits of the model to be able to compute the likelihood scores. Thus, it is not possible to use closed-source LLMs to implement these approaches if the corresponding APIs do not expose the logit values: this is the case, for example, for GPT-4.
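A sketch of the QLM scoring this paragraph describes: sum the log-probabilities of the actual query tokens conditioned on the passage prompt. Again with transformers and `gpt2` purely as a placeholder checkpoint; the prompt wording paraphrases Figure 1. The yes/no variant is analogous, reading the logit of "Yes" after a relevance question instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def qlm_score(passage: str, query: str) -> float:
    """Query likelihood: total log-prob of the query tokens given the prompt."""
    prompt = f"Passage: {passage}\nPlease write a question based on this passage.\n"
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    query_ids = tok(query, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, query_ids], dim=1)
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    offset, score = prompt_ids.shape[1], 0.0
    for i in range(query_ids.shape[1]):
        # logits at position t predict token t+1, hence the -1 shift
        score += logprobs[0, offset + i - 1, query_ids[0, i]].item()
    return score

docs = ["Heaps are complete trees used for priority queues.",
        "Pancakes are best with maple syrup."]
query = "what is a heap?"
print(sorted(docs, key=lambda d: qlm_score(d, query), reverse=True)[0])
```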
2310.09497#9
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
9
2 RELATED WORK 2.1 Accessibility Systems for Data Visualization A recent survey offers an overview of previous efforts exploring the use of non-visual modalities, such as speech, sound, and touch [34]. For example, sonification employs non-speech auditory channels, such as pitch and volume, to represent data [43, 51]. While this can offer users a swift overview of a graph, it struggles to communicate exact values and might not be effective beyond single-series charts [19]. An empirical study indicates that blind individuals favor speech over sonification, as the cognitive load for a sonified graph feels subjectively more intense [43]. Tactile systems employ methods like embossed prints, haptic feedback through vibrations, and braille for text representation. These systems enable both simultaneous and on-demand exploration of data trends, offering an advantage over linear audio [17]. However, they also necessitate enhanced perceptual motor skills. Similar to sonification, accurately discerning complex structures can be challenging, often demanding a more refined spatial resolution [15]. Producing tactile graphs typically involves specialized hardware, such as embossers, which might not be economically feasible for the average user [34]; thus, they are typically used and created in the field of education by teachers [16].
2310.09611#9
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
10
(1) We conduct a systematic examination of all existing LLM-based zero-shot ranking approaches and our novel Setwise approach under strict and consistent experimental conditions, including efficiency comparisons which have been overlooked in the literature. Our comprehensive empirical evaluation on popular zero-shot document ranking benchmarks offers valuable insights for practitioners. (2) We introduce an innovative Setwise prompting approach that enhances the sorting algorithms employed in the Pairwise method, resulting in highly efficient zero-shot ranking with LLMs. (3) We further adapt how our Setwise prompting approach computes rankings to the Listwise approach, leveraging the model output logits to estimate the likelihood of rankings. This leads to a more effective and efficient Listwise zero-shot ranking.
2310.09497#10
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
10
Screen readers, utilizing text/speech modalities, stand as the predominant assistive technology, particularly for navigating web content. The go-to accessibility techniques for screen readers encompass alternative text and data tables. Yet, these strategies often reduce data visualizations to brief descriptions or mere numbers, undermining their inherent advantages. An alternative approach involves crafting navigable text descriptions derived from the chart's structure. A select group of data visualization tools and toolkits, such as HighCharts, offer some degree of this navigation and customization [33]. In recent times, several systems have elevated their offerings by introducing advanced navigation structures, representing the chart as a traversable graph structure [14, 22, 48, 53, 55]. Voice-based virtual assistants are emerging as valuable accessibility tools in human-computer interaction [49]. However, only a handful of studies have delved into using natural language for accessing data visualization content. For instance, Murillo-Morales & Miesenberger [41] showcased a prototype system where users can ask predefined questions related to data metrics such as mean,
2310.09611#10
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
11
2 BACKGROUND & RELATED WORK There are three main prompting approaches for zero-shot document ranking employing LLMs: Pointwise [10, 19], Listwise [12, 17, 20], and Pairwise [18]. In this section, we delve into the specifics of these while situating our work within the existing literature. As a visual aid we will refer to Figure 1 as we discuss each method. 2.2 Listwise prompting approaches Figure 1b shows listwise approaches. Here the LLMs receive a query along with a list of candidate documents and are prompted to generate a ranked list of document labels based on their relevance to the query [12, 17, 20]. However, due to the limited input length allowed by LLMs, including all candidate documents in the prompt is not feasible. To address this, current listwise approaches use a sliding window method. This involves re-ranking a window of candidate documents, starting from the bottom of the original ranking list and progressing upwards. This process can be repeated multiple times to achieve an improved final ranking and allows for early stopping mechanisms to target only the top-𝑘 ranking, thereby conserving computational resources. In contrast to pointwise methods, which utilize the likelihood value of the output tokens for ranking documents, listwise approaches rely on the more efficient process of generation of the ranking list.
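A compact sketch of the sliding-window pass described above, assuming a `rerank_window` callable that wraps the LLM's listwise call and returns a permutation of window indices, most relevant first; the window and step sizes are illustrative defaults, not the values used in prior work.

```python
from typing import Callable, List

def sliding_window_rerank(query: str, docs: List[str],
                          rerank_window: Callable[[str, List[str]], List[int]],
                          window: int = 4, step: int = 2) -> List[str]:
    """One bottom-up pass: re-rank overlapping windows of candidates so the
    most relevant documents bubble toward the top of the list."""
    docs = list(docs)
    start = max(len(docs) - window, 0)
    while True:
        chunk = docs[start:start + window]
        order = rerank_window(query, chunk)             # LLM's ranked label list
        docs[start:start + window] = [chunk[i] for i in order]
        if start == 0:
            break
        start = max(start - step, 0)                    # overlap with previous window
    return docs

# Toy reranker: shorter documents first (stands in for the LLM).
toy = lambda q, c: sorted(range(len(c)), key=lambda i: len(c[i]))
print(sliding_window_rerank("q", ["dddd", "ccc", "bb", "a", "eeeee"], toy))
```

Repeating the pass, or stopping once the top of the list is stable, matches the early-stopping behavior the text mentions.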
2310.09497#11
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
11
extremes, and range. In a similar vein, VoxLens [32] facilitates voice-activated interactions capable of addressing basic queries with terms like "maximum" and "median". Additionally, Kim et al. [32] used a wizard-of-oz approach to study the types of questions blind individuals pose about charts. To address the limitations of relying on a single sensory modality, multi-sensory perception is frequently utilized. A prevalent strategy involves merging verbal (speech) cues with non-verbal ones, such as sonification, tactile graphics, and haptic feedback. Examples include offering on-demand audio descriptions of touched elements [21, 23, 35] or pairing sonification with speech or screen readers [47, 48]. However, these solutions often necessitate specialized software and hardware, especially for interactive tactile support, making them expensive to implement.
2310.09611#11
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
12
In this study, we adopt a different multimodal approach that merges structured chart and table navigation using the keyboard with conversational interaction via verbal commands. Our work builds on prior work that showcases the respective advantages of data tables (familiarity), structured navigation via keyboard (deeper understanding) [55], and conversational interaction via verbal commands (faster data exploration) [45]. Our primary technical advancement centers on employing LLMs to substantially enhance the current chart question-and-answer mechanism for the visually impaired.
# 2.2 Question & Answering Systems for Data Visualization
Within the realm of image understanding research, visual question answering has been rigorously explored in both natural language processing and computer vision, specifically regarding answering text-based queries about images [8, 28, 54]. Yet, the majority of these endeavors have centered on natural scene images rather than human-generated visuals such as data visualizations.
2310.09611#12
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
13
questions differently compared to those with sight [13, 24]. A limited number of systems directly address the challenge of crafting question-and-answer systems tailored for the blind [41, 45]. However, these systems do not always offer specialized features for the blind and are constrained in their question-answering capabilities. For instance, VoxLens [45] is limited to charts with single series data, while the system by Murillo-Morales & Miesenberger [41] is restricted to bar charts. Kim et al. [32] have recently curated a set of questions posed by blind individuals through a wizard-of-oz study, laying the groundwork for more refined and targeted question-and-answer systems. In this paper, we present an enhanced chart question-and-answer system for the blind, harnessing the power of LLMs. We integrate structured information from the keyboard navigation method [10], which takes Vega-lite as input. Our system addresses a wide range of queries, from data and visual to contextual ones that necessitate auxiliary information surrounding the chart. Additionally, it facilitates navigation queries to synchronize with keyboard navigation. We assessed our system using the data collection from Kim et al. [32], which comprises questions posed by blind individuals.
# 3 VIZABILITY DESIGN DECISIONS
2310.09611#13
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
14
# 3 VIZABILITY DESIGN DECISIONS
G1: Enable understanding the chart structure. Bridging the perceptual gap between BLV and sighted individuals requires a deep understanding of chart structures. While some blind individuals may not prioritize visual encoding information [38, 48], previous research indicates that navigating charts based on their visual encoding helps BLV users gain a clearer visual understanding. Furthermore, a hierarchical representation of charts, rooted in visual encodings, offers a layered approach to information, allowing users to delve from broad summaries to specific data points [48]. In this study, we employ Olli [10] to facilitate structured chart navigation.
2310.09611#14
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
15
query [16, 18]. To re-rank all candidate documents, a basic method, called AllPairs, involves generating all possible permutations of document pairs from the candidate set. Pairs are then independently fed into the LLM, and the preferred document for each pair is determined. Subsequently, an aggregation function is employed to assign a score to each document based on the inferred pairwise preferences, and the final ranking is established based on the total score assigned to each document [16]. However, this aggregation-based approach suffers from high query latency: LLM inference on all document pairs can be computationally expensive. To address this efficiency issue in pairwise approaches, prior studies have introduced sampling [7, 13] and sorting [18] algorithms. In this paper, we focus on sorting algorithms because, assuming an LLM can provide ideal pairwise preferences, the sorting algorithms offer the theoretical assurance of identifying the top-𝑘 most relevant documents from the candidate pool. In prior work [18], two sorting algorithms [8], heap sort and bubble sort, were employed. Unlike AllPairs, these algorithms leverage efficient data structures to selectively compare document
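As a concrete illustration of the sorting route, here is a minimal bubble-sort-style top-k pass, assuming a `prefer_first` oracle that wraps the pairwise LLM prompt and returns True when the first document is judged more relevant; this is a sketch of the general idea, not the exact algorithm of the cited work.

```python
from typing import Callable, List

def pairwise_bubble_topk(query: str, docs: List[str], k: int,
                         prefer_first: Callable[[str, str, str], bool]) -> List[str]:
    """k bubbling passes: each pass floats the most relevant remaining
    document to the front, so ranking can stop once the top-k are fixed."""
    docs = list(docs)
    for i in range(min(k, len(docs))):
        for j in range(len(docs) - 1, i, -1):      # bubble upward from the bottom
            if prefer_first(query, docs[j], docs[j - 1]):
                docs[j], docs[j - 1] = docs[j - 1], docs[j]
    return docs[:k]

# Toy oracle: longer documents are "more relevant" (stands in for the LLM).
toy = lambda q, a, b: len(a) > len(b)
print(pairwise_bubble_topk("q", ["bb", "aaaa", "c", "ddd"], 2, toy))  # ['aaaa', 'ddd']
```

Stopping after k passes is exactly the early-termination property that makes sorting preferable to AllPairs for top-k ranking.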
2310.09497#15
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
15
Recent studies have begun to focus on data visualization images [25]. For example, FigureQA [30] offers a corpus tailored for yes/no questions, such as "Is Light Gold less than Periwinkle?". Conversely, DVQA [29] expands its purview to encompass questions about chart structure ("are the bars horizontal?"), data retrieval ("what percent of people prefer A?"), and reasoning ("Is A preferred more than B?"). While both FigureQA and DVQA rely on synthetically generated charts, PlotQA introduces a large-scale dataset of real-world scientific plots. Unlike the templated questions of the aforementioned datasets, ChartQA delivers human-composed questions, enhanced using LLMs [40]. These models predominantly process pixel images as input. For instance, ChartQA extracts data tables and other image features, feeding them into vision and language task models [12]. Consequently, their accuracy largely hinges on their image processing capabilities, often leading to suboptimal results. In a different approach, Kim et al. [31] unveiled a system that not only answers questions but also provides explanations, operating on Vega-lite [44] instead of images. All the current question-answering systems are limited to basic visualization types like bar, line, and pie charts.
2310.09611#15
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
16
sorting algorithms [8], heap sort and bubble sort, were employed. Unlike AllPairs, these algorithms leverage efficient data structures to selectively compare document pairs, which can quickly pull the most relevant documents out from the candidate pool and place them at the top of the final ranking. This is particularly suitable for the top-𝑘 ranking task, where only a ranking of the 𝑘 most relevant documents is needed. These sorting algorithms provide a stopping mechanism that prevents the need to rank all candidate documents. From a theoretical standpoint the differences and relative advantages among these three families of zero-shot document ranking that employ LLMs are clear. However, from an empirical standpoint there has been no fair and comprehensive evaluation of these techniques in terms of effectiveness vs. efficiency, and across factors such as sizes of LLMs, benchmarks, and computational resources.
2310.09497#16
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
16
While chart QA systems hint at the potential for enhancing visualization accessibility, they often overlook the specific needs of BLV users. Recent studies have shown that BLV users frame
G2: Support efficient data exploration. Navigating through a large number of data points using keyboard navigation can be cumbersome, as highlighted in previous studies [33, 55]. Furthermore, extracting aggregate measures and discerning perceptual patterns beyond basic value retrievals becomes challenging when navigating data points individually. A conversational agent offers a potential solution to these challenges [33]. When combined with keyboard navigation, the user's current location can offer situational context, reducing the cognitive load when formulating clear questions for the intelligent agent. In this study, we leverage the advanced language understanding and reasoning capabilities of LLMs to address on-demand conversational queries. G3: Provide contextual knowledge on demand. Current chart question and answering systems often neglect the distinct types of questions posed by blind versus sighted individuals. Recent research involving blind participants indicates that they frequently ask contextual questions alongside data-related and visual inquiries [32]. These questions often seek external information not present in the chart, such as meanings about axes or specific data labels. Providing answers to these inquiries can enhance the self-efficacy and autonomy of blind individuals. In our approach, we utilize an LLM with web search capabilities to address these contextual queries.
2310.09611#16
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
17
3 SETWISE RANKING PROMPTING 3.1 Limitations of Current Approaches The efficiency of LLM-based zero-shot ranking methods hinges on two critical dimensions. First, the number of LLM inferences significantly impacts efficiency. Given that LLMs are large neural networks with billions of parameters, inference is computationally intensive. Hence, an increased number of LLM inferences introduces a considerable computational overhead. This is notably observed in the current Pairwise approach, which is inefficient due to the extensive need for inferring preferences for the many document pairs. While sorting algorithms offer some relief, they do not entirely mitigate the efficiency issue. Second, the number of LLM-generated tokens per inference plays a pivotal role. LLMs employ a transformer decoder for autoregressive token generation, where each next token depends on the previously generated tokens. Each additional generated token requires an extra LLM inference. This accounts for the inefficiency of the existing Listwise approach, which relies on generating an entire ranking of document label lists, often requiring a substantial number of generated tokens.
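A back-of-the-envelope way to see the first cost dimension, under the idealized assumption that one call comparing c documents replaces c-1 adjacent pairwise comparisons in a bubbling pass; the formulas are illustrative, not taken from the paper.

```python
import math

def pairwise_bubble_calls(n: int, k: int) -> int:
    """LLM calls for k bubbling passes over n candidates (pass i compares n-1-i pairs)."""
    return sum(n - 1 - i for i in range(k))

def setwise_bubble_calls(n: int, k: int, c: int) -> int:
    """Comparing c documents per call shrinks each pass by roughly a factor of c-1."""
    return sum(math.ceil((n - 1 - i) / (c - 1)) for i in range(k))

# e.g. n=100 candidates, top k=10: 945 pairwise calls vs. 318 setwise calls (c=4)
print(pairwise_bubble_calls(100, 10), setwise_bubble_calls(100, 10, 4))
```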
2310.09497#17
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
17
G4: Use data tables as a familiar fallback strategy. The hierarchical text representation of the chart may be regarded as excessive for smaller data sets, in which case conventional data tables are the preferable alternative. Moreover, data tables are well supported by screen readers and are the most familiar method. This perspective, although not our initial focus, was reinforced by our user study and corroborated by previous research [33, 55]. Consequently, we incorporated the data table feature post-user study (Section 6).

G5: Reduce the gulfs of execution and evaluation. Beyond the primary objectives, enhancing the user experience of VizAbility was also a key focus. For example, we expanded upon the query types identified in prior research [32] by introducing navigation queries, facilitating nonlinear navigation across charts and assisting users with orientation. We meticulously designed LLM prompts to ensure responses were succinct yet descriptive, while also minimizing the risk of misinterpretations or fabricated information. Additionally, we ensured numbers were formatted properly for screen readers, offered an alternative text box for speech queries, and added loading indicators to signal when LLM responses were pending.

# 4 VIZABILITY SYSTEM INTERFACE & ARCHITECTURE
2310.09611#17
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
18
3.2 Speeding-up Pairwise with Setwise

To solve the inefficiency of these approaches, we propose a novel Setwise prompting approach. Our prompt, as illustrated in Figure 1d, instructs the LLM to select the most relevant document for the given query from a set of documents, hence the term Setwise prompting. We specifically treat the collection of documents as an unordered set, and later experiments will show that Setwise prompting is quite robust to document ordering.
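To make the prompting concrete, here is a minimal sketch of how a Setwise prompt of this shape might be assembled; the wording and label scheme are illustrative assumptions, not the paper's verbatim template from Figure 1d.

```python
# Illustrative Setwise prompt builder; the exact wording and labels used in
# the paper may differ (this is an assumed template, not the original).
def build_setwise_prompt(query: str, docs: list[str]) -> str:
    labels = [chr(ord("A") + i) for i in range(len(docs))]
    passages = "\n".join(f"Passage {l}: {d}" for l, d in zip(labels, docs))
    return (
        f'Given the query "{query}", which of the following passages is '
        f"the most relevant?\n\n{passages}\n\n"
        "Answer with only the passage label."
    )
```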
2310.09497#18
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
18
# 4 VIZABILITY SYSTEM INTERFACE & ARCHITECTURE

Below, we outline the input chart format for VizAbility, explain how VizAbility facilitates keyboard navigation and conversational interaction with the chart, and address additional accessibility considerations based on the design decisions mentioned earlier.

4.1 Input Chart Format

VizAbility assumes that both the visual encoding information and the underlying dataset are made accessible. In this work, we use a Vega-Lite specification [44] as input to our system, while other specifications such as Observable Plot [4] are easily adaptable.

4.2 Exploring Chart Content using Keyboard

Among the many keyboard navigation methods available, we leverage Olli [10] to make the chart explorable, as it is open-source. Olli accepts a Vega-Lite spec and renders a visual chart for sighted users as well as a keyboard-navigable text representation (Figure 2).
2310.09611#18
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
19
With our prompt, sorting-based Pairwise approaches can be considerably accelerated. This is because the heap sort and bubble sort algorithms originally used in the Pairwise approach compare only a pair of documents at each step of the sorting process, as illustrated in Figures 2a and 2c. These sorting algorithms can be sped up by comparing more than two documents at each step. For example, in the heap sort algorithm, the "heapify" function must be invoked for each subtree, where the parent node is swapped with the child node of highest value whenever that value exceeds the parent's. In the case of Figure 2a, performing "heapify" with Pairwise prompting requires a minimum of 6 comparisons (each root node paired with each child node). Conversely, if we increase the number of child nodes in each subtree to 3 and can compare 4 nodes at a time, only 2 comparisons are needed to "heapify" a tree with 9 nodes, as illustrated in Figure 2b. Similarly, for the bubble sort algorithm, if we can compare more than a pair of documents at a time, each "bubbling" process will be accelerated. For instance, in
2310.09497#19
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
19
Olli's tree view displays the chart content in a hierarchical structure, starting with the chart type description at the root—A bar chart. With axes Year and Temperature Anomaly (°C)—followed by visual encoding channels such as axes and legends—Legend titled Temporal Polarity. For a nominal scale. With 2 values from negative to positive. Within each encoding channel node, Olli lists data categories or numerical ranges depending on the data type being encoded; e.g., for a color legend, it lists all categories in the legend—1 of 2. Temporal Polarity equals negative. 101 values. Press t to open table. Individual data points reside in these group nodes. All four chart types we used in this work, including line chart, bar chart, scatter plot, and choropleth map, had four levels of information granularity. A user first needs to enter the tree view to explore the content. Based on its hierarchical structure, users can navigate the different levels of the tree view using up and down arrow keys (bar chart → legend → negative polarity) while using left and right arrow keys
2310.09611#19
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
20
bubble sort algorithm, if we can compare more than a pair of documents at a time, each "bubbling" process will be accelerated. For instance, in Figure 2c there are 4 comparisons in total, but in Figure 2d, with the ability to compare 3 documents at once, only 2 comparisons are required to bring the node with the largest value to the top. Our Setwise prompting is designed to instruct LLMs to compare the relevance of multiple documents at a time, making it well suited for this purpose.
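As a concrete illustration of the accelerated "heapify" step, the sketch below runs sift-down over a c-ary heap using one set comparison per subtree. The `pick_best` oracle is a hypothetical stand-in for a single Setwise LLM call that returns the most relevant candidate among a parent and its children.

```python
# Sketch: c-ary heap "sift down" where one Setwise LLM call compares the
# parent with all of its children at once. `pick_best(indices)` is a
# hypothetical oracle returning the index holding the most relevant document.
def sift_down(heap, i, branching, pick_best):
    n = len(heap)
    while True:
        first = branching * i + 1
        children = list(range(first, min(first + branching, n)))
        if not children:
            return
        best = pick_best([i] + children)  # one Setwise comparison per subtree
        if best == i:
            return
        heap[i], heap[best] = heap[best], heap[i]
        i = best  # continue sifting down the swapped subtree

def heapify(heap, branching, pick_best):
    # Sift down every internal node, from the last parent back to the root.
    for i in range((len(heap) - 2) // branching, -1, -1):
        sift_down(heap, i, branching, pick_best)
```

With branching = 3, "heapifying" a 9-node tree takes 2 oracle calls, matching the Figure 2b example.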
2310.09497#20
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
20
to navigate sibling nodes in the level (negative polarity → positive polarity). In order to access individual data points, Olli requires users to press t to open up a screen-reader-compatible data table. This table shows a subset of the whole data, displaying only data points within the selected category or numerical range. The current version of Olli does not support navigating a choropleth map by geographic regions. We extended it to support the level-of-detail channel in Vega-Lite. As a result, we can encode country names or state names into the detail channel, which is in turn converted into an additional encoding channel node (see Figure 2).

# 4.3 Rapid Chart Probing via Conversational Interaction

The keyboard navigation of the chart content can convey a clear picture of how a chart looks to blind users [33]. However, it can also be cumbersome to navigate individual nodes in the tree view or derive aggregate measures on the go. To address this challenge, we integrate speech-based interaction in which users can ask natural language questions as needed. Leveraging the question-answering capabilities of Large Language Models (LLMs), we detail our incorporation of LLMs into our accessible data visualization system. We outline the supported query types and how we seamlessly merge keyboard and speech inputs to enhance the chart experience.
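A minimal toy model of the arrow-key navigation described above; the node structure and key names are illustrative, not Olli's actual implementation.

```python
# Toy model of hierarchical tree-view navigation (illustrative, not Olli's API).
class Node:
    def __init__(self, label, children=()):
        self.label, self.parent = label, None
        self.children = list(children)
        for c in self.children:
            c.parent = self

def navigate(node, key):
    """Return the node reached by an arrow key, staying put at boundaries."""
    if key == "down" and node.children:
        return node.children[0]                # finer level of detail
    if key == "up" and node.parent is not None:
        return node.parent                     # coarser level of detail
    if key in ("left", "right") and node.parent is not None:
        sibs = node.parent.children
        j = sibs.index(node) + (1 if key == "right" else -1)
        if 0 <= j < len(sibs):
            return sibs[j]                     # previous/next sibling
    return node
```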
2310.09611#20
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
21
3.3 Listwise Likelihoods with Setwise

Our Setwise prompting can also accelerate the ranking process for the Listwise approach. The original Listwise method relies on the LLM's next-token generation to produce the complete ordered list of document labels at each step of the sliding window process, as illustrated in Figure 1b. As discussed, generating the document label list is computationally intensive, because the LLM must perform one inference for each next-token prediction. Moreover, the LLM may generate results in an unexpected format or even decline to generate the desired document label list [20], thus harming effectiveness. Fortunately, if we have access to the LLM's output logits, these issues can be avoided by evaluating the likelihood of generating every conceivable document label list and then selecting the most probable one as the output. Regrettably, this is only possible in theory: in practice it is unfeasible for the existing Listwise approach due to the very large number of possible document label permutations, which implies that the process of likelihood checking may actually become even more time-consuming than generating the list itself.
2310.09497#21
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
21
4.3.1 Data Set. We utilized a prior study's data set, comprising 979 BLV user questions spanning four visual stimuli (bar, line, scatter, and map), for the development and quantitative evaluation of VizAbility. These questions were gathered through a wizard-of-oz study, where a human facilitator acted as a question-answering system. We reconstructed the visualization images into Vega-Lite specifications and partitioned the questions into analytical, visual, and contextual queries. We then partitioned the pool of questions once more into an 80/20 split between the testing and validation sets via stratified random sampling, so that there is proportionate representation of each query type in both sets.
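The 80/20 stratified split described above could be reproduced along these lines with scikit-learn; the variable names are hypothetical stand-ins for the dataset fields.

```python
# Sketch of the stratified 80/20 testing/validation split (assumed field names).
from sklearn.model_selection import train_test_split

test_questions, val_questions = train_test_split(
    questions,                # list of BLV user questions (hypothetical variable)
    test_size=0.2,            # 20% validation, 80% testing
    stratify=query_types,     # keep query-type proportions in both splits
    random_state=0,
)
```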
2310.09611#21
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
22
Setwise prompting again provides a solution: we can easily derive an ordered list of document labels from the LLM output logits. This is done by assessing the likelihood of each document label being chosen as the most relevant, as shown in Figure 1d. This straightforward trick markedly accelerates Listwise ranking, as it requires only a single forward pass of the LLM, and it also guarantees that the output matches the desired document label list.

3.4 Advantages of Setwise

We summarize and compare the key properties of existing zero-shot LLM ranking approaches, along with our proposed Setwise prompting approach, in Table 1. Notably, pointwise.qlm, pointwise.yes_no and pairwise.allpair require a brute-force of LLM
2310.09497#22
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
22
The ground truths for the testing and validation sets were generated manually. Each user query within the data set has an accompanying ground truth classification, expressed as either "Analytical Query", "Visual Query", or "Contextual Query", as well as a ground truth for the query response, for which we emphasized verboseness. For instance, the ground truth response to the question "What is the vaccination rate of South Africa?" is "The vaccination rate for South Africa is 36%", as opposed to the more concise "36%". This enables us to evaluate both the quantitative and qualitative aspects of the response yielded by VizAbility.

4.3.2 Supported Query Types. Analytical queries primarily focus on understanding the underlying data, such as "Is Africa the country that needs the vaccine the most?" or "What is the highest positive anomaly?" Visual queries relate to visual encoding information or demand visual interpretation, exemplified by questions like "What color is North America?" or "Is the line fluctuating?" Analytical and visual queries are not entirely distinct; visual queries often necessitate data interpretation, as in "Which country exhibits the darkest shades for both the lowest and highest values?".
2310.09611#22
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
23
Table 1: Properties of different methods. Logits: requires access to the LLM's logits. Generate: only requires generating tokens. Batching: allows batch inference. Top-k: allows early stopping once the top-k most relevant documents are found. # LLM calls: the number of LLM forward passes needed in the worst case. (N: number of documents to re-rank. r: number of repeats. s: step size for sliding window. k: number of top-k relevant documents to find. c: number of compared documents at each step.)
2310.09497#23
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09497
24
| Methods             | Logits | Generate | Batching | Top-k | # LLM calls    |
|---------------------|:------:|:--------:|:--------:|:-----:|----------------|
| pointwise.qlm       | ✓      |          | ✓        |       | O(N)           |
| pointwise.yes_no    | ✓      |          | ✓        |       | O(N)           |
| listwise.generation |        | ✓        |          | ✓     | O(r·(N/s))     |
| listwise.likelihood | ✓      |          |          | ✓     | O(r·(N/s))     |
| pairwise.allpair    | ✓      | ✓        | ✓        |       | O(N² − N)      |
| pairwise.heapsort   | ✓      | ✓        |          | ✓     | O(k·log₂ N)    |
| pairwise.bubblesort | ✓      | ✓        |          | ✓     | O(k·N)         |
| setwise.heapsort    | ✓      | ✓        |          | ✓     | O(k·log_c N)   |
| setwise.bubblesort  | ✓      | ✓        |          | ✓     | O(k·(N/(c−1))) |
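The worst-case counts in the last column are plain arithmetic; the sketch below evaluates them, with N and k matching the experimental setup reported later (re-rank 100 candidates, top-10), while r, s, and c are illustrative placeholders for the repeats, window step, and set size.

```python
# Sketch: worst-case LLM forward passes per method, from Table 1's formulas.
# N, k follow the paper's experiments; r, s, c defaults are placeholders.
import math

def llm_calls(method, N=100, r=1, s=10, k=10, c=4):
    formulas = {
        "pointwise": N,
        "listwise": r * (N / s),
        "pairwise.allpair": N**2 - N,
        "pairwise.heapsort": k * math.log2(N),
        "pairwise.bubblesort": k * N,
        "setwise.heapsort": k * math.log(N, c),
        "setwise.bubblesort": k * (N / (c - 1)),
    }
    return formulas[method]
```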
2310.09497#24
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
24
[Figure 2 screenshot: Olli's tree-view keyboard navigation and its pop-up table view for a geographic map of percent_fully_vaccinated by country; tree nodes read, e.g., "2 of 10. percent_fully_vaccinated is between 10 and 20. 12 values. Press t to open table.", and the table view lists countries with their vaccination percentages.]
2310.09611#24
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
25
inference for all available documents' relevance or preferences. Thus, they are unable to facilitate early stopping for the top-k ranking. However, these approaches do allow batch inference, hence maximum GPU memory utilization can easily be achieved by using the largest batch size. On the other hand, the remaining approaches use sorting algorithms, enabling early stopping once the top-k most relevant documents are identified. However, this compromises the feasibility of batched inference, as the LLM inference at each step of the sorting algorithm relies on the results from the preceding step. Our Setwise prompting empowers the previous Listwise approach (listwise.generation), which relied on the LLM's next-token generation, to now utilize the LLM's output logits. We refer to the Listwise approach that incorporates our Setwise prompt as listwise.likelihood. Finally, compared with Pairwise approaches, our Setwise prompting requires fewer LLM calls by comparing a minimum of c ≥ 3 documents at each step of the sorting algorithms.
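A sketch of the listwise.likelihood trick, under the assumption of a Flan-T5-style seq2seq model with accessible logits; the model name, single-token label assumption, and helper names are illustrative rather than the paper's exact implementation.

```python
# Sketch: rank document labels by the likelihood of being generated as the
# "most relevant" answer, using one forward pass over the Setwise prompt.
# Assumes single-token labels ("A", "B", ...) and a seq2seq LLM like Flan-T5.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

def rank_by_likelihood(prompt: str, labels: list[str]) -> list[str]:
    enc = tok(prompt, return_tensors="pt")
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        out = model(**enc, decoder_input_ids=start)
    logits = out.logits[0, -1]  # distribution over the first generated token
    scores = {l: logits[tok(l, add_special_tokens=False).input_ids[0]].item()
              for l in labels}
    return sorted(labels, key=scores.__getitem__, reverse=True)
```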
2310.09497#25
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
25
[Figure 2 screenshot, continued: further tree-view nodes (e.g., "4 of 10. percent_fully_vaccinated is between 30 and 40. 19 values. Press t to open table.") and table-view rows listing countries such as Algeria, Cameroon, Gabon, Burkina Faso, and Togo with their vaccination percentages.]
2310.09611#25
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
26
4 EXPERIMENTS

4.1 Datasets and evaluations

The first objective of this study is to contribute a fair and comprehensive evaluation of existing LLM-based zero-shot ranking methods in terms of ranking effectiveness and efficiency. To achieve this goal, we carried out extensive empirical evaluations using well-established document ranking datasets: TREC Deep Learning 2019 [5] and 2020 [4], along with the BEIR benchmark datasets [21]. To guarantee a fair comparison across different approaches, we tested all of the methods using the same open-source Flan-t5 LLMs [26], available on the Huggingface model hub in various sizes (780M, 3B, and 11B parameters). All LLM methods were used to re-rank 100 documents retrieved by a BM25 first-stage retriever. In order to optimize efficiency, the focus was on a top-k ranking task, whereby the re-ranking process stopped as soon as the top-k most relevant documents were identified and ranked. Here, we set k = 10. The effectiveness of the different approaches was evaluated using the NDCG@10 metric, which serves as the official evaluation metric for the employed datasets. Efficiency was evaluated with the following metrics:
2310.09497#26
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
26
Figure 2: An example of a user's keyboard traversal of the Olli Tree. Users can widen/narrow the scope of the text via the up/down arrow keys (respectively), and otherwise navigate between sibling nodes using left/right arrow keys. To access individual data, users can press the 't' key to view a snapshot data table.

| Query type | Line Chart | Bar Chart | Scatter Plot | Map | Total |
|------------|-----------:|----------:|-------------:|----:|------:|
| Analytical | 117        | 137       | 196          | 155 | 605   |
| Visual     | 36         | 21        | 37           | 32  | 126   |
| Contextual | 8          | 8         | 21           | 9   | 46    |
| Navigation | N/A        | N/A       | N/A          | N/A | N/A   |
| Total      | 161        | 166       | 254          | 196 | 777   |

Table 1: Testing data distribution amongst the four query classifications

Contextual questions seek information not directly present on the chart but require ancillary knowledge related to it. For instance, some questions aim to understand the chart's encoding, like "What is a scatterplot?" or "What does 'positive temperature anomaly' mean?" Others ask about context related to the data, such as "Where is Palestine?" or "Why does the data start in 1880? What occurred then?" Additionally, there are inquiries about the data's origin, exemplified by "What is the source of this information?" or "From where was this data obtained?"
2310.09611#26
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
27
Efficiency was evaluated with the following metrics:

• The average number of LLM inferences per query. LLMs have limited input length; thus, to re-rank 100 documents, multiple LLM inferences are often needed. An increased number of LLM inferences translates to higher computational demands, so we regard this as an efficiency metric worth considering.

• The average number of prompt tokens inputted to the LLMs per query. This metric takes into account the actual average quantity of input tokens required in the prompts for each method to re-rank 100 documents per query. Given that self-attention mechanisms in transformer-based LLMs become prohibitively costly for a large number of input tokens [24], an increase in tokens within the prompts also translates to higher computational demands. Notably, numerous LLM web API services, including the OpenAI APIs, charge based on the number of input tokens in the API calls. As such, we deem this metric valuable in assessing efficiency.
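These per-query counts can be tallied along the following lines; the tokenizer choice mirrors the Flan-T5 models used in the experiments, and the helper function is illustrative.

```python
# Sketch: tallying the per-query efficiency metrics described above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")

def efficiency_metrics(prompts: list[str], generations: list[str]) -> dict:
    return {
        "num_llm_inferences": len(prompts),
        "num_prompt_tokens": sum(len(tok(p).input_ids) for p in prompts),
        "num_generated_tokens": sum(len(tok(g).input_ids) for g in generations),
    }
```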
2310.09497#27
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
27
Navigation queries are a category we introduced to enhance the user experience. These queries are tailored to the synergy between keyboard navigation and conversational interaction. For instance, to reduce cumbersome keyboard navigation and assist users in orientation, questions such as "How can I get to the X-axis?" (direction) or "Where am I?" (orientation) can be beneficial. Our motivation for this stems from a previous empirical study [33], where blind individuals highlighted such challenges with Olli's tree view.

4.3.3 Query Classification. First, we aim to classify user queries based on this categorization rather than diving straight into responses. Once classified, we proceed to address each type of query in the subsequent phase (see the next section). This task division provides the LLM with a well-defined task and has been proven to increase its performance [52]. Figure 3 shows our few-shot prompting approach. In the prompt, we provide a clear definition for each query type. To bolster the definition, we accompany each with four exemplar questions.
2310.09611#27
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
28
• The average number of generated tokens outputted by LLMs per query. Much like the assessment of average prompt tokens, this metric provides an evaluation of computational efficiency, but from a token-generation perspective. Instead of focusing on the number of tokens in the prompt, it takes into account the number of tokens generated. This is particularly significant because transformer-based generative LLMs produce content token by token, with each subsequent token relying on the generation of preceding ones. Consequently, an increase in the number of generated tokens leads to a corresponding increase in computational cost, as each additional generated token implies another LLM forward inference. In fact, OpenAI applies a pricing structure wherein the cost for generated tokens is twice that of prompt tokens for their LLM APIs. This underscores the substantial impact that generated tokens can have on computational expenses.
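As a back-of-envelope illustration of that pricing asymmetry (the rate constant is a placeholder, not OpenAI's actual price):

```python
# Sketch: generated tokens billed at twice the prompt-token rate (placeholder rate).
PROMPT_RATE = 0.001            # cost units per token (illustrative)
OUTPUT_RATE = 2 * PROMPT_RATE  # generated tokens cost twice as much

def query_cost(num_prompt_tokens: int, num_generated_tokens: int) -> float:
    return num_prompt_tokens * PROMPT_RATE + num_generated_tokens * OUTPUT_RATE
```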
2310.09497#28
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
28
query type. To bolster the definition, we accompany each with four exemplar questions. These examples are sourced from our validation set, chosen based on their close alignment with the user query. Specifically, for each query type and the given user query, we sift through the validation set to pinpoint the four most analogous queries. These are then incorporated as representative examples for each query definition within the prompt. For this endeavor, we used sentence transformers to generate text embeddings and then applied cosine similarity to these embeddings to identify the most closely aligned examples. This method offers greater precision compared to arbitrarily selecting samples for each query type. We constrain the range of LLM responses by explicitly instructing it to output either: “Analytical Query”, “Visual Query”, “Contextual Query”, or “Navigation Query”. To thwart any potential hallucinations from the LLM, we provide an accessible escape route by instructing the model to return “I am sorry. I am unable to answer this question” when confronted with a question that does not immediately conform to any of the specified query types. Without such a safeguard, GPT frequently generates technical jargon and error messages that can deter users. 4.3.4 Query-Specific Prompting. The answering pipeline diverges into three unique paths, depending on the query type.
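A minimal sketch of the example-selection step, assuming the sentence-transformers library; the embedding model name is an assumption, as the paper does not specify one.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

def top_k_examples(user_query: str, validation_queries: list[str], k: int = 4):
    """Return the k validation-set queries most similar to the user query."""
    query_emb = model.encode(user_query, convert_to_tensor=True)
    example_embs = model.encode(validation_queries, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, example_embs)[0]      # cosine similarities
    best = scores.topk(k=min(k, len(validation_queries)))  # highest-scoring indices
    return [validation_queries[i] for i in best.indices.tolist()]

# The four returned examples are then inlined under the matching
# query-type definition in the classification prompt.
```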
2310.09611#28
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
29
• The average query latency. We evaluate the run-time efficiency of all the methods with average query latency. To conduct this assessment, a single GPU is employed, and queries are issued one at a time. The per-query latency is then averaged across all the queries in the dataset. It is important to highlight that for methods that support batching we always employ the maximum batch size to optimize GPU memory usage and parallel computation, thus maximizing efficiency for these particular methods. This approach ensures that the evaluation is conducted under conditions most favourable for efficiency gains. It is important to acknowledge that while other methods may not be able to use the batching strategy for individual queries, they do have the capability to utilize batching and parallel computing across various user queries in real-world scenarios. However, this lies more in engineering effort and falls outside the scope of this paper; as such, we do not investigate this perspective. 4.2 Implementation details To establish the initial BM25 first-stage ranking for all datasets, we employed the Pyserini Python library [11] with default settings. For LLM-based zero-shot re-rankers, we followed the prompts recommended in existing literature to guide Flan-t5 models of varying
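A minimal sketch of the latency measurement described above, where `rerank` is a stand-in for one method's full re-ranking pass over a query's candidates.

```python
import time

def average_query_latency(rerank, queries) -> float:
    """Issue queries one at a time and average the wall-clock time per query."""
    total = 0.0
    for q in queries:
        start = time.perf_counter()
        rerank(q)                          # one full re-ranking pass for this query
        total += time.perf_counter() - start
    return total / len(queries)
```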
2310.09497#29
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
30
[Figure: the query classification prompt, shown for a chart titled “The number of homes for sale nationally has plummeted”. A validation dataset of example queries accompanies each query-type definition. Analytical Queries involve any possible lookup operations, computations, or analysis involving data (e.g., “What is the average number of homes for sale from 2018 to 2021?”, “What was the percentage increase or decrease in the average number of houses on sale between 2015 and 2017?”, “How many houses were sold in 2017?”, “What is the average amount of houses sold?”). Visual Queries involve references to visual cues such as color or graph shape/characteristics (e.g., “Is column two showing houses for sale?”, “Is the picture (chart) in between 50 to 80?”, “What countries are in brighter range?”, “Does each circle give the specific number of population like the table or just the size of the circle?”). The classification prompt instructs the model to classify the user query (e.g., “What's the average number of homes for sale between 2017 and 2020?”) into one of the four categories.]
2310.09611#30
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
31
Specifically, for the pointwise.qlm method, we adopted the prompt suggested by Sachan et al. [19]. For pointwise.yes_no, we use the prompt provided by Qin et al. [18]. For listwise.generate, we utilized the prompt designed by Sun et al. [20]. As for pairwise.allpair, pairwise.heapsort, and pairwise.bubblesort, we relied on the prompts from the original paper by Qin et al. [18]. For methods leveraging our Setwise prompting (i.e. listwise.likelihood, setwise.heapsort, and setwise.bubblesort), we employed the prompts detailed in Section 3. In the case of Listwise approaches, we configure the window size (𝑤) to contain 4 documents, each capped at a maximum of 100 tokens. The step size (𝑠) is set to 2, and the number of repetitions (𝑟) is set to 5. These settings take into account the token limitations imposed by Flan-t5 models, which have an input token cap of 512. A window size of 4 documents appears reasonable as it aligns well with the prompt capacity. Additionally, a step size of 2, combined with 5
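A minimal sketch of the sliding-window Listwise procedure under these settings; `llm_sort_window` is a stand-in for the LLM call that reorders one window of documents.

```python
def listwise_rerank(docs, llm_sort_window, w=4, s=2, r=5):
    """Slide a window of w documents bottom-up in steps of s, repeated r times."""
    docs = list(docs)
    for _ in range(r):                  # r passes over the current ranking
        i = len(docs) - w
        while i >= 0:
            docs[i:i + w] = llm_sort_window(docs[i:i + w])  # LLM reorders the window
            i -= s                      # slide the window up by the step size
    return docs
```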
2310.09497#31
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
31
[Figure (continued): the classification prompt asks the model to classify the user query into one of the four categories, with the instruction “Refer back to the examples above to help classify the question”. Navigation Queries involve questions relating to location within the Olli navigation table, usually of the form “How do I get from () to ()” (e.g., “Does this one that I'm on have a comma in it?”). Contextual Queries involve broad questions that do not necessarily need the graph's specific data to be answered (e.g., “Does the system know the causes for the sharp decrease in home sales?”, “When was this data collected?”, “Do you have information on the percent of people who received two doses of a vaccine other than Covid?”, “What is meant by upper range?”). Cosine similarity scores between the user query and validation-set queries are computed to extract the four most aligned queries per type.]
2310.09611#31
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
32
an input token cap of 512. A window size of 4 documents appears reasonable as it aligns well with the prompt capacity. Additionally, a step size of 2, combined with 5 repetitions, is theoretically guaranteed to bring the 10 most relevant documents to the top. For our Setwise approaches, we set the number of compared documents 𝑐 in each step to 3 for the main results. We further investigate the impact of 𝑐 in Section 5.4. For all other methods, we truncate the documents to a maximum of 128 tokens.
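A minimal sketch of one Setwise bubble-sort pass under these settings; `llm_pick_best` is a stand-in for the Setwise prompt that asks the LLM which of the 𝑐 compared documents is most relevant.

```python
def setwise_bubblesort_pass(docs, llm_pick_best, c=3):
    """One bottom-up pass: compare sets of c documents at a time; the most
    relevant document in each set bubbles upward into the next comparison."""
    i = len(docs) - 1
    while i > 0:
        lo = max(0, i - (c - 1))
        window = docs[lo:i + 1]          # up to c candidate documents
        best = llm_pick_best(window)     # index of the most relevant in the window
        docs[lo], docs[lo + best] = docs[lo + best], docs[lo]  # promote the winner
        i = lo                           # the winner joins the next comparison
    return docs
```

Because each LLM call adjudicates 𝑐 documents rather than a pair, a pass needs roughly (𝑛−1)/(𝑐−1) inferences instead of 𝑛−1, which is the efficiency gain the approach targets.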
2310.09497#32
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09497
33
We note that, among all the methods capable of utilizing both model output logits and generation outputs, we exclusively employ the latter. This choice is made in favor of a more general approach that allows for leveraging generation APIs across a wider range of closed-source LLMs. Nevertheless, we investigate the difference between using model output logits and generation outputs for our Setwise approaches in Section 5.1. We carried out the efficiency evaluations on a local GPU workstation equipped with an AMD Ryzen Threadripper PRO 3955WX 16-Core CPU, an NVIDIA RTX A6000 GPU with 49GB of memory, and 128GB of DDR4 RAM. 5 RESULTS AND ANALYSIS 5.1 Effectiveness Results Table 2 presents results for both ranking effectiveness and efficiency on TREC DL datasets.
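A minimal sketch of the generation-output route for a Setwise comparison; the labels and parsing are illustrative assumptions, not the paper's prompt format.

```python
# Candidate documents are assumed to be tagged "A", "B", "C" in the prompt.
LABELS = ["A", "B", "C"]

def parse_choice(generated_text: str) -> int:
    """Map the LLM's free-form answer back to a candidate index."""
    head = generated_text.strip()[:3]      # the answer is expected to lead with a label
    for i, label in enumerate(LABELS):
        if label in head:
            return i
    return 0  # fall back to the first candidate if the output is malformed

# With logits access one would instead compare the scores of the label tokens
# directly, avoiding text parsing — but that requires an open model.
```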
2310.09497#33
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
33
[Figure: the prompt pipeline for analytical and visual queries. The LLM prompt assembles (1) the chart's transformed data (e.g., entity, year, life expectancy at birth, GDP per capita, population), (2) color encodings (e.g., “Orange data points represent countries in Asia”), (3) a textual context description (“A scatterplot showing life expectancy at birth and GDP per capita for countries around the world for each year from 1950 to 2018”), (4) the chart text representation with the user's cursor location (e.g., “1.3.2 // 2 of 6. Continent equals Europe. 25 values”; the tree view instructs: press Enter to explore, arrow keys to navigate, Escape to exit), and (5) formatting instructions (commas, dollar signs, appropriate rounding) before answering the user query, e.g., “What do orange data points mean?”]
2310.09611#33
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
34
5 RESULTS AND ANALYSIS 5.1 Effectiveness Results Table 2 presents results for both ranking effectiveness and efficiency on TREC DL datasets. In regards to ranking effectiveness, it is notable that all LLM-based zero-shot ranking approaches demonstrate a significant improvement over the initial BM25 ranking. The only exception to this trend is the pointwise.qlm approach on DL2019 across all models and DL2020 with the Flan-t5-xxl model. Interestingly, as the LLM size increases, the effectiveness of pointwise.qlm decreases. This finding is particularly unexpected, given the common assumption that larger LLMs tend to be more effective. On the other hand, the pointwise.yes_no method achieved a decent NDCG@10 score with Flan-t5-large when compared to other methods. However, effectiveness also did not increase as model size increased. These unexpected results for both Pointwise methods might be attributed to the requirement of a more refined model output calibration process, ensuring their suitability for comparison and sorting across different documents [18]. The Listwise approaches (listwise.generation) are far less effective when tested with Flan-t5-large and Flan-t5-xl. 1https://openai.com/pricing, last visited 12 October 2023.
2310.09497#34
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
34
Figure 4: Query-specific evaluation for Analytical and Visual queries. We parse the chart's transformed data set and aggregate color encoding within a CSV file, which we then supply to an LLM via a CSV agent. For further context, we also populate the prompt with the user's active position within the Olli Tree, in addition to a text representation of the Tree itself. Analytical & Visual Queries. Figure 5 illustrates our approach to handling analytical and visual queries. To circumvent the predefined token limit of the LLM, we consolidate the transformed data extracted from the Vega View [5] into an external CSV file. This file is then processed by LangChain's CSV Agent [2], which operates in the background. Under the hood, this agent leverages the Pandas DataFrame agent, subsequently executing Python code generated by the LLM. We purposefully avoid including the entire raw dataset, recognizing that it might differ from the final view data. Often, the agent can get stuck in an infinite loop of thinking. To prevent this, we have implemented a time constraint; if this time limit is exceeded, VizAbility will display the message: “Answer: I'm sorry, but the process has been terminated because it took too long to arrive at an answer.”
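A minimal sketch of this path, assuming LangChain's experimental CSV agent and an OpenAI chat model; the module paths, agent API, and thread-based timeout are assumptions rather than the paper's exact implementation.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as AgentTimeout

from langchain_openai import ChatOpenAI
from langchain_experimental.agents import create_csv_agent

# "chart_view_data.csv" is a hypothetical export of the Vega View's transformed data.
agent = create_csv_agent(ChatOpenAI(temperature=0), "chart_view_data.csv")

def answer_data_query(question: str, timeout_s: float = 30.0) -> str:
    """Run the CSV agent with a hard time limit to break 'thinking' loops."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(agent.run, question)
        try:
            return future.result(timeout=timeout_s)
        except AgentTimeout:
            return ("Answer: I'm sorry, but the process has been terminated "
                    "because it took too long to arrive at an answer.")
```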
2310.09611#34
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
35
Table 2: Results on TREC DL. All the methods re-rank BM25 top 100 documents. We present the ranking effectiveness in terms of NDCG@10, best values highlighted in boldface. Superscripts denote statistically significant improvements (paired Student's t-test with 𝑝 ≤ 0.05 with Bonferroni correction). #Inferences denotes the average number of LLM inferences per query. Pro. Tokens is the average number of tokens in the prompt for each query. Gen. tokens is the average number of generated tokens per query. Latency is the average query latency, in seconds.
2310.09497#35
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
35
While the CSV agent can handle most data-related queries, it is not aware of any visual encoding information of the chart. To address visual queries, we extract color information directly from the Vega View [5] and incorporate it as an additional column within the CSV file. This modification ensures that each data point is paired with its corresponding color. Initially, the extracted color data is in hex codes. To enhance user-friendliness, we employ a color-matching algorithm to convert the hex codes into more common English names. This algorithm works by cross-referencing the source hex code with a predefined list of color hex codes and English names [1], ultimately determining the closest matching name based on RGB distance. The color augmentation process enables answering visual questions like “What color is Algeria? What other countries are the color of Algeria?”, as VizAbility responds: “Algeria is orange-red and other countries with the same color are Syria, Iraq, Congo, [...].” Furthermore, the LLM is lenient with user queries and accepts a certain margin of error for color input; e.g., if the user asks about what blue represents, the system can infer that blue refers to steelblue in the map.
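A minimal sketch of the nearest-name lookup; the palette below is a tiny stand-in for the predefined list of color hex codes and English names.

```python
# Hypothetical palette; the real list would cover many named colors.
PALETTE = {"#FF4500": "orange-red", "#4682B4": "steelblue", "#008080": "teal"}

def _rgb(hex_code: str) -> tuple[int, int, int]:
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def closest_color_name(hex_code: str) -> str:
    """Return the palette name whose RGB values are nearest the input color."""
    target = _rgb(hex_code)
    return min(
        PALETTE.items(),
        key=lambda kv: sum((a - b) ** 2 for a, b in zip(_rgb(kv[0]), target)),
    )[1]

print(closest_color_name("#FF5349"))  # -> "orange-red"
```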
2310.09611#35
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
36
[Table 2 fragment: column headers for TREC DL 2019 and TREC DL 2020 (NDCG@10, #Inferences, Pro. tokens, Gen. tokens, Latency(s)) and the method rows, grouped by Flan-t5-large, Flan-t5-xl, and Flan-t5-xxl: BM25, pointwise.qlm, pointwise.yes_no, listwise.generation, listwise.likelihood, pairwise.allpair, pairwise.heapsort, pairwise.bubblesort, setwise.heapsort, setwise.bubblesort.]
2310.09497#36
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
36
To provide further context for the chart, we have integrated a textual representation of the chart generated by Olli directly into the LLM prompt (see Figure 5). This addition has the potential to significantly enhance the performance of visual question-answering. For example, when presented with the question “What does the graph show?”, the system, without the text representation, provided a response like “The graph shows the data from the dataframe, which includes the year, value, temporal polarity, ...”. However, when furnished with the text representation, the LLM responded with a more comprehensive and human-friendly answer: “The graph shows the temporal polarity of the temperature anomaly (in degrees Celsius) from 1850 to 2021...”, illustrating the substantial improvement in response quality. interpreted as involving navigation, but either no starting/ending point was provided, or the tree view was not activated. Please try again.”
2310.09611#36
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
37
[Table 2 fragment: NDCG@10 values with statistical-significance superscripts for the methods and Flan-t5 model sizes listed above.]
2310.09497#37
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
37
interpreted as involving navigation, but either no starting/ending point was provided, or the tree view was not activated. Please try again.” Once the starting and ending points have been identified, we employ a breadth-first search algorithm that returns string instructions of the shortest path, which users can then manually follow at their own discretion. We opted for this approach as opposed to automatically moving the user to their desired ending point with the rationale that autonomy and transparency are crucial for our intended audience. # 4.4 Other Accessibility and Usability Considerations Moreover, we supplement it with the user's current position within the tree view, tracked via the user's keyboard movements. This feature can help address potentially ambiguous questions. For instance, a user might ask, "What's an average?" with the intention of inquiring about the average within a category where their cursor is located. We also ensure that the responses are properly formatted with commas and special characters so that they are optimized for screen reader interpretation (e.g., 468297 → 468,297).
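A minimal sketch of the wayfinding step, assuming each tree-view node is keyed by a unique address and linked to the neighbours reachable with one arrow-key press.

```python
from collections import deque

def shortest_path(neighbours: dict[str, list[str]], start: str, goal: str):
    """Breadth-first search returning the node addresses along the shortest path."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:      # walk back through recorded predecessors
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in neighbours.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable from start

# The returned address sequence is then rendered as step-by-step arrow-key
# instructions that the user follows manually, preserving autonomy.
```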
2310.09611#37
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
38
[Table 2 fragment: the tail of the NDCG@10 column, followed by per-query #Inferences, prompt-token, and generated-token counts for the methods listed above.]
2310.09497#38
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
38
Contextual Queries. To address contextual queries that do not necessitate a deep understanding of the chart or its data, we have incorporated a Web Browser agent [3] to retrieve more general information relevant to chart comprehension. For example, when presented with the contextual query, “What do you mean by temperature anomalies,” the LLM responds with, “Temperature anomalies are any measure of temperatures that are unusual for a particular region, season, or time period. [...]” Categorizing questions beforehand enabled us to streamline the process and eliminate unnecessary, resource-intensive prompts needed for analytical and visual queries. Previous research [33, 55] highlights that data tables are a highly familiar and well-supported technology among blind individuals. In this context, VizAbility offers users the flexibility to seamlessly switch between the tree view and a conventional raw data table view. While the tree view facilitates structured exploration based on visual encoding, the data table provides additional advantages like sorting features, enabling users to quickly access specific data values and patterns. We disable navigation queries in the data table mode.
2310.09611#38
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
39
[Table 2 fragment: generated-token counts and per-query latencies, followed by the first NDCG@10 values with significance superscripts for TREC DL 2020.]
2310.09497#39
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
39
Users can submit conversational queries via voice recordings that are processed via the Whisper speech recognition model [6]. However, oftentimes, enabling microphones can be problematic; thus, we provide an alternative text box so that they can type queries using the keyboard. Upon inputting their question (regardless of the modality), users are provided with an audible cue of “Loading. Please Wait”. Every subsequent 3 seconds, the user is exposed to yet another audible cue, this time “Still Loading”. This loading cue significantly improves transparency and mitigates any possible confusion that can arise from an unresponsive webpage. Navigation Queries. We seek to integrate users' keyboard navigation with the conversational module via navigation queries. VizAbility currently supports two types of navigation queries: (a) wayfinding questions, in which, upon being provided a starting and ending point within the tree view, the model returns a series of directions dictating the shortest traversal; and (b) orientation questions, in which VizAbility returns the user's current location within the tree view.
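A minimal sketch of the voice path and the loading cues, assuming the OpenAI Python SDK's Whisper transcription endpoint; `announce` stands in for the screen-reader announcement hook.

```python
import threading
from openai import OpenAI

client = OpenAI()

def transcribe(audio_path: str) -> str:
    """Turn a recorded voice query into text via the Whisper API."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def with_loading_cues(run_pipeline, announce):
    """Announce 'Loading' once, then 'Still Loading' every 3 s until done."""
    announce("Loading. Please Wait")
    done = threading.Event()

    def cue():
        while not done.wait(3.0):        # fires every 3 seconds until finished
            announce("Still Loading")

    threading.Thread(target=cue, daemon=True).start()
    try:
        return run_pipeline()
    finally:
        done.set()
```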
2310.09611#39
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conversational interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
40
[Table 2 fragment: NDCG@10 values with significance superscripts, per-query #Inferences, and prompt-token counts for TREC DL 2020.]
2310.09497#40
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
40
To handle navigation queries, we attribute a unique address to each node of the tree view and convey this, along with the user’s current position, to the LLM. Through few-shot prompting, we instruct the LLM to discern the starting point and ending point from the user query. It is crucial that the model grant significant leniency to user queries, as it is highly unlikely that the user will specify the exact starting/ending points verbatim. The few-shot prompting thus primes the LLM to properly interpret the user query. For example, in response to the query “Take me to Haiti” (related to the choropleth map), the LLM comprehends the query’s context and correctly deduces that the absence of an explicit starting node means the user intends to initiate navigation from their current location. On the other hand, VizAbility can easily infer the ending point, which is the node titled: “3 of 180. Country equals Haiti. 1 value. Press t to open table.” If the model cannot discern any starting or ending point, it yields: “The question was
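A sketch of what such a few-shot extraction prompt could look like follows; the exemplars, output format, and parsing helper are our illustrative assumptions rather than VizAbility's actual prompt.

```python
# Hypothetical few-shot prompt for pulling start/end nodes out of a
# free-form navigation query; "START: CURRENT" encodes an implicit start.
FEW_SHOT_PROMPT = """Extract the starting and ending nodes of the user's
navigation request over the chart's tree view. If no starting node is
stated, answer START: CURRENT.

Q: Take me from the x-axis to the legend.
START: x-axis
END: legend

Q: Take me to Haiti.
START: CURRENT
END: Haiti

Q: {query}
"""

def parse_endpoints(llm_output: str):
    """Parse 'START: ...' / 'END: ...' lines from the model completion."""
    start = end = None
    for line in llm_output.splitlines():
        if line.startswith("START:"):
            start = line.split(":", 1)[1].strip()
        elif line.startswith("END:"):
            end = line.split(":", 1)[1].strip()
    return start, end

print(parse_endpoints("START: CURRENT\nEND: Haiti"))  # ('CURRENT', 'Haiti')
```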
2310.09611#40
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
41
[Table 2 fragment, continued: prompt-token counts, generated-token counts, and query latencies in seconds for each method across the Flan-T5 model sizes; the full table appears in the source.]
2310.09497#41
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
41
VizAbility does not solely display the answer; it also provides the user query and a brief justification behind its response in conjunction with the actual answer. For instance, the following is articulated by VizAbility when a user asks, “What is a choropleth map?”: “Your question ’What is a choropleth map?’ was categorized as being context-seeking, and as such, has been answered based on information found on the web.” By letting users know the scope of the answer (i.e., whether it was sourced from the internet, the data, or the tree view), we allow users to independently verify and evaluate the effectiveness of the LLM response, thus bolstering user trust and system transparency. # 5 EVALUATION: Q&A PERFORMANCE BENCHMARK For our quantitative evaluation, we concentrated on validating the question-answering pipeline using the testing dataset. This evaluation comprised two components: assessing the accuracy of query classification and evaluating the correctness of question responses. 5.1 Classification Evaluation We simply compared the classification result of VizAbility to the ground-truth query type. We used a relaxed comparison, allowing for any potential discrepancies in formatting (such as the addition
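A minimal sketch of such a relaxed comparison is shown below, assuming simple whitespace and case normalization; the function name is ours.

```python
# Hypothetical sketch: normalize the LLM's label before comparing it to
# the ground truth, so stray whitespace or newline characters emitted by
# the model do not count as misclassifications.
def relaxed_match(predicted: str, ground_truth: str) -> str:
    normalize = lambda s: " ".join(s.split()).lower()
    if normalize(predicted) == normalize(ground_truth):
        return "Correct Classification"
    return "Incorrect Classification"

print(relaxed_match("analytical \n", "Analytical"))  # Correct Classification
```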
2310.09611#41
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
42
However, listwise.generation shows some improvement with Flan-t5-xxl. These results may be attributed to the fact that generating a ranking list requires fine-grained relevance preferences across multiple documents, a task that may exceed the capabilities of smaller models. In contrast, the listwise.likelihood approach, empowered by our Setwise prompt, markedly enhances the ranking effectiveness of the Listwise approach, even when utilizing smaller models. We acknowledge, however, that listwise.likelihood requires access to the model output logits, whereas listwise.generation does not. The Pairwise and Setwise approaches consistently exhibit good ranking effectiveness across various model sizes and datasets. In Table 3, we present the zero-shot ranking effectiveness of all methods (with the exception of pairwise.allpair, due to its computationally intensive nature) across 9 widely-used BEIR datasets. Notably, we identify several trends that deviate from observations made on the TREC DL datasets. Firstly, pointwise.qlm exhibits a slightly higher average NDCG@10 score compared to pointwise.yes_no. Moreover, the effectiveness of
2310.09497#42
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
42
[Figure residue: the tree view of a scatterplot showing life expectancy at birth versus GDP per capita (X-axis: GDP per capita; Y-axis: Life expectancy at birth (historical); legend: Continent, with values including Asia, Europe, and Africa), annotated with a navigation example. The active node {1.3.2 // 2 of 6. Continent equals Europe. 25 values} serves as the implicit starting point; the ending node “X-Axis” resolves to {1.1 // X-axis titled GDP per capita}; the returned directions are {Press the up arrow key. Press the left arrow key. Press the left arrow key}.]
2310.09611#42
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
43
Firstly, pointwise.qlm exhibits a slightly higher average NDCG@10 score compared to pointwise.yes_no. Moreover, the effectiveness of pointwise.qlm remains stable even as the model size increases. Secondly, listwise.generation demonstrates comparable effectiveness to listwise.likelihood, with the majority of gains obtained on the Touche dataset, where other methods perform worse. Lastly, both Pairwise and Setwise methods that leverage the bubble sort algorithm consistently demonstrate higher average NDCG@10 than when they utilize the heap
2310.09497#43
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
43
[Figure residue, continued: three repeated renderings of the same scatterplot tree view, illustrating the successive arrow-key presses that move the cursor from the active node to the X-axis node {1.1 // X-axis titled GDP per capita}.]
2310.09611#43
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
44
sort algorithm, regardless of the model size. Overall, the variety of results we observe across different experimental settings shows the importance of not drawing conclusions about effectiveness from single datasets or model sizes. 5.2 Efficiency Results Regarding computational and runtime efficiency, the results presented in Table 2 indicate that both Pointwise methods require the fewest inferences, consume the fewest prompt tokens, and generate no tokens. Furthermore, their computational efficiency and query latency are optimized due to efficient GPU-based batched inference. It is worth noting, however, that these methods come with certain limitations. Specifically, they require access to the model output logits (thus currently limiting their use to open-source LLMs) and are less effective when used with larger models. In contrast, pairwise.allpair appears to be the most expensive method, consuming the largest number of prompt tokens and generated tokens due to the large number of document-pair preferences that need to be inferred. Hence, even with GPU batching, pairwise.allpair still has the worst query latency. In contrast, approaches utilizing our Setwise prompting (namely listwise.likelihood, setwise.heapsort, and setwise.bubblesort) are far more efficient than their counterparts listwise.generation, pairwise.heapsort, and pairwise.bubblesort, respectively. Notably, these improvements
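As a rough illustration of why Setwise prompting reduces inference calls, here is a simplified sketch of a Setwise bubble pass in which each LLM call compares c candidates instead of a single pair. `llm_pick_best` is a placeholder for the actual Setwise prompt, and the windowing details are our simplification of the paper's algorithm, not a verbatim reimplementation.

```python
# Simplified sketch: one Setwise bubble pass over a ranking, comparing
# c documents per LLM call rather than 2, which shrinks the number of
# calls needed to move the most relevant document toward the top.
def llm_pick_best(query: str, docs: list) -> int:
    """Placeholder: return the index of the most relevant doc in `docs`.
    A real system would issue the Setwise prompt to an LLM here."""
    return 0

def setwise_bubble_pass(query: str, ranking: list, c: int = 3) -> list:
    i = len(ranking) - 1
    while i > 0:
        window_start = max(0, i - c + 1)
        window = ranking[window_start:i + 1]
        best = window_start + llm_pick_best(query, window)
        # Swap the winner to the front of the window, as bubble sort would,
        # so it keeps bubbling upward in the next (overlapping) window.
        ranking[window_start], ranking[best] = ranking[best], ranking[window_start]
        i = window_start
    return ranking
```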
2310.09497#44
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
44
Figure 5: Query-specific evaluation for Navigation queries. We pass a text representation of the Olli Tree and the addresses of corresponding nodes within the Tree to an LLM alongside the user question. With the aid of few-shot prompting, the LLM then identifies the starting and ending nodes within the Olli Tree. Should the starting node not be explicitly mentioned within the question, the model instead utilizes the user’s current location within the Tree. We then execute a breadth-first search algorithm and relay the shortest path between the starting and ending nodes back to the user. [Figure residue: three stacked bar charts showing Incorrect vs. Correct percentages and a 5-point quality distribution.] Figure 6: Quantitative results display the distributions for classification accuracy, factual accuracy (via a binary scale), and qualitative rating (via a 5-point Likert scale) for user questions in the testing set. of a white space or newline character by the LLM). If there is a 1:1 correspondence, we output “Correct Classification”. Otherwise, we output “Incorrect Classification”.
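A minimal sketch of the breadth-first search described in the caption is below. Edges correspond to the arrow-key moves the user can make, and node addresses like "1.3.2" are the unique identifiers described earlier; the concrete edge model is our simplification.

```python
# Hypothetical sketch: BFS over the tree view's key-press graph, returning
# the shortest sequence of key presses between two node addresses.
from collections import deque

def shortest_path(edges: dict, start: str, goal: str):
    """edges maps an address to {key_press: neighbor_address}.
    Returns the key presses for the shortest route, or None."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, presses = queue.popleft()
        if node == goal:
            return presses
        for key, neighbor in edges.get(node, {}).items():
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, presses + [key]))
    return None

edges = {
    "1.3.2": {"left arrow": "1.3.1", "up arrow": "1.3"},
    "1.3.1": {"up arrow": "1.3"},
    "1.3": {"up arrow": "1"},
}
print(shortest_path(edges, "1.3.2", "1"))  # ['up arrow', 'up arrow']
```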
2310.09611#44
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
45
Table 3: Overall NDCG@10 obtained by methods on BEIR datasets. The best results are highlighted in boldface. Superscripts denote statistically significant improvements (paired Student’s t-test with 𝑝 ≤ 0.05 with Bonferroni correction). [Table body fragment: columns Covid, NFCorpus, Touche, DBPedia, SciFact, Signal, News, Robust04, Avg; row groups Flan-t5-large, Flan-t5-xl, Flan-t5-xxl; the BM25 baseline row begins .322 .442 .318 .436.]
2310.09497#45
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
45
The evaluation of our testing set yielded a classification accuracy of 88.5% (688/777). The 88 user queries that were incorrectly classified by the LLM comprised 52 queries that could not be classified at all (signifying overtly vague or impossible-to-answer questions) and 36 queries that were classified into a query type that did not correspond with the ground truth. 5.2 Question Response Quality Evaluation We employed GPT-4 to evaluate the quality of natural language responses, following recent studies [20, 37, 50]. Having identified trustworthiness and transparency as vital factors, we wanted to reflect this by emphasizing explanatory responses over more concise ones. We adopted a Likert-scale evaluation prompt, inspired by Liu et al. [37], which framed a response’s “correctness” in terms of its coherence with the ground truth. This coherence metric ranged from 1-5, but for clarity, we adapted it to the Likert Scale of Quality: [Very Poor, Poor, Fair, Good, Very Good].
2310.09611#45
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
46
[Figure 3 residue: scatter plots of NDCG@10 versus latency (s) on TREC DL 2019 and TREC DL 2020; legends distinguish flan-t5-large/xl/xxl under generation vs. likelihood (top row) and heapsort vs. bubblesort (bottom row); panels (a) Setwise, (b) Listwise.]
2310.09497#46
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
46
The evaluation prompt presented two responses: Response A and Response B, with Response A acting as the ground truth. GPT-4 was directed to assess the coherence of Response B in relation to Response A. To prevent bias, we refrained from revealing which response was the ground truth and which was our own creation. GPT-4 pinpointed five key elements (keywords/numbers/characters) from Response A and sought matches or their synonyms in Response B. Each match increased the coherence score by one. If the score deviated from the 1-5 range, GPT-4 reassessed. The results were formatted as “Score: coherence score”.
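A sketch of this evaluation loop is below. The prompt wording is our paraphrase of the procedure just described, not the study's verbatim prompt, and `call_gpt4` is a placeholder for an actual API call.

```python
# Hypothetical sketch of the Likert coherence evaluation.
import re

EVAL_PROMPT = """Response A: {a}
Response B: {b}

Identify five key elements (keywords, numbers, or characters) in
Response A. For each element that appears in Response B, or whose
synonym does, add one to the coherence score. The score must fall in
the range 1-5; if it does not, reassess. Output "Score: <coherence score>"."""

def likert_score(response_a: str, response_b: str, call_gpt4) -> int:
    output = call_gpt4(EVAL_PROMPT.format(a=response_a, b=response_b))
    match = re.search(r"Score:\s*(\d)", output)
    return int(match.group(1)) if match else 0

LIKERT = {1: "Very Poor", 2: "Poor", 3: "Fair", 4: "Good", 5: "Very Good"}
```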
2310.09611#46
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
47
Figure 3: Effectiveness and efficiency trade-offs offered by different approaches. (a – Setwise): The numbers in the scatter plots represent the number of compared documents 𝑐 at each step of the sorting algorithm. (b – Listwise): The numbers in the scatter plots represent the number of sliding-window repetitions 𝑟. are achieved without compromising effectiveness. Section 5.4 will discuss further approaches to improving efficiency. Table 5 shows calculations for the estimated cost of API calls; this estimation is obtained by taking the OpenAI GPT-4 cost structure and applying it to the number of tokens measured in our experiments. At the time of writing, OpenAI costs were $0.03/1,000 prompt tokens and $0.06/1,000 generated tokens. To estimate the token count if GPT-4 were used, we average the number of prompt tokens and generated tokens from Table 2 across Flan-T5 models. The setwise.bubblesort and pairwise.heapsort methods show comparable NDCG@10, but pairwise.heapsort is cheaper. On the other
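Since the cost structure is stated explicitly, the estimate reduces to a linear function of token counts. The figures in the example below are placeholders, not values from Table 5.

```python
# Worked example of the API-cost estimate: $0.03 per 1,000 prompt tokens
# plus $0.06 per 1,000 generated tokens.
def gpt4_cost(prompt_tokens: float, generated_tokens: float) -> float:
    return prompt_tokens / 1000 * 0.03 + generated_tokens / 1000 * 0.06

# e.g., a method averaging 120,000 prompt tokens and 2,500 generated
# tokens per query would cost about $3.75 per query:
print(round(gpt4_cost(120_000, 2_500), 2))  # 3.75
```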
2310.09497#47
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
47
Of the 777 user queries, 365, or 47%, were deemed “Very Good” by our evaluation pipeline. Responses rated as “Very Good” often restated the user’s question, formatted quantitative data correctly, and included contextual labels. For example, in response to the query “What country has the highest vaccination rate in the world?”, related to a choropleth map, VizAbility answered, “Malta has the highest vaccination rate in the world with 94.0%.” This response, more detailed than the ground truth “Malta has the highest vaccination rate according to the data,” demonstrates VizAbility’s ability to provide comprehensive answers. Moreover, by appropriately rating this response as “Very Good,” the evaluation pipeline effectively showcases its capability to judge response quality and depth. The distribution for Good, Fair, and Poor responses was 13.5% (105/777) and 10.9% (85/777), respectively. The pipeline evaluated 149 questions, or 19.5%, as being “Very Poor” in coherence with the ground truth. As will be discussed in the binary-scale evaluation, this statistic is significantly lower than the percentage of questions deemed “Incorrect” by the LLM operating under a binary
2310.09611#47
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
48
scale (31.7%). This might indicate a successful distinction between response quality in the Likert evaluation and factual correctness in the binary scale. For example, the response “House for sale has been decreasing over time from 2014-10-3 to 2021-2-12” to the query “What years have house for sale decreasing over time?” was rated “Poor” on the Likert scale but “Correct” on the binary scale. Compared to the ground truth, “2014, 2016, 2017, 2019, 2020, and 2021 all saw house for sale decrease”, the response, while factually accurate in its date range, did not explicitly list out the years. 5.3 Question Response Correctness Evaluation We aimed to evaluate the factual accuracy of VizAbility outputs, which is essential for its trustworthiness. Given the binary nature of accuracy, we used a binary scale; our evaluation method was similar to the Likert scale but with a key difference. Instead of five key elements, we had GPT-4 extract one or two key items from Response A to compare with Response B. This narrowed focus shifts the evaluation from the verbosity of the response to its factual accuracy.
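A compact sketch of this binary-scale variant is below; as with the Likert sketch, the prompt wording is our paraphrase and `call_gpt4` stands in for an actual API call.

```python
# Hypothetical sketch of the binary factual-accuracy check: only one or
# two key items from the ground truth are tested, so verbosity no longer
# influences the verdict.
BINARY_PROMPT = """Response A: {a}
Response B: {b}

Extract one or two key items (keywords or numbers) from Response A.
If Response B contains each item or a synonym of it, answer "Correct";
otherwise answer "Incorrect"."""

def binary_verdict(response_a: str, response_b: str, call_gpt4) -> str:
    output = call_gpt4(BINARY_PROMPT.format(a=response_a, b=response_b))
    return "Correct" if output.strip().startswith("Correct") else "Incorrect"
```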
2310.09611#48
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]