陈俊杰 committed on
Commit f161115 · 1 Parent(s): 9a958c5
Files changed (1)
  1. app.py +16 -16
app.py CHANGED
@@ -186,10 +186,10 @@ elif page == "Datasets":
     st.markdown("""
     <p class='main-text'>A brief description of each dataset we used, along with its original download link, is provided below:</p>
     <ul class='main-text'>
-    <li><strong>Summary Generation (SG): <a href="https://huggingface.co/datasets/EdinburghNLP/xsum">Xsum</a></strong>: A real-world single-document news summarization dataset collected from online articles by the British Broadcasting Corporation (BBC), containing over 220 thousand news documents.</li>
-    <li><strong>Non-Factoid QA (NFQA): <a href="https://github.com/Lurunchik/NF-CATS">NF_CATS</a></strong>: A dataset containing 12k natural questions divided into eight categories.</li>
-    <li><strong>Text Expansion (TE): <a href="https://huggingface.co/datasets/euclaise/writingprompts">WritingPrompts</a></strong>: A large dataset of 300K human-written stories paired with writing prompts from an online forum.</li>
-    <li><strong>Dialogue Generation (DG): <a href="https://huggingface.co/datasets/daily_dialog">DailyDialog</a></strong>: A high-quality dataset of 13k multi-turn dialogues. The language is human-written and less noisy.</li>
+    <li class='main-text'><strong>Summary Generation (SG): <a href="https://huggingface.co/datasets/EdinburghNLP/xsum">Xsum</a></strong>: A real-world single-document news summarization dataset collected from online articles by the British Broadcasting Corporation (BBC), containing over 220 thousand news documents.</li>
+    <li class='main-text'><strong>Non-Factoid QA (NFQA): <a href="https://github.com/Lurunchik/NF-CATS">NF_CATS</a></strong>: A dataset containing 12k natural questions divided into eight categories.</li>
+    <li class='main-text'><strong>Text Expansion (TE): <a href="https://huggingface.co/datasets/euclaise/writingprompts">WritingPrompts</a></strong>: A large dataset of 300K human-written stories paired with writing prompts from an online forum.</li>
+    <li class='main-text'><strong>Dialogue Generation (DG): <a href="https://huggingface.co/datasets/daily_dialog">DailyDialog</a></strong>: A high-quality dataset of 13k multi-turn dialogues. The language is human-written and less noisy.</li>
     </ul>
     <p class='main-text'>For your convenience, we have released <strong>the training set</strong> (with human-annotated results) and <strong>the test set</strong> (without human-annotated results) at <a href="https://huggingface.co/datasets/THUIR/AEOLLM">https://huggingface.co/datasets/THUIR/AEOLLM</a>, which you can easily download.</p>
     """,unsafe_allow_html=True)
@@ -213,13 +213,13 @@ elif page == "Evaluation Measures":
     <script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
     <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
     <ul class='main-text'>
-    <li><strong>Acc (Accuracy): </strong>The proportion of identical preference results between the model and human annotations. Specifically, we first convert individual scores (ranks) into pairwise preferences and then calculate consistency with the human annotations.</li>
-    <li><strong>Kendall's tau: </strong>Measures the ordinal association between two ranked variables. $$\tau = \frac{C-D}{\frac{1}{2}n(n-1)}$$
+    <li class='main-text'><strong>Acc (Accuracy): </strong>The proportion of identical preference results between the model and human annotations. Specifically, we first convert individual scores (ranks) into pairwise preferences and then calculate consistency with the human annotations.</li>
+    <li class='main-text'><strong>Kendall's tau: </strong>Measures the ordinal association between two ranked variables. $$\tau = \frac{C-D}{\frac{1}{2}n(n-1)}$$
     where:
     C is the number of concordant pairs,
     D is the number of discordant pairs,
     n is the number of items.</li>
-    <li><strong>Spearman's Rank Correlation Coefficient: </strong>Measures the strength and direction of the association between two ranked variables. $$\rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}$$
+    <li class='main-text'><strong>Spearman's Rank Correlation Coefficient: </strong>Measures the strength and direction of the association between two ranked variables. $$\rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}$$
     where:
     \(d_i\) is the difference between the ranks of corresponding elements in the two lists,
     n is the number of elements.</li>
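For reference, the three measures above can be computed as in the sketch below, assuming `system` and `human` are equal-length lists of scores (or ranks) for the answers to a single question; the variable names are illustrative and not part of app.py:

```python
# Sketch of the three evaluation measures; illustrative names only.
from itertools import combinations
from scipy.stats import kendalltau, spearmanr

def pairwise_accuracy(system, human):
    """Acc: turn per-answer scores into pairwise preferences and
    measure agreement between the two raters."""
    agree = total = 0
    for i, j in combinations(range(len(human)), 2):
        # Sign of each rater's preference on the pair (i, j): -1, 0, or +1.
        s = (system[i] > system[j]) - (system[i] < system[j])
        h = (human[i] > human[j]) - (human[i] < human[j])
        agree += (s == h)
        total += 1
    return agree / total if total else 0.0

system = [4, 2, 5, 1]  # e.g. scores assigned by an evaluation method
human = [3, 2, 4, 1]   # human-annotated scores for the same answers

print(pairwise_accuracy(system, human))
tau, _ = kendalltau(system, human)  # Kendall's tau
rho, _ = spearmanr(system, human)   # Spearman's rho
print(tau, rho)
```

Note that `scipy.stats.kendalltau` defaults to the tau-b variant, which corrects for ties; the formula quoted above is the tau-a form.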
@@ -231,11 +231,11 @@ elif page == "Data and File format":
     <p class='main-text'>We will follow a format similar to the one used by most <strong>TREC submissions</strong>, shown below. White space is used to separate columns. The width of the columns is not important, but each line must have exactly five columns with at least one space between them.</p>
     <p class='main-text'><strong>taskId questionId answerId score rank</strong></p>
     <ol class='main-text'>
-    <li>the first column is taskId (indexes the different tasks)</li>
-    <li>the second column is questionId (indexes the different questions within the same task)</li>
-    <li>the third column is answerId (indexes the answers provided by different LLMs to the same question)</li>
-    <li>the fourth column is score (the score the participant assigns to the answer)</li>
-    <li>the fifth column is rank (the rank of the answer among all answers to the same question)</li>
+    <li class='main-text'>the first column is taskId (indexes the different tasks)</li>
+    <li class='main-text'>the second column is questionId (indexes the different questions within the same task)</li>
+    <li class='main-text'>the third column is answerId (indexes the answers provided by different LLMs to the same question)</li>
+    <li class='main-text'>the fourth column is score (the score the participant assigns to the answer)</li>
+    <li class='main-text'>the fifth column is rank (the rank of the answer among all answers to the same question)</li>
     </ol>
     """,unsafe_allow_html=True)
 elif page == "Submit":
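A small sketch of writing and sanity-checking a run file in this format; the file name and rows below are made up for illustration:

```python
# Sketch: write and validate a five-column TREC-style run file.
# The rows and file name are illustrative, not real data.
rows = [
    # (taskId, questionId, answerId, score, rank)
    (1, 1001, 1, 4, 2),
    (1, 1001, 2, 5, 1),
]

with open("aeollm_run.txt", "w") as f:
    for row in rows:
        f.write(" ".join(str(v) for v in row) + "\n")

# Each line must contain exactly five whitespace-separated columns.
with open("aeollm_run.txt") as f:
    for line_no, line in enumerate(f, 1):
        assert len(line.split()) == 5, f"line {line_no}: expected 5 columns"
```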
@@ -250,10 +250,10 @@ elif page == "LeaderBoard":
     <div class='main-text'>
     This leaderboard shows the performance of the <strong>automatic evaluation methods of LLMs</strong> submitted by the <strong>AEOLLM team</strong> on four tasks:
     <ul class='main-text'>
-    <li>Dialogue Generation (DG)</li>
-    <li>Text Expansion (TE)</li>
-    <li>Summary Generation (SG)</li>
-    <li>Non-Factoid QA (NFQA)</li>
+    <li class='main-text'>Dialogue Generation (DG)</li>
+    <li class='main-text'>Text Expansion (TE)</li>
+    <li class='main-text'>Summary Generation (SG)</li>
+    <li class='main-text'>Non-Factoid QA (NFQA)</li>
     </ul>
     </div>
     """, unsafe_allow_html=True)
 