zli12321 committed
Commit 09e9f9c · verified · 1 Parent(s): 113072f

Update README.md

Files changed (1)
1. README.md (+18 -16)
README.md CHANGED
@@ -35,7 +35,7 @@ The Wiki dataset consists of 14,290 articles spanning 15 high-level and 45 mid-l
 
 Please cite us if you find the data and the papers useful, and do not hesitate to create an issue or email us if you have problems!
 
-If you find LLM-based topic generation has hallucination or instability:
+If you find that LLM-based topic generation has hallucination or instability, and that coherence is not applicable to LLM-based topic models:
 ```
 @misc{li2025largelanguagemodelsstruggle,
 title={Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs},
@@ -49,21 +49,6 @@ If you find LLM-based topic generation has hallucination or instability:
 ```
 
 If you use the human annotations or preprocessing:
-```
-@inproceedings{hoyle-etal-2021-automated,
-title = "Is Automated Topic Evaluation Broken? The Incoherence of Coherence",
-author = "Hoyle, Alexander Miserlis and
-Goel, Pranav and
-Hian-Cheong, Andrew and
-Peskov, Denis and
-Boyd-Graber, Jordan and
-Resnik, Philip",
-booktitle = "Advances in Neural Information Processing Systems",
-year = "2021",
-url = "https://arxiv.org/abs/2107.02173",
-}
-```
-
 ```
 @inproceedings{li-etal-2024-improving,
 title = "Improving the {TENOR} of Labeling: Re-evaluating Topic Models for Content Analysis",
@@ -87,6 +72,23 @@ If you use the human annotations or preprocessing:
 }
 ```
 
+If you want to use the claim that coherence does not generalize to neural topic models:
+```
+@inproceedings{hoyle-etal-2021-automated,
+title = "Is Automated Topic Evaluation Broken? The Incoherence of Coherence",
+author = "Hoyle, Alexander Miserlis and
+Goel, Pranav and
+Hian-Cheong, Andrew and
+Peskov, Denis and
+Boyd-Graber, Jordan and
+Resnik, Philip",
+booktitle = "Advances in Neural Information Processing Systems",
+year = "2021",
+url = "https://arxiv.org/abs/2107.02173",
+}
+```
+
+
 If you evaluate ground-truth evaluations or stability:
 ```
 @inproceedings{hoyle-etal-2022-neural,